diff --git "a/deduped/dedup_0358.jsonl" "b/deduped/dedup_0358.jsonl" new file mode 100644--- /dev/null +++ "b/deduped/dedup_0358.jsonl" @@ -0,0 +1,62 @@ +{"text": "Physicians' awareness of their important role in defusing the obesity epidemic has increased. However, the number of family practitioners who treat obesity problems continues to be low. Self-efficacy refers to the belief in one's ability to organize and execute the courses of action required to produce given attainments. Thus, practitioners who judge themselves incapable of managing obesity do not even try. We hypothesized that practitioners' self-efficacy and motivation would be enhanced as a result of participating in an interactive course designed to enrich their knowledge of obesity management.t-test. The interviews were analyzed by qualitative methods.Twenty-nine family practitioners participated in the course, which was accompanied by qualitative interviews. The difference between the physicians' pre-course and post-course appraisals was tested by paired p < 0.0005). A deeper insight on the practitioners' self-efficacy processes was gained through reflection of the practitioners on their self-efficacy during the interviews.Post-course efficacy appraisals were significantly higher than pre-course appraisals (Up-to-date information and workshops where skills, attitudes and social support were addressed were important in making the program effective. In tht survey in 2000.Guidelines published in 1996 for the management of obesity recommended setting a modest weight loss and weight maintenance, rather than a targeted ideal weight, as goals . Lately,Many chronic health problems are exacerbated by unhealthy behaviors and harmful environmental conditions. From the psychological perspective, healthful lifestyles and environmental conditions may yield large health benefits. 
The widespread adoption of a healthier lifestyle rather than medical technologies has resulted in a substantial decline in premature mortality and morbidity. Self-efficacy refers to the belief in one's ability to organize and execute the courses of action required to produce a given attainment. Self-efficacy is measured by the strength of a subject's beliefs in the ability to execute requisite activities. Social cognitive theory distinguishes among three basic processes of personal change: the adoption, general usage and maintenance over time of new behavioral patterns. To our knowledge and after a literature search, no intervention studies have yet been conducted to change the self-efficacy of FPs in Israel towards treating obese people. Among other courses for which physicians usually receive credits towards their annual professional training, a course was offered by the Israeli Academic Medical Council. The course was initiated by the Israeli Association of FPs and recommended by the Medical Professional Journal of FPs in Israel. Registration in the course was open to all FPs. The present research was a preliminary study; accordingly, the study sample comprised the first group of FPs who attended the new course. Though the group was small, investigating the contribution of a new program to FPs' self-efficacy was considered important for future research. The objectives of the course were to enrich the knowledge of FPs with up-to-date information on obesity and to raise their motivation to treat it. The study objective was to determine whether an interactive course would raise the self-efficacy of FPs to treat obesity. It was hypothesized that the self-efficacy of FPs would be enhanced as a result of participating in an interactive obesity-treatment course. Twenty-nine FPs chose to participate in the course along with other Continuing Medical Education (CME) courses. 
All participants work as FPs in public health care clinics throughout the country. This study was based on a one-group, pre-course \u2013 post-course test design, without a control group. It was accompanied by qualitative interviews to validate the results of the data analysis. The strength of the efficacy beliefs of the FPs to treat obesity was estimated by using a five-point Likert-type questionnaire containing nine items (Table). Cronbach's alpha for the reliability of the tests was \u03b1 = 0.88 for the pre-course test and \u03b1 = 0.90 for the post-course test (Table). The following aspects of structure validity were examined: (a) for the content aspect, the items matched the domain concept map. The final version was rewritten on the basis of experts' and researchers' comments. The items were finally presented in a nonsequential order to make subjects think while reading the questions and not relate to earlier answers; and (b) for the substantive aspect, FPs were interviewed regarding their self-efficacy to assure that the questions were clear and did not require rewording. The interview was a 30-minute open interview. A physician was given open questions such as: \"Describe your feelings and thoughts of efficacy to treat obesity\" or \"How did the self-control lecture help your efficacy perceptions?\". The subject was free to speak openly on the issue. The interview was actually a verbal reflection of the thoughts and feelings of the subjects. It was recorded and later transcribed by the researchers. The interview was analyzed by the constant comparative qualitative method of analysis. The present study design used a paired t-test in order to compare pre- and post-course strength of efficacy beliefs for treating obesity. Participants answered the questionnaire before the start of the course and at the end of the last session, and a few days later participated in an open interview. 
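The two statistics named above (Cronbach's alpha for questionnaire reliability and the paired t-test for the pre/post comparison) can be sketched in a few lines. This is an illustrative sketch only: the scores below are hypothetical placeholders, not the study's data.

```python
import math
from statistics import mean, stdev, pvariance

def paired_t(pre, post):
    # Paired t statistic: mean of the per-subject differences divided by
    # the standard error of those differences.
    diffs = [b - a for a, b in zip(pre, post)]
    return mean(diffs) / (stdev(diffs) / math.sqrt(len(diffs)))

def cronbach_alpha(items):
    # items: one list of subject scores per questionnaire item.
    # alpha = k/(k-1) * (1 - sum(item variances) / variance of totals)
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    return k / (k - 1) * (1 - sum(pvariance(i) for i in items) / pvariance(totals))

# Hypothetical 5-point Likert appraisals for four practitioners
pre = [3, 2, 4, 3]
post = [4, 4, 5, 3]
print(round(paired_t(pre, post), 2))  # t statistic for the pre/post change
```

The t statistic would then be compared against the t distribution with n − 1 degrees of freedom to obtain the p value reported in the paper.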
The course was interactive and contained 12 clinical and psychological lectures given by experts in all subjects relating to obesity (Table). FPs filled in clinical report forms and presented the cases they treated and the tools they used. The cases were discussed during the course and feedback was given by colleagues and experts. The course supplied the FPs with knowledge, skills and psychological tools to treat obesity, such as: food and physical activity diaries, decision-making tools, self-evaluation tools, self-report tools, self-monitoring tools, persuading tools, stimulus control tools and counter-conditioning tools. Medical knowledge and skills were not examined. The course attempted to raise the self-efficacy of FPs by creating a supportive atmosphere, providing feedback, recalling successful experiences, learning from models, bringing patients into the workshops, discussing sociological and psychological problems and allowing reflection of thoughts and emotions. Studies have shown that reflection enhances metacognitive processes such as self-monitoring, self-evaluation, self-reaction and attribution [22-25]. The results of the t-test and the results of the qualitative analysis indicate that the criteria derived from the interviews matched those of the questionnaire. When speaking openly, FPs addressed the same issues that made up the domain theoretical concept map. Several researchers have suggested that quantitative efforts in the study of self-efficacy should be complemented by qualitative studies aimed at gaining a deeper understanding of attitudes and emotions. The study showed that acquisition of knowledge and skills enables a person to meet personal standards of merit that tend to heighten beliefs of personal efficacy. The impact of the questions analyzed in this study extends beyond the issues asked by the questionnaire. New insights were gained through qualitative analysis: FPs analyzed the process they experienced during the course. 
They described how their efficacy beliefs were enhanced through reflection, feedback and the supportive climate of the course. Studies have shown that reflection enhances self-efficacy processes, since self-appraisal of efficacy is structured by experience and reflective thought. In summary, an effective program of widespread change in health practices includes four major components. The first is informational and intended to increase the physician's awareness and knowledge of the subject. The second involves development of the skills needed to translate informed concerns into effective action. The third is aimed at building a robust sense of self-efficacy to support the exercise of control in the face of difficulties that inevitably arise. This is achieved by providing repeated opportunities for guided practice and corrective feedback in applying the skills in simulated situations that people are likely to encounter. The final component involves creating social support for desired changes. The present program contained all these components. The limitations of the study were the small number of participants and the reliance on one motivated group of FPs. These limitations result from the fact that this was the first course organized by the Israeli Association of Family Physicians on the subject of obesity. It was important to study the effect of the program on FPs' self-efficacy for future continuing education and research. We recommend studying the effect of interactive courses on the lifestyle and weight loss of FPs. Future research should consider randomized samples from larger courses and analyses of the correlation between course success and actual FP performance, FPs' self-efficacy and treatment outcomes, e.g. BMI change, lipids and blood pressure. 
We also recommend studying the differences in obesity treatment outcomes and in general health feelings between patients whose FPs attended obesity courses and patients whose FPs did not. Practical recommendations for Continuing Medical Education planners would be to focus on workshops rather than on lectures, to strengthen those processes in which the FPs felt less efficacious, and to have guidance from endocrinology experts as well as psychology and education specialists to improve communication with patients and their families in an effort to enhance motivation to lose weight. It is also recommended to bring patients to the workshops to reflect on the treatment they have received. The author(s) declare that they have no competing interests. SK and AF conceived and designed the study, participated in the collection, analysis and interpretation of data and drafted the manuscript. SV participated in the statistical analysis, interpretation of data and drafting of the manuscript. SP participated in the design of the study, data collection and interpretation. All authors read and approved the final manuscript. The pre-publication history for this paper can be accessed here:"} +{"text": "The quality of psychosocial assessment of children in consultations varies widely. One reason for this difference is the variability in effective mental health and communication training at undergraduate and post-qualification levels. In recognition of this problem, the Royal College of Paediatrics and Child Health in the United Kingdom has developed the Child in Mind Project, which aims to meet this deficit in medical training. 
This paper describes the evaluation of a workshop that explored the experiences and expectations of health care professionals in the development of a training programme for doctors. The one-day inter-professional workshop was attended by 63 participants, who were invited to complete evaluation forms before and immediately after the workshop. The results showed that the workshop was partially successful in providing an opportunity for an inter-professional group to exchange ideas and influence the development of a significant project. Exploring the content and process of the proposed training programme and the opportunity for participants to share experiences of effective practice were valued. Participants identified that the current culture within many health care settings would be an obstacle to successful implementation of a training programme. Working within existing training structures will be essential. Areas for improvement in the workshop included a clearer statement of goals at the outset and a more suitable environment for the number of participants. The participants made a valuable contribution to the development of the training programme, identifying specific challenges. Inter-professional collaborations are likely to result in more deliverable and relevant training programmes. Continued consultation with potential users of the programme \u2013 both trainers and trainees \u2013 will be essential. Childhood mental health disorders are common. In a Department of Health survey of 5- to 15-year-olds in England and Wales, 5% had conduct disorder, 4% had emotional disorders and 1% were rated as hyperactive. Knowledge of psychosocial and mental health problems is only part of the patient assessment process. The ability to communicate effectively with the patient is pivotal for accurate assessment. New graduates in the United Kingdom are expected to be able to \"communicate clearly, sensitively and effectively with patients and their relatives\". 
In order to address these two issues, the Child in Mind Project at the Royal College of Paediatrics and Child Health (RCPCH) aims to develop the psychosocial awareness and interviewing skills of paediatricians in training. This will improve the assessment and management of psychosocial issues that affect children. To achieve this goal, a modular training programme in child and adolescent mental health is proposed. The programme will be piloted with senior house officers (SHOs) on paediatric rotations. Building additional modules into existing SHO training programmes is problematic, since clinical and other commitments already consume a shrinking working week. Therefore, a key consideration in developing the training modules is to work within existing SHO training by maximising both planned and opportunistic teaching and learning in psychosocial care and interviewing skills appropriate for working with children, adolescents and their families. At an early stage in the project, the team thought it appropriate to elicit the ideas and experiences of interested professionals as well as to recruit individuals to help with developing the project. An open invitation to attend a one-day workshop held at the RCPCH was advertised in relevant professional newsletters for paediatricians, child psychiatrists and child psychologists, and by word of mouth in other disciplines. The invitation stated that the workshop would elicit the views of an inter-professional group interested in developing a training programme to improve the psychosocial assessment and child-centred communication skills of doctors. Participants were chosen from twice the number of applicants to represent a balance of disciplines and geographical spread. Selection within these criteria was made on the order of receipt of applications. 
Numbers were limited by the capacity of the College facilities. This paper describes the evaluation of the inter-professional workshop in the development of the training programme. The opening plenary session introduced the Child in Mind Project together with the aims of the workshop (Table). Immediately after lunch, a plenary session was held in which the key issues from the morning sessions were shared with the entire group. The four parallel sessions that followed focused on core process issues in training: promoting role-play, introducing technology, integrating new with existing programmes, and assessment. The final plenary session provided an opportunity for groups to share their experiences, and then a wider discussion considered key issues from both morning and afternoon sessions. An action plan was devised based on this discussion and was shared with participants, providing an opportunity for participants to continue to be involved in the project. All participants were invited to complete evaluation forms immediately before and after the workshop. The pre-workshop form explored participants' reasons for attending, their expectations, their most important issue in relation to the workshop, their experiences in learning about communication and education, and their current role, age and sex (Figure). Sixty-three participants attended the workshop, of whom 23 were clinical psychologists (36.5%), 18 paediatricians (28.6%), 9 psychiatrists (14.3%), 9 nurses (14.3%) and one representative from each of the following professions: social work, education, play therapy and occupational therapy (6.4%). Forty-one participants were female (65.1%) and 22 were male (34.9%). Approximately twenty participants (31.8%) left the workshop immediately prior to the closing plenary session. This was unexpected and apparently not triggered by anything more than a need to catch commuter and intercity trains; that is, the exodus seemed unrelated to the quality of the meeting. 
Twenty-two participants (34.9%) completed the pre-workshop form and 28 completed the post-workshop form (44.4%). Of the 22 respondents completing the pre-workshop form, 17 were female (77.3%) and 5 were male (22.7%), with an age range of 34 to 58 years and a mean age of 45. Although the group was inter-professional, the respondents were predominantly medical, with 10 paediatricians (45.5%), 5 child and adolescent psychiatrists (22.7%), 4 clinical psychologists (18.2%), one occupational therapist (4.5%) and one educator (4.5%). Nineteen (86.4%) participants reported previous formal training in communication as part of undergraduate, post-graduate and continuing professional development. Training included theoretical and skills practice within and outside of paediatrics at fundamental and advanced levels. Eighteen participants (81.1%) reported at least some previous formal training in education. Participants' reasons for attending the workshop were diverse and included a strong interest and/or experience in the major themes of the workshop \u2013 assessment and management of psychosocial issues in paediatrics, paediatric interviewing and training. To enhance the \"voice of the child\" in paediatrics. (4) I have a long standing interest and involvement in the teaching of junior paediatricians the skills of communication, family therapy and management of behavioural and emotional issues. I have been trying to find ways of formalizing mental health training for paediatricians. (1) Because I have a very real interest in improving the awareness and training of paediatricians in the psychosocial aspects of paediatrics. (7) To participate in the development of paediatric psychological training. (17) To help develop teaching of mental health issues in childhood. (18) Having worked in the area of paediatric psychology for some years, I am particularly interested in developing the awareness of paediatricians re psychological issues. 
(19)To contribute to the planning of teaching paediatricians how to tackle social and emotional issues. (20)To better understand needs of paediatricians for training in psychological needs of children and families. (11)Some participants wanted to develop existing local programmes.To feedback ideas to our paediatric College/Clinical Tutor who did not get a place at the course. (12)To try and improve our in-house teaching of psychological factors in paediatrics. (13)One participant acknowledged a deficit in current training.I realize we are generally very poor at integrating psychological aspects of child and family health into the busy acute training programme. (2)The second question asked participants what they were expecting from the workshop. Various themes emerged and included the generation of ideas for the Child in Mind project generally and specifically in the development of training materials. Participants expected to be able to exchange ideas on what and how to change existing training and there was an expressed desire not only to influence these developments but to ensure they are deliverable.To meet, share and hopefully influence colleagues. (17)To participate in putting together relevant training modules and to have a voice in the future training of paediatricians. (19)To ensure that programmes will be acceptable. To broaden ideas around what to include in communication programme. (5)To offer my experience in direct work with children, adolescents and families. (9)An understanding of where the project is so far \u2013 aims, methods, plans. A chance to contribute. (14)A second theme related to expectations of inter-professional collaboration both in the development and delivery of the training module and the third and overlapping theme focussed on the opportunity for networking.Decrease inter-professional tension and enhance collaboration. 
(4) Participants' perceptions of the most important issue to be addressed in the workshop reflected their different expectations. Training issues were dominant and focused on both content and process. Content issues included thinking about ways of raising the significance of the assessment and management of psychological problems in children and adolescents, together with the need to identify the child and adolescent's perspective separately from their family's and health care professionals'. Developing a culture of respect for child and family. Accessing children's thoughts and feelings independently of their parents or other professionals. (4) Process issues included identifying ways to maximise existing expertise, to use limited resources efficiently, to encourage participation from paediatric trainers and trainees, and to consider assessment and evaluation as integral to the training programme. Twenty-eight (44.4%) participants completed the post-workshop form. Demographic data were not collected. The \"did not attend\" option on the evaluation form is not included in the Table. Using a 3-point scale from not at all, through partially, to completely, participants rated the helpfulness of the sessions in meeting the objectives of the workshop (Table). Respondents were asked to identify the most important issues that they thought had been addressed in the workshop. Several participants wrote of the need to change the existing culture to one in which psychosocial assessment and communication skills are valued. There was also acceptance of diversity in workplaces and the training offered therein. To ensure that the new training programme is deliverable, it must be sufficiently flexible to fit within these diverse settings, and it must be evaluated. The need for training both supervisors and trainers was considered requisite for implementing any programme. Learning about the hospital paediatric culture and previous difficulties of teaching SHOs and getting the culture right. 
(20) The importance of mental health teaching/learning for all doctors caring for children/families. (2) Delivering training for trainers and that the child mental health programme needs to be integrated into existing paediatric training. (3) Need to address appropriate training and supervision of SHOs and for consultants to be trained first themselves. (5) The importance of introducing a general shift. The extreme inflexibility of the system as a whole. (9) The realities of teaching busy SHOs who are preoccupied with passing exams. (14) Changing culture of consultants to understand importance of training for mental health and communication skills. (24) Some participants valued the opportunity to learn about existing effective practices, while others gave consideration to who should teach, how, and that whatever is taught must be relevant. The importance of taking a full history and empowering SHOs to ask difficult questions, to reflect on their practice and to have supervision in order to understand what to do with the information they have gathered. (17) Introduction of video review of consultations/interactions with children and parents to paediatrics. (22) Participants used a 3-point scale from not at all, through partially, to completely, to rate the degree to which their expectations were met. Eight participants (28.6%) reported that their expectations were completely met, while twenty (71.4%) reported that they were partially met. In response to being asked what worked well in the workshop, participants identified the opportunity to exchange ideas with colleagues who have different levels of experience, work in different settings and have different professional backgrounds. The group sizes for sessions were valued, since they were sufficiently small to enable several participants to express their views and large enough for diverse experiences to be shared. 
The plenary sessions were helpful in summarising group sessions and consolidating broad ranging issues.The enthusiasm of delegates was thought to contribute to the success of the workshop together with the relaxed atmosphere and the genuine desire of participants and organisers to change existing practices.Most participants recorded at least one response to being asked how the workshop could be improved. The single most frequently cited issue related to the venue. Groups were too large for their rooms and for two groups, their presence in the same large room impeded discussion.Other improvements included stronger facilitation in some groups to ensure all views were heard and that the discussion stayed focused. Providing delegates with basic information prior to the workshop on the aims, objectives and content of group sessions could have improved the quality of the discussions. Participants expressed a desire to attend the group sessions of their choice. One participant thought that the workshop was too rigidly organised between content and process and that this limited creativity in thinking about training. Two participants suggested including SHOs for whom the training will initially be delivered.I felt unable to contribute much of my experience and knowledge with the tight preset agenda. I was particularly wanting to discuss raising awareness of child protection issues, and working with children in complex and or chaotic home situations. Also were there many current paediatric SHOs here today. If not, there should have been. If so, could they have contributed more? (17)Sign up for preferred workshops on arrival \u2013 I don't remember what preferences I indicated but they certainly weren't the ones I was allocated. More general paediatricians \u2013 meeting seemed to be dominated by psychiatrists/psychologists (23)The workshop was planned with definite areas for discussion and a very strong split between content and process. 
This defined structure led to cramping of ideas (27) Maybe including some real SHOs just for the occasional reality check (14) The workshop was valuable in contributing to the development of the Child in Mind Project training programme. The content and process of the programme were explored, and several issues emerged that will need serious consideration by the Child in Mind project team. These include the strongly expressed need for a change in culture within the health care system that will embrace child-centred mental health care. The magnitude of change required is uncertain but may well be extensive, given that a study based in general practice in the Netherlands reported that the inclusion of the child in all phases of the consultation was \"limited\", with parents frequently speaking for the child, the child not questioning the parent, and the GP supporting this behaviour by minimal exploration of meta-communicative behaviours. The authors described this process, resulting in a dyadic emphasis, as being \"institutionally co-constructed\". Ways to change the health care culture in the United Kingdom were not explicitly identified. However, the project team's desire to implement the training programme in a few centres that were already enthusiastic suggests that creating centres of best practice is inherent in their approach for change. This supports theoretical approaches for effective institutional change. The inter-professional nature of the workshop was beneficial in exchanging views from different perspectives. This supports the findings of the few studies in medical curriculum development that report this approach. Although consultation with other stakeholders was not identified by this group, it is important that they are also included in the development and evaluation of the training programme. Community participation \u2013 especially of key stakeholders \u2013 is often lacking in all phases of professional education. 
In order that the training can best meet the needs of its intended targets, their voices should be considered. The medical education literature strongly supports the inclusion of patient voices in all aspects of curricula development. The importance of training the trainers of the programme was identified as key to the success of implementation. Although agreement was not sought, there was a powerful sentiment that trainers should be inter-professional. This notion may also address cultural barriers that relate to doctors' lack of understanding of other health care professionals' roles, by exposing them to trainers who have expertise in mental health assessment and/or communication skills. The nature of support provided to trainers may vary, reflecting the diverse settings in which the training programme will eventually be implemented. There appeared to be agreement that the workshop was not an appropriate forum for identifying the details of the content and process of the training programme; rather, core issues were identified in psychosocial assessment, mental health and communication. Effective approaches to learning patient-centred communication skills are labour-intensive (videotaped interviews with feedback). The purpose of eliciting participants' reasons for attending and their expectations of the workshop is to help make sense of their satisfaction afterwards. Although the invitation outlined the purpose of the workshop, participants came with varied views that to some extent reflected their level of experience, their unique professional perspective and their interpretation of the information provided in the invitation. However, there was an overarching expectation that each would contribute to the development of a training programme. 
It is important to reflect on the reasons that only 28.6% of the participants reported that their expectations were completely met. The suggestions given for improvements offer insight into why more participants did not have their expectations met. Restating the project team's aims at the commencement of the workshop may have been helpful. Although some participants felt able to express their views, others were unable to do so because of the structure of the sessions, the way in which they were facilitated and the settings in which the discussions took place. Providing a more open forum for discussion may have generated different ideas. The breadth and depth of the \"culture change\" some participants consider essential for implementation of the training programme is extensive, and this is likely to have influenced their judgement as to what could be realistically achieved both in the workshop and in the training programme. The physical limitations of the workshop impeded discussion in some groups. Although group sizes were thought appropriate, providing spaces in which the groups can work well will need to be considered in future workshops. There are several limitations with this evaluation project, some of which were beyond the control of the evaluator (DN). \u2022 Higher response rates may have improved the quality of the evaluation. It is possible that respondents differed from non-respondents, which may influence the results in some way, although it is difficult to speculate how. \u2022 Scheduling the evaluation forms as part of the workshop may have increased response rates and may also have helped participants to focus on their expectations immediately before the meeting and then afterwards in considering what they achieved. \u2022 The low response rate in relation to the final plenary session may be explained by the request to complete the forms immediately after the workshop. It is possible that some participants wanted more time to reflect on their experiences. 
It may have been more helpful to contact participants after the workshop.\u2022 Further, the responses may not represent the diversity of opinions expressed during the workshop, nor were the professional groups equally represented in the evaluation forms. For example, no nurses completed the pre-workshop evaluation form. It is unclear why this was the case, as all respondents were equally encouraged to complete the forms. Future evaluations of workshops attended by disparate groups may consider:\u2022 Scheduling the completion of evaluation forms into the workshop timetable\u2022 Using identifiers to link pre- and post-workshop evaluation forms\u2022 Following up participants some time after the workshop to elicit their considered views. Despite these methodological weaknesses, the evaluation offers useful insights into the management of an inter-professional workshop for curricula development. The workshop provided the Child in Mind project team with valuable insight relevant to the development of a deliverable training programme in mental health and communication. It was an adequate forum in which the ideas and experiences of an interested inter-professional group could contribute, although there were several ways in which it could have been improved. The diversity of the settings in which the programme will be delivered was highlighted, as was the need for cultural change and support not only for trainees but also for the trainers themselves. Continued consultation with this inter-professional group, together with broadening the consultation process to include other stakeholders, may lead to the development of an effective training programme. Commencing the programme in sites with clinicians who are receptive to change of this nature is likely to influence its success. Evaluation will continue to be essential to monitor the process. 
The enthusiasm of the participants needs to be harnessed to ensure that the long-term goals of the project team will be met. The author(s) declare that they have no competing interests. All authors contributed to each phase of the project, although DN took a lead role in writing the paper. DN was responsible for the evaluation, while ST and QS were instrumental in the development of the workshop. The pre-publication history for this paper can be accessed here:"} 
The history of many European countries is such that Europe has much stronger ties with Africa than does the United States, so it makes political sense for the European Union to fund research that provides a springboard for European researchers to compete effectively with US scientists.Crucially, the EDCTP was also set up to enable European and African scientists to work together as equal partners. There is increasing recognition that the paternalistic, colonial attitude that pervaded \u201ctropical medicine\u201d in the past just will not do. The EDCTP hoped to change that by having a Partnership Board that contains equal numbers of African and European representatives. However, the EDCTP Assembly, which contains a representative from each of 14 EU member states but none from African countries, has the power to veto the decisions of the Partnership Board, which is supposedly the scientific decision-making authority.Doing clinical trials in Africa is far from easy. There are too few adequately resourced research centers, and those that do consistently perform well are oversubscribed. Therefore, there is a clear need for \"capacity building\"\u2014development of a research infrastructure, in terms of both equipment and personnel, that is capable of coping with the challenges of clinical trials. The EDCTP hopes to contribute to this essential endeavor by funding clinical trials that are sustainable in the long term. In particular, it believes that the best way to train a new generation of African scientists is by teaching them on the job, that is, involving them fully in the planning and execution of the trial, rather than flying in European experts who leave as soon as the trial is finished.A commitment from European researchers to be engaged for the long term is essential for the success of these projects. In addition, partnerships need to be brokered with national programs in Africa to ensure that the new capacity can be sustained over time. 
The end goal is to produce centers of excellence that are run by Africans doing internationally recognized research that conforms to Good Clinical Practice guidelines. But this will only happen if African researchers are treated as equal partners and are allowed to be fully engaged in the projects that are taking place in their countries. So can the EDCTP work, or is it doomed to failure? In many ways the organization has a great deal going for it. Although the budget of €400 million spread over five years is tiny considering the combined burden of HIV/AIDS, tuberculosis, and malaria, it is important to remember that it is the biggest single European project for clinical trials in Africa. In many ways the EDCTP is a demonstration project: if some success can be achieved, it is very likely that additional funds will follow. The project is certainly strengthened by the involvement of Pascoal Mocumbi, the former prime minister of Mozambique, as High Representative of the project. Mocumbi is highly respected by the global-health community and carries considerable weight with African politicians. Mobilization of political will within Africa will be essential if research capacity is to be sustained for the long term. On the downside, it seems clear to most insiders that the management structure needs to be radically changed and partnership with other organizations needs to be improved. The EDCTP Assembly met on October 28 and 29 to discuss these issues and to elect a new leader. At the time this editorial went to press, there was still no public announcement of the outcome of this meeting. In addition, the political infighting that pervades European politics at all levels needs to be controlled, or at least managed effectively. 
This might be a tall order, but it is essential if this worthwhile and high-profile project is to succeed."} +{"text": "In response to the Blackburn and Rowley essay on the President's Council on Bioethics, several thought-provoking opinions on ethical challenges in biomedical research are expressed by prominent stakeholders It is a great pity when vested interest and dogma dominate what should be a well-informed and rational debate. The essay by In the United Kingdom, we have had an almost continuous debate since the mid 1980s on topics relating to research on early human embryos. I myself have been involved in some of this debate, especially over the last few years, relating to human embryonic stem cells and nuclear transfer. I will not dwell on the political outcomes of this debate, which are widely known, but I want to stress that it has been one that has been very well informed, with contributions from all sides, including many highly respected moral philosophers and bioethicists. These include notable individuals such as Dame Mary Warnock and bodies such as the Nuffield Bioethics Council, who have been especially valuable because of their independence.So why are the conclusions reached by bioethicists in the UK, who are generally supportive of research involving human embryos, different from those of the President's Council on Bioethics? The same scientific information is available on both sides of the Atlantic. The rules of logic are the same. So it has to be the way the information is interpreted or filtered. This implies bias or vested interest or the input of dogma that is based on belief rather than rational thought. Some examples of this are discussed in the Blackburn and Rowley essay, and they are very worrying. 
The scaremongering about preimplantation genetic diagnosis is ridiculous\u2014simple mathematics shows that it is implausible to use this technique to screen the usual number of embryos obtained in one round of in vitro fertilisation for more than two or three genetic traits, while we know that intelligence must rely on many more. I am a great fan of science fiction, but I can recognise it as such. I worry that some members of the President's Council seem unable to do this. Many of these daft ideas were already promoted in a book by Francis . It is certainly very unfortunate if the input of real science in the Council is to be reduced. The scientific issues are complex. For example, we certainly do not know nearly enough about either adult or embryonic stem cells to say which will be the best for therapies, and of course it is possible that both will turn out to be useful for different problems. Both also offer exciting new ways to explore human disease and the influence of genetics and environment without having to rely on human experimentation. But any committee looking into what is ethically acceptable has to be provided with a balanced view of what will be possible in the near future. There is no point in being too speculative, in part because it is also difficult to predict what will be ethically acceptable in the future. If cures come from the use of human embryonic stem cells, then I suspect that there will be widespread acceptance, as happened with heart transplants and with in vitro fertilisation, both of which were initially greeted with horror by many. It is impossible to have an informed debate without accurate and appropriate information, and there seems little point in having a debate that is not informed. Because of various sensitivities, it seemed to me even before the creation of the President's Council on Bioethics that the issues relating to embryo research had for far too long not been considered properly within the United States. 
The President's Council was therefore an opportunity to redress this situation. But from the evidence I fear it will not succeed. Moreover, it does the general public a disservice to pretend to have a serious committee exploring issues of bioethics when that committee fails to live up to the ideals of impartiality and rationality."} +{"text": "Public satisfaction with policy process influences the legitimacy and acceptance of policies, and conditions the future political process, especially when contending ethical value judgments are involved. On the other hand, public involvement is required if effective policy is to be developed and accepted.Using the data from a large-scale national opinion survey, this study evaluates public appraisal of past government efforts to legalize organ transplant from brain-dead bodies in Japan, and examines the public's intent to participate in future policy.A relatively large percentage of people became aware of the issue when government actions were initiated, and many increasingly formed their own opinions on the policy in question. However, a significant number (43.3%) remained unaware of any legislative efforts, and only 26.3% of those who were aware provided positive appraisals of the policymaking process. Furthermore, a majority of respondents (61.8%) indicated unwillingness to participate in future policy discussions of bioethical issues. Multivariate analysis revealed the following factors are associated with positive appraisals of policy development: greater age; earlier opinion formation; and familiarity with donor cards. Factors associated with likelihood of future participation in policy discussion include younger age, earlier attention to the issue, and knowledge of past government efforts. 
Those unwilling to participate cited as their reasons that experts are more knowledgeable and that the issues are too complex. Results of an opinion survey in Japan were presented, and a set of factors statistically associated with the responses was discussed. Further efforts to improve the policymaking process on bioethical issues are desirable. In Japan, it was not until 1997 that a law was finally enacted to legalize organ transplant from a brain-dead body. Since 1968, when the first heart transplantation from a person declared brain dead was performed, there have been long-standing struggles in Japan for and against this procedure. In addition to many non-governmental institutions and individuals, the Japanese government \u2013 both the legislature and administrative bodies \u2013 engaged in a variety of efforts towards this enactment. A number of factors have been suggested for the prolonged lack of policy in this area: deep public mistrust of the medical profession caused by the 1968 heart transplant; Japanese culture, which still holds a traditional view of death and the body; and the lack of the broad public consensus required as a precondition for a policy . In 1986, the Japan Medical Association formed the Bioethics Discussion Group, a study group of an interdisciplinary nature, and in 1988 issued its Final Report, which encouraged brain death legislation to facilitate organ transplantation. With the goal of shaping public opinion, the Japan Organ Transplantation Society sponsored a series of open symposia in 1989 [Period 2]. A group of politicians from the major party started investigating the current situation in other countries, considering possible legislation. Activities of patients' groups reportedly helped shift public attention away from the brain-dead potential donor to the seriously ill person who needs a transplant ,11. 
Finally, in early 1990, more than 20 years after the first transplant, the office of the Prime Minister established a special commission, the Provisional Commission for the Study of Brain Death and Organ Transplantation. To encourage public involvement, the Commission held a series of public hearings and town meetings, and issued newsletters. Its 1991 interim and 1992 final reports presented both a majority view and a minority view. The former stated that a social consensus on brain death had already been achieved, and the latter argued that such a consensus had not yet been achieved. Both groups approved organ transplant when the consent of the donor was definitely obtained. A number of scholars argued that it should be a personal decision whether or not one's death is to be determined by brain death criteria, making individual consent the basis of both brain death and organ donation . In 1999, two years after the law was passed, the first heart transplantation was successfully conducted, at Osaka University Hospital . As was indicated in Figure , our results also indicate that as the number of people approving organ transplants from brain-dead bodies increased, the number of opponents increased in parallel, although in smaller numbers. This increase in political awareness or in political knowledge has been suggested elsewhere . As in other countries, the public debates on brain death and organ transplant, both in the private sphere and in the public sphere, were new attempts at governance of socio-technical innovation in the field of biomedicine. If social mobilization is fueled by the inability of the institutional system to respond adequately to public concerns, the situation in Japan in the 1980s, when many individuals and institutions started to pay attention and get involved in the debate, might indicate insufficient mediation (by the government) among the actors for conflict resolution before and during that period. 
Generally in post-war Japan, a relatively small number of political and administrative elites have left the handling of many social conflicts to the workings of traditional social relations . In the early 1990s, when the Ad-hoc Council was set up, the Japanese government introduced a variety of measures to resolve social disputes by inviting the public into policy discussions. Our finding that many people recognized the issue and formed their opinions at times other than this period, however, suggests that these tactics were not very effective at raising public awareness. In European countries, it was reported, public involvement measures served well as focusing devices, which helped attract attention and facilitate discussion among the various publics . It is remarkable that more than 40% of respondents were unaware of any past government policy, despite longstanding struggles around the issue and much media coverage. Of those who were aware, about half had favorable opinions of government efforts, while a slightly larger percentage had negative views. The fact that only 30% of respondents reported satisfaction indicates that there is much room to improve public awareness, acceptance, and appraisal in the policy process on bioethical issues. Multiple regression analysis disclosed that age, period of opinion formation, and knowledge of donor cards are independent factors affecting public appraisal of past governmental efforts. Individuals tended to give higher appraisal points when they were older. Indeed, age seems to be a major factor in opinion formation on several policy issues . A national effort to incorporate ethical considerations into policy rests on an academic reservoir of technical experts, legal scholars and humanists, as well as on the public understanding of science and its social implications . 
According to Taylor and Fiske , people ... In relation to this point, it should be noted that some changes in the policymaking process were regarded as important. A majority of respondents suggested that better information disclosure and more respect for both patients' and experts' opinions are desirable in the policy process. It follows that more effective involvement of the public, especially stakeholders, in policymaking is warranted. Although an empirical assessment is not available, some procedures used in France and Germany might merit attention in modeling future policymaking for other countries. The National Consultative Bioethics Committee of France holds an annual public conference where, in addition to an activity report from the Committee, many ethical topics are discussed, with the participation of both experts and laypeople . In many countries, participation has gained momentum in a variety of policy domains . In our study, a majority of people (61.8%) responded that they would not participate in future policy discussions, while the rest (38.2%) responded that they would. The absence of association between government appraisal and participation intent indicates that the latter is determined by factors other than the former. Multivariate analysis showed that younger age, earlier period of first attention, earlier period of opinion formation, and more knowledge of past governmental efforts are positively associated with the intent to participate in future policy discussions. This indicates that the more attentive members of the public, namely those long-term observers with their own opinions and knowledge of current policy, have more interest and consequently a greater intention to participate in policy discussions. This finding is consistent with past studies indicating that participation is facilitated by policy knowledge and/or political sophistication . 
Reasons cited for being unwilling to participate in the process indicate that many feel unqualified or unknowledgeable, but not necessarily too busy or uninterested. Further analysis revealed that older people are more likely to cite \"Experts better\" and \"Ineffective\", and are less likely to choose \"Busy\"; that females are more likely to cite \"Difficult\"; and that people tend to cite \"Ineffective\" when they are more knowledgeable about past government efforts, while choosing \"Busy\" or \"Uninterested\" when they are less knowledgeable. These findings suggest that despite their latent interest in the issue, people are unwilling to commit themselves to policy discussions because of a perception of inadequacy stemming both from their lack of knowledge and from a sense of inefficacy, as judged from past experiences. The absence of an association between appraisal and participation suggests that people might be uncertain about their own competence and efficacy in policymaking. It can be inferred that people hope for a means of understanding the issue, so as to formulate their opinions for themselves. Political participation is facilitated by having a personal stake and perceived self-efficacy in policymaking. Conversely, it could be hampered by both indifference to the issue and a sense of powerlessness . In the context of Japan, it is important to remember that the public involvement measures used thus far were not very effective in raising public awareness and that people consider some changes desirable in the policymaking process. It is possible that more people could be inspired to participate by changing the design of public involvement measures, from a consultation type of involvement to a partnership model . More innovative methods of public participation show promise and deserve consideration in improving the policy process on medico-ethical issues and increasing public satisfaction with policy and politics. 
These include consensus conferences, citizens' juries, scenario workshops, and deliberative opinion polls, among others . As is always the case with mass opinion surveys, this study cannot escape the possible bias introduced by the low response rates of polls . Among many topics to be considered for future research is the function of (mass) media vis-\u00e0-vis public opinion formation. The media should be examined critically, as they influence experts, policymakers, and the public alike. Mass media have a dual function in these processes: as a conduit of debates and negotiations as well as a source of influence . Government decisions and their outcomes, namely the enactment and subsequent implementation of organ transplants, attracted public attention and helped formulate public opinion on the issue, more than did the processes leading to enactment. In the case of the concept of brain death and the legalization of organ transplant in Japan, many people were still unaware of past government efforts in policymaking, including the measures used for public involvement, despite longstanding social debates. Only a small percentage of the public indicated satisfaction with the process. However, those who were attentive to the issue and knowledgeable about the past policy process as well as the current policy tended to rate the policy process more positively. Although people do not always manifest their intent to participate in future policy discussions, they might maintain sufficient interest in biomedical issues and have a latent wish to get involved in the policy process. The author(s) declare that they have no competing interests. All the authors fully participated in the planning, designing, and carrying out of the surveys for this study. HS performed the statistical analysis and drafted the article. 
All authors read and approved the final manuscript. The pre-publication history for this paper can be accessed here:"} 
Before the modern discipline of bioethics evolved, ethics had been on the centre stage of medical practice for more than two millennia, since the time of Hippocrates. In 1803, Thomas Percival published his book on Medical Ethics, which became the template on which the code of ethics of the American Medical Association was based in 1847 . The scope of bioethics has continued to expand in response to changes in societal dynamics, medical technology and health care practices. Perhaps more because of its antecedents than for want of content, most of the discourse and writings in the early days of bioethics centred on issues of the patient-physician relationship, respect for persons, the best interest of the patient, and justice in health care delivery. Today, traditional bioethics discussions and literature are changing as new ethical concerns evolve around dilemmas posed by new technology on the subjects of end-of-life issues, organ donation, human reproduction and human genomics. Moreover, the bioethics agenda has expanded to include the subjects of resource allocation, organizational ethics and public health ethics, among others. Bioethics in its present form is rooted in and largely dominated by western culture. The tempo and content of bioethics discourse are largely influenced by the technological creations of the developed world. However, ethics is not exclusively the domain of the west. Core ethical values are essentially the same for all human communities, leaving aside each community's customs, culture and preferences . In the bioethics literature, bioethical discourse and arguments have been most prominent and intense concerning research involving human participants. One major achievement in this regard is the creation of an oversight body that sees to the proper design and conduct of research that conforms to generally acceptable and established ethical guidelines. 
In this body resides the duty of ensuring that research sponsors and investigators abide by established conventions for carrying out clinical research. It also performs the role of assuring the safety of research participants and ensuring that participants (and/or society) benefit from the outcome of research. The relevance of this body to modern-day health care and research is partly exemplified by the absence of any major scandals of the form and magnitude of those already recorded in history. The establishment of research ethics boards has not solved all the ethical problems of biomedical research, though. There is still a lot to do to re-structure, re-empower and re-position these boards to match the complexities of present-day technology-driven medical research and practice. Various bodies within and outside Africa have pioneered the movement towards ensuring that medical research in Africa conforms to international ethical guidelines. This is the aspect of bioethics that is most visible in Africa and has been anchored partly by the Pan African Bioethics Initiative (PABIN), a pan-African organization that was established in 2001 to foster the development of bioethics in Africa with a particular focus on research ethics (5). This idea has arisen in recognition of the genuine need to develop capacity for reviewing the ethics of research in Africa. It is also a condition required by external sponsors of collaborative research in Africa. Ethics workshops and conferences have been held in different parts of Africa, including Tanzania and Zambia . While the present efforts and achievements are commendable, much still needs to be done for the effects to filter through to the grassroots, which is the main arena of research activities and where the burdens of research are most felt. I say this for the following reasons. First, the present efforts are still limited in extent and effect. 
Hitherto, most of the conferences have taken place in two or three geographical zones of the continent and have been limited to a few days of activities. Of course, the interests and motives of the sponsoring agencies, together with the presence on the ground of those available to organize the conference locally, determine where and when a workshop is held, how long it lasts, and how many participants attend. The number of researchers from across Africa who can attend the conferences is thus restricted, as is the amount of knowledge that can be imbibed in those few days. The consequence is that the same few people attend the conferences most of the time. These attendees from a few centres might not be able to change unethical research practices in their countries. Attempts by these few to train their colleagues locally are often constrained by lack of funds. Second, the absence of national directories of research activities in most African countries causes the magnitude of biomedical and social sciences research in Africa to be underestimated. For example, in Nigeria, five categories of research and researchers are easily identifiable. Individual or institution-supported research is done by students, clinicians (including resident doctors) and other scientists. This category constitutes a significant proportion of research in tertiary academic and health institutions. Industry-sponsored research is undertaken by researchers for pharmaceutical companies to promote new or old drugs. Such research protocols may be indigenously developed or form part of multicentre trials. In most instances, these companies do not go through the institutions where the researchers are based, but deal directly with individual researchers, who may or may not subject the research protocol to an ethics board review. Collaborative research with colleagues from the developed countries is often externally funded. 
It includes hospital- and community-based trials and mostly involves experimenting with drugs or vaccines. Of particular ethical concern in collaborative research is the fact that external sponsors may differ in their motives for conducting research, and research benefits may have limited applicability to the country or local community. Third, a majority of Africa's research participants are highly vulnerable, given their low level of formal education and the political, social and economic milieu in which they live. The fourth reason is that Africa is a pluralistic society with diverse peoples and cultures. While general guidelines may apply in most cases, in some the peculiarities of each ethnic and cultural group will significantly affect what research is done and how it is done in those communities. Lastly, not every research centre has established an ethics review process. Where already established, most of the ethics review committees are grossly underfunded and unequipped for their duties. Clinical ethics is the branch of bioethics that addresses ethical conflicts arising in daily clinical practice in health care institutions, through the establishment of hospital ethics committees and ethics consultation services. Fletcher and Siegler define ethics consultation \"as a service provided by an individual consultant, team or committee to address the ethical issues involved in a specific clinical case. Its central purpose is to improve the process and outcomes of patients' care by helping to identify, analyze, and resolve ethical problems\". Do we need the services of hospital ethics committees or consultants in African hospitals or at the bedside? That may not be the point presently. However, clinical ethics is neither about committees and consultations, nor about technology and end-of-life issues alone. Common sense and intuition are insufficient to address all ethical issues that arise in patients' care. 
The well-intentioned clinical decisions and judgments of yesterday may turn out to be unsound in the searchlight of today's ethical scrutiny. Although core moral virtues have generally guided medical practice in Africa as elsewhere, there is an increasing need to apply both cognitive and behavioural ethical values to everyday decision making at the bedside, by physicians as well as other health-care professionals. It is not suggested here that next on the agenda of health care in Africa is to devote attention and resources to training bioethics consultants for the bedside. Most people in Africa still do not have access to qualified health personnel and reasonable health care. The point is that deliberate efforts should be made to train present and future health care providers to be aware of the core moral virtues required of them in their duties to patients, and to be sensitive to the ethical values of their patients, their families and society. In the developed world, education in ethics is no longer a \"hidden curriculum\". It is now time for Africa to join the rest of the world by introducing ethics education into the curricula of all medical schools where it is not presently taught. This is where the future of bioethics and of health care delivery and research in Africa lies. Apart from some countries in southern and eastern Africa and a handful of universities elsewhere on the continent, there is no formal ethics education in most of Africa's medical schools. Ethics education for medical students is a necessary commitment to accomplishing the all-round training of a doctor whose decisions are both technical and moral. Attention has hitherto focused almost wholly on the technical aspect of medical education, leaving the student to develop his or her moral attitudes passively through observation and intuition. 
Formal ethics teaching aims at equipping students with a common framework on which to reconcile patients' medical needs with their values, perceptions, situations and beliefs. Pertinent to any discussion about teaching bioethics in Africa is the shortage of trained bioethicists to fill the vacancies that would be created in academic institutions in many African countries. Apart from South Africa and a few others, most countries in Africa lack the requisite bioethics manpower that would be needed in the medical schools. Even in institutions where bioethics is already part of the medical curriculum, it is unlikely that there are enough bioethics teachers. It is in this regard that the efforts of international agencies that fund the training of developing-world bioethicists are noteworthy. Those Africans who have undergone bioethics training in the developed world and have become pioneers in their institutions have an awesome responsibility to establish credible training agendas for their countries. They are also well positioned to seek funding directly for such home-based programmes from international sponsors. At the beginning, such scholars may encounter some crisis of identity and acceptance within the established academic system. However, such difficulties would fizzle out as they persist in highlighting and proffering solutions to the myriad contemporary ethical problems within the system and the society. The initial difficulty of publishing their work in the established western bioethics literature can be overcome by patronizing local or regional journals that target the primary audience for which their work is meant, and open-access journals that reach far and wide. Moreover, most bioethics literature consists of commentaries, observations, personal opinions and philosophical reasoning. 
As African bioethicists embark on qualitative research to highlight ethical issues in Africa and to provide the quantitative data and information frequently lacking in most western bioethics journals, their access to western-dominated journals will be enhanced. In an editorial published in a recent issue of Bioethics, Chadwick and Schuklenk question the altruism behind training developing-world bioethicists in the west and warn against bioethics colonialism. They opine that graduates of such programmes are subjected to western ethical views and ideologies, and that the developing world is not funded to develop bioethics capacity based on its own thinking. In the light of the foregoing, it is imperative that an African bioethics evolve which takes cognizance of its unique needs and circumstances and which, though amenable to improvement as a result of continuing interactions with other cultures and values, is not overshadowed by those influences. The need for the individual clinician/researcher to be committed to upholding high ethical standards and principles that respect the social, cultural, economic, educational and religious values of the people cannot be over-emphasized. More efforts are required towards increasing continent-wide awareness of ethical issues in biomedical practice and research through ethics conferences, workshops, national bioethics conferences, the public media and Non-Governmental Organisations (NGOs). Countries where the bioethics presence is fairly strong should assist neighboring countries to establish a presence, especially in organizing ethics review committees at research centres and institutions. 
Within countries, the possibility of joint or regional ethics review committees should be explored. Continuing and expanded support from the international bioethics community is required now more than ever, to develop capacity for training academic faculty, clinicians, researchers, government health ministry officials, NGOs, and the media in bioethics. The initiatives of the National Institutes of Health of the United States in providing training grants for bioethics programmes within and outside Africa, and the support of other institutions and bodies such as the Wellcome Trust, the African Malaria Network Trust (AMANET) and the European Forum for Good Clinical Practice (EFGCP), among others, to the cause of bioethics in Africa are noteworthy. Their support for bioethics capacity-building programmes should not be limited to one or two sub-regions but should extend to the whole continent. Further sponsorship should be provided for academic institutions within Africa to establish more short- and long-term training programmes at sub-regional levels. More importantly, these institutions should support the movement towards formal incorporation of bioethics into the curricula of medical schools and other health training institutions in Africa. The present and future needs for this in Africa are most apparent now. According to an African adage, the best time to plant a tree is twenty years ago; the next best time is now. The author was a Fogarty Fellow (2003/2004) at the Joint Centre for Bioethics, University of Toronto, 88 College Street, Toronto, Ontario, Canada M5G 1L4. The pre-publication history for this paper can be accessed here:"} +{"text": "Africa in the twenty-first century is faced with a heavy burden of disease, combined with ill-equipped medical systems and underdeveloped technological capacity. A major challenge for the international community is to bring scientific and technological advances like genomics to bear on the health priorities of poorer countries. 
The New Partnership for Africa's Development has identified science and technology as a key platform for Africa's renewal. Recognizing the timeliness of this issue, the African Centre for Technology Studies and the University of Toronto Joint Centre for Bioethics co-organized a course on Genomics and Public Health Policy in Nairobi, Kenya, the first of a series of similar courses to take place in the developing world. This article presents the findings and recommendations that emerged from this process, recommendations which suggest that a regional approach to developing sound science and technology policies is the key to harnessing genome-related biotechnology to improve health and contribute to human development in Africa. The objectives of the course were to familiarize participants with the current status and implications of genomics for health in Africa; to provide frameworks for analyzing and debating the policy and ethical questions; and to begin developing a network across different sectors by sharing perspectives and building relationships. To achieve these goals the course brought together a diverse group of stakeholders from academic research centres, the media, and non-governmental, voluntary and legal organizations to stimulate multi-sectoral debate around issues of policy. Topics included scientific advances in genomics, innovation systems and business models, international regulatory frameworks, as well as ethical and legal issues. Seven main recommendations emerged: establish a network for sustained dialogue among participants; identify champions among politicians; use the New Partnership for Africa's Development (NEPAD) as an entry point onto the political agenda; commission an African capacity survey in genomics-related R&D to determine areas of strength; undertake a detailed study of R&D models with demonstrated success in the developing world, i.e. 
China, India, Cuba, Brazil; establish seven regional research centres of excellence; and create sustainable financing mechanisms. A concrete outcome of this intensive five-day course was the establishment of the African Genome Policy Forum, a multi-stakeholder forum to foster further discussion on policy. With African leaders engaged in the New Partnership for Africa's Development, science and technology is well poised to play a valuable role in Africa's renewal, by contributing to economic development and to improved health. Africa's first course on Genomics and Public Health Policy aspired to contribute to the effort to bring this issue to the forefront of the policy debate, focusing on genomics through the lens of public health. The process that led to this course has served as a model for three subsequent courses, and for the establishment of similar regional networks on genomics and policy, which could form the basis for inter-regional dialogue in the future. Inequities in global health continue to be among the major challenges facing the international community. Existing African initiatives include the Institute for Molecular and Cell Biology-Africa (IMCB-A), founded in 1999 to study the molecular mechanisms of tropical infections. A further example is the new Biosciences Facility for Eastern and Central Africa that was recently launched as part of a NEPAD initiative. The most efficient means of garnering political support is often to go directly to the politicians themselves \u2013 those who have been supportive of or outspoken on the issues in question \u2013 to put the subject before their colleagues. The course itself represented an important step in this direction, as it brought together a spectrum of stakeholders, including academics, civil society, and government officials. 
The course, and the subsequently established network, therefore furnished an opportunity for direct communication and dialogue among individuals with a shared vision, including policymakers in a position to \"champion\" the issues and proposals that emerged from the course to their colleagues and others. NEPAD offers a possible forum to bring the subject of genomics-related biotechnology onto the political agenda, and provides a means of informing African leaders about genomics and its relevance to improving health and development in Africa. In particular, the AGPF recommends the establishment by NEPAD of an 'African Genomics Committee', which would provide a plan for utilizing genomics and other new technologies to enhance health in Africa, advocate for increased investment in S&T, target other relevant stakeholders in individual countries, educate policy makers about the need for a strong R&D base established through partnerships across Africa, and organize steering committees to identify gaps and implement strategies for improvement. Participants agreed on the need to consider emerging technologies like genomics in light of Africa's specific health challenges, and consequently on the importance of prioritizing these and identifying strategic entry points. Infectious diseases, genetic and other non-communicable disorders, sanitation, nutrition, environmental pollution and loss of biodiversity were all proposed as areas requiring concerted attention, with a special emphasis on the potential for using genomics-related biotechnology to target the three biggest killers in Africa: malaria, HIV/AIDS and tuberculosis. There are already well-known African-led initiatives to apply scientific innovation to combat important health concerns, such as the Multilateral Initiative on Malaria, and the African Malaria Vaccine Testing Network (AMVTN). 
It will be important to build on existing success stories, and to identify gaps in terms of priority health areas receiving inadequate attention. This will help to focus efforts and to channel limited resources, both financial and human, more efficiently. A regional approach, which has since been adopted by NEPAD, was proposed as a promising mechanism for harnessing existing competence to address local needs. The proposed capacity survey would identify strategic areas of strength, such as existing centres of excellence, potential areas of improvement, and health priorities receiving inadequate attention. It would also serve to identify local and national innovators, and to inform the structuring of the Regional Centres of Excellence described below. For several years, genomics has been linked with a number of high-profile, intensely controversial issues like human cloning and genetically modified organisms. While emerging technologies like genomics raise a number of important ethical and social issues that deserve careful consideration, a nuanced approach is required. Public engagement was seen to form part of a long-term strategy for capacity building and for raising the overall profile of science and technology in Africa. The discussions reflected a conception of capacity strengthening as intimately linked with quality education \u2013 at all levels, and across disciplines. Core to this debate among course participants was the belief that endogenous capacity must be developed so that Africa can begin to be self-sufficient, and itself become an innovator. Participants identified the following categories as needing attention. There is a need to introduce innovative techniques for teaching science and technology in the classroom, in order to generate interest and aptitude in the subject matter from an early stage in the educational process. 
Besides contemporary scientific approaches, indigenous knowledge and its applications to health could also be a relevant component to include in the curriculum. Those in a position to shape policy should be familiarized with codes of ethics pertaining to their field; moreover, they should be educated about how best to capitalize on international frameworks in order to ensure that their countries benefit from such arrangements, and are not exploited. Policy makers should develop strategies for negotiating their interests collectively in international forums, when appropriate, given shared needs and values. There is a general need to strengthen capacity in the area of communication, in particular to increase the level of science literacy among the media. This might include integrating journalism and science programs at the college and university levels. There is a corresponding need to improve the ability of scientists to communicate the relevance of their work to the public, and to policy-makers. There is a great need to build capacity in Africa with regard to the ethical, legal and social issues (ELSI) which inevitably accompany the emergence of new technologies. Strategies would in many cases involve sensitizing the public to issues of relevance, such as their rights as patients and participants in research; encouraging dialogue about the social consequences of introducing new technologies into traditional settings; and putting frameworks in place (e.g. ethics review boards) to ensure that ethical, quality and safety standards are met before research is undertaken. Networks provide a means of generating new ideas, pooling the creative energies of individuals, and exchanging advice and expertise around a particular area of focus, in this case genomics and health policy. Such networks could play an advocacy role, combining the voices and the influence of key players from diverse disciplines and sectors, to advance a common aim. 
Collaborations at the level of institutions \u2013 both within and between countries and regions \u2013 would facilitate the transfer of both knowledge and technology. During the course, it was pointed out that there is a particular need to encourage linkages between universities and industry to, among other things, facilitate the move from research and development to product generation and commercialization. This could include mechanisms to facilitate relationships between universities undertaking research in biotechnology and local industries. Institutional partnerships and collaborations at all levels, including internationally, can mean the channelling of resources to common areas of focus, and the pooling of the relative strengths and resources of partner institutions. Along with the need to strengthen the R&D base in science and technology, participants of the course identified a related need to increase the emphasis on commercialization \u2013 not only as a tool for sparking innovation but also to permit the generation of the capital necessary to sustain the industry. An important step in the process of moving toward commercialization is the forming of alliances within countries, between universities and industry, sometimes known as \"cross-linking\". The fruitfulness of the Africa course, where people from across sectors and sub-regions came together with a common mission, reinforced the value and the importance of establishing cross-sectoral networks and collaborations. 
Ensuring that Africa reaps the benefits of science and technology, including emerging fields like genomics, requires a long-term strategy for sustained investment. Three models were suggested. First, the establishment of an African Science and Technology Fund, dedicated to supporting research and development in the area of health-related biotechnology, would rely upon the contribution of African governments. Second, the establishment of an Investment Fund for genome-related biotechnologies for improving health would represent an innovative approach to obtaining capital, providing a further incentive for investors to put money into development by creating a fund that provides a return on investment as well as furnishing funds for advancement. Such a fund might be dedicated to providing capital for the development of mature, or future, health-related technologies. Third, existing funds allocated for research related to diseases afflicting Africa, such as the Global Fund to Fight AIDS, Tuberculosis and Malaria, could be capitalized upon. Genomics and biotechnology represent a powerful set of tools for health improvement, and the World Health Organization, through its Genomics and World Health (2002) report, has raised them as an important issue deserving international attention. It is important to use this positive emphasis to give weight to the case for the relevance of biotechnology to health in developing countries, particularly for policy makers. With respect to R&D, there are already areas of strength on the continent; it is crucial to identify localized expertise, and to establish linkages with centres elsewhere in the region, as well as abroad, to ensure the transfer of knowledge and of technology, and to facilitate human resource development. 
Infrastructure must be developed to attract qualified African researchers to remain in or to return to Africa \u2013 both to support them technically, intellectually and socially, and to provide them with similar opportunities for creativity and growth as may be found in other locales. The Biosciences Facility, established in 2003 by NEPAD, takes up this challenge, promoting \"scientific excellence by bringing together a critical mass of scientists drawn from national, regional and international institutions in state-of-the-art facilities where they can undertake cutting-edge research to help solve the most important development constraints faced by the poor in Africa\". Developing countries in various parts of the world have proven that they too can have strong technology sectors, and can make important contributions in terms of science and innovation. Their successes represent an opportunity to bring to the attention of politicians that there are countries succeeding in genomics. A detailed study of these models can provide important insights into how Africa can capitalize on the promise of genomics and biotechnology, particularly as it relates to health. In 2003, the Joint Centre for Bioethics completed a qualitative study of R&D in biotechnology in South Africa; similar studies are underway in Cuba, Egypt and China. Research of this kind could feed into more systematic efforts in the region to better understand how some developing countries, including those in Africa, have managed to develop S&T research and manufacturing capacity in the health sector. The proposed centres of excellence would be distributed across Northern, Southern, Eastern, Western and Central African sub-regions. Each centre would have its own area of focus, in terms of targeted health problems, depending on regional expertise. 
The Centres would not be the sole preserve of each region, but would in fact use the strengths and specializations of each region to achieve the goal of harnessing genomics to improve health in Africa. These regional centres of excellence need not preclude the existence of national centres of excellence. The Biosciences Facility is modelled on such an approach. The course on Genomics and Public Health Policy in Africa was carefully designed, with inputs from both its Canadian and African co-organizers, to have a programme and participant profile reflecting the inter-disciplinarity of the issue being considered. Genomics cuts across S&T, environmental, development, industrial, education and health policy, and generates important ethical, legal and social issues. It therefore requires a genuinely participatory and multi-stakeholder approach, as well as frank discussions about both the potential promise and perils of a relatively new science. The strength of the course, as reflected in the evaluations submitted by participants, was the rare opportunity for discussion and networking among opinion leaders from different sectors. Both during and between sessions, participants exchanged perspectives and experiences with others from different regions of the continent, and from different disciplines. Senior political officials, journalists, academics, and civil society representatives worked together in Study Teams to create proposals. Discussions were lively and open, with broad participation from those in attendance. However, a weakness of the course was the absence of industry representatives, who would certainly have contributed an important and valuable point of view. The small number of women participants was also a notable disadvantage. Later courses modelled on the Nairobi offering had greater success in drawing participants from industry and obtaining a better gender balance. 
Notably, however, the recommendations that emerged from these courses, while reflecting differences due to regional priorities and context, did not vary considerably despite the broader contribution, particularly from the private sector. A major outcome of the Nairobi course, and one which had strong support from participants, was the creation of a virtual network to facilitate ongoing interaction and discussion. Within two weeks of its completion, a website was created for the course, as well as a web-based discussion board. While there was some initial activity on the discussion board, this eventually subsided, and it was soon evident that this approach had failed. In an effort to revive the momentum and to solicit ideas from AGPF members about how best to move forward with the network, a short survey was sent to members asking what their needs were, both in terms of the network and in terms of the technical facilities at their disposal. The response rate was extremely low; however, those who provided feedback confirmed what the participation level suggested: namely, that information technology facilities in Africa are such that very few individuals, outside of some well-equipped academic or private institutions, have regular access to the internet. The web-based discussion board was, therefore, in practice a highly unsustainable option for the majority of participants. The point was also raised that it was not enough to be connected electronically; there was also a need to share a more tangible goal or project, and to have a more visible leader from within the group, to galvanize efforts and motivate continued interaction. One respondent explained that finding the time to contribute to such networks is extraordinarily difficult for many Africans, who often \"wear many hats\". As a result, a general interest was insufficient to justify diverting time from other tasks; a concrete, realizable goal was essential for engaging individuals who already feel over-stretched. As a consequence of these inputs, an email-based forum was established, since most AGPF members have better access to email than to the internet, and a moderator was temporarily appointed over the group. 
Activity on the forum improved and continues today, more than two years later, though interventions are irregular and generally extend to the sharing of information or material of interest, rather than discussions about issues. The India course on Genomics and Public Health Policy was held in January 2003, less than one year after the inaugural Nairobi effort. Based on feedback from the previous course, the questionnaire requesting feedback about participants' technical and substantive needs in relation to the creation of a network was distributed during the course, to permit the creation of a network that was much more responsive to the needs of the participants. Moderators from among the participants were nominated before the course's end and their roles clarified, to facilitate the sustainability and autonomy of the network. Later in 2003, two further courses were held in Oman and in Venezuela, both of which added a further element demonstrating the learning from the first two courses. On both occasions, the Joint Centre for Bioethics collaborated with the Regional Offices of the World Health Organization; in the first instance, with the Eastern Mediterranean office (EMRO) and in the second, with the Pan-American Health Organization (PAHO). This collaboration ensured that the recommendations of each course had an institutional structure through which they could be channelled, to reach the ear of decision-makers. EMRO and PAHO have extensive links with ministries of health within their regions, as well as with representatives from civil society and industry. This provided an opportunity for the results of the course to have a much wider impact. 
By contrast, the impact of the Nairobi course is very much linked to the efforts of individual participants to engage with their constituencies and with the NEPAD initiative, in which one of their members is now a senior actor. The Forum developed following the Nairobi course has not provided a framework to drive action the way it was initially intended; however, it continues to provide a portal for information-sharing and dialogue. The executive course on Genomics and Public Health Policy in Africa was the first of its kind to be held on the continent. The response of participants indicated a tremendous enthusiasm for and interest in discussing the emerging technology of genomics and its applications for addressing the health woes of Africans. The sessions covered a spectrum of topics, from basic science to ethics, business models and international frameworks \u2013 exemplifying the range of intersecting issues relevant to informed discussions about genomics and related policy. The course also was a demonstration of the fruitfulness of a multi-stakeholder approach. An important aim of the course was to encourage network-building and the development of meaningful interactions, as a foundation for sustained dialogue among opinion leaders. Participants were encouraged to develop independent proposals in a collaborative environment, rather than to be passive recipients of \"expertise\" from the session leaders. The result was a series of concrete proposals for action, and the establishment of an e-network to provide a forum for ongoing communication, discussion and elaboration of the issues and proposals raised during the course. Several participants agreed to raise the proposals and themes articulated with their colleagues; the course also generated some publicity, as journalists invited to attend and to participate actively in the meeting reported on the key issues in various media. Similar courses have since been held across regions in the developing world. 
Each of the three executive courses held to date has addressed similar themes in relation to genomics and health; but each has also been adapted to the particular context and interests of the host country or region. This has partly been achieved through active collaboration between the Joint Centre for Bioethics and the host institutions. The electronic networks provide a means of generating a long-term impact, driven by participants who are empowered, in their particular capacities, to take forward the ideas shared and the proposals developed through their interaction. The Nairobi course also highlighted the importance of being proactive in soliciting suggestions from participants about creative means of virtual networking that realistically address the poor information technology infrastructure in most parts of Africa. It also was instructive in demonstrating that a network is not itself self-sustaining; it must be driven by a clear, shared vision among participants, and possibly even a concrete and realizable project. Moreover, ideally a moderator from within the group should take leadership in feeding the forum and motivating ongoing participation. Since the completion of this course, three more offerings have taken place: one in India, in collaboration with the Indian Council of Medical Research (ICMR), in January 2003; another in Oman in August 2003; and a third in Venezuela in 2004. A fourth course is being planned for a venue in South-east Asia. The Nairobi offering demonstrated clearly the receptiveness of African researchers and policy makers to such an initiative, and captured the vision of a cross-section of stakeholders around how to ensure that the new wave of scientific promise does not pass them by, or crush them in its wake, but instead is harnessed for better health and to further economic development in their region. 
The course generated concrete proposals to inform policymaking. The New Partnership for Africa's Development (NEPAD) has made science and technology (including genomics and biotechnology) a key platform in its plan for economic renewal [9]. The author(s) declare that they have no competing interests. All authors participated in and contributed to the course. ACS drafted the manuscript. PAS and ASD conceived of the course, refined the manuscript for critical content and approved the final version; and, with JM, participated in the course design and its coordination. AGFP members provided intellectual input through their lively discussions and proposals during the Course on Genomics & Public Health Policy in Africa, held 4\u20138 March 2002. This course was funded primarily by Genome Canada and the International Development Research Centre (Canada). PAS holds a Distinguished Investigator Award from the Canadian Institutes for Health Research. ASD is supported by the McLaughlin Centre for Molecular Medicine. The African Centre for Technology Studies, which hosted the course, was supported by the Norwegian Agency for Development Co-operation. The Canadian Program on Genomics and Global Health is funded by several sources listed at"} +{"text": "The authors describe the Ptolemy Project, a recently developed model of electronic access to medical literature for surgeons in developing countries. The program provides for East African surgeons to become research affiliates of the University of Toronto and have access to the full-text resources of the university library, via a secure system that monitors and evaluates their usage (the Ptolemy Project). Health has improved in developing countries more rapidly over the last half century than it did in Western countries from the 17th century onwards.
Making any serious improvement in mortality, morbidity, and disability among the global poor will require more locally driven collaborative research and wider usage of the scientific literature. Research capacity is lacking in the developing world, particularly in East Africa, making it vital that up-to-date research information is available to practicing physicians as a means to stimulate locally based and collaborative research. The need for the application of research information and the stimulation of research programs in East Africa is exemplified by the fact that a total of 400 surgeons are responsible for providing care to more than 200 million people. Isolation, burden of practice, and lack of research training and funding are the most common reasons for the dearth of research, and access by surgeons to contemporary scientific literature can help. All of these problems bring dissatisfaction to doctors, and also to patients, who often travel days to reach medical care facilities only to be placed on a wait list. For example, in Ethiopia the waiting list for elective paediatric surgery/neurosurgery is as long as 8\u201310 months, and for other elective general surgery disciplines the waiting list is 6\u20138 months, irrespective of the disease pathology. The situation is similar in the other East African countries. Lack of resources makes surgical practice, surgical education, and research difficult in Africa. Electronic media were introduced to East African surgeons in 2001. The Office of International Surgery at the University of Toronto has provided hands-on training on using these media to East African surgeons, as well as to the current trainees studying to take exams to become fellows or members of COSECSA.
COSECSA candidates and surgical trainees are a prime target audience for Ptolemy, we believe, because those who learn to read the literature at an early stage in their careers are more likely to play leading roles in promoting education, research, and training in their regions. Surgeons in East Africa who want to sign up for Ptolemy download the registration and consent forms from the Ptolemy Web site and submit these to the Ptolemy coordinator in Tanzania or the Office of International Surgery in Toronto. The criteria to become a Ptolemy participant are shown below. The Ptolemy Project also offers a reading course called \u201cSurgery in Africa\u201d, which is designed as a pilot project to train leaders in surgical education from Africa. \u201cSurgery in Africa\u201d is a self-directed, online, journal-based course primarily directed at surgical trainees who are undertaking the COSECSA Fellowship. The course is also available to all surgeons in the East African region, and internationally, who are interested in international surgery. The course started with extensive bibliographies on a selection of controversial topics relevant to practice in Africa, as well as a discussion forum for the participants. With the \u201cSurgery in Africa\u201d reading course, we hope to place online medical information at the disposal of African surgeons. Details are available at http://www.utoronto.ca/ois/SIA.htm, which also has instructions on how to sign up for the full course materials.
The course reading materials are available online on the Office of International Surgery Web site.

Participants must:

- Consent to electronic monitoring of their library usage.
- Be prepared to enter into a research affiliation with the University of Toronto Office of International Surgery.
- Have regular access to the Internet.
- Agree to participate in surveys to assess their use of the service supplied.
- Agree not to sell the information they obtain, redistribute it for financial gain, or allow others to use the service provided for financial gain.
- Acknowledge that the University of Toronto Office of International Surgery retains the right to discontinue their access at any time without any form of compensation.

The most frequently accessed journals were:

- American Journal of Surgery (401 papers accessed)
- Current Orthopedics (359 papers)
- The British Journal of Surgery (276 papers)
- The Journal of the American Academy of Pediatrics (265 papers)
- Burns (257 papers)
- Surgical Endoscopy (225 papers)"} +{"text": "This commentary provides an overview and selected highlights from the scientific program of the 5th Annual Meeting of the International Society for Behavioral Nutrition and Physical Activity. With six world-renowned keynote speakers; a spirited debate; 16 symposium sessions; more than 150 peer-reviewed oral papers and poster presentations; 3 cutting-edge practical workshops; and networking opportunities with over 250 delegates from 27 countries around the world on the menu, the meeting certainly lived up to the highest of expectations. Among the conference highlights was the focus on research themes of increasing importance internationally. One such highlight was the session Health at every size: a new weight paradigm for obesity and weight-related issues, chaired by Dr Marie-Claude Paquette, which suggested that an approach emphasizing health, acceptance of body size, physical activity and normalized eating, rather than 'weight loss', may be the antidote to the obesity epidemic.
The International Society for Behavioral Nutrition and Physical Activity was formed in recognition of the significant impact these two key behaviors have on health. While sedentary lifestyles and poor diet pose a range of health risks, currently there is a global focus particularly on those risks posed by an 'obesity pandemic', to which physical inactivity and certain dietary behaviors are arguably key contributors. This theme was reflected in two keynote sessions, from Professors Jim Hill and Steve Gortmaker, and a number of symposium and free paper sessions, discussing issues such as the behavioral causes of obesity; the potential contributing roles of individuals, parents, schools, regulators, and the broader environment; and opportunities for obesity prevention and intervention with children, adolescents and adults. A particular highlight was the inspiring session on obesity in Latin America, in which Juliana Kain, Juan Rivera and Kim Gans identified the very concerning rapid increase in obesity rates in countries such as Chile and Mexico, as well as among Hispanic adults living in the USA. The very promising results of early intervention programs to address this problem amongst school-children in Latin America at least provide some hope for the future, and the successes and challenges facing public health researchers working in these regions were inspiring to hear. The Society also prides itself on encouraging new paradigms by which to consider issues related to behavioral nutrition and physical activity, and this was evident, for example, in the session on 'Health at every size'. The ISBNPA, however, is certainly not simply 'another obesity society', but draws together international expertise in investigating and intervening with nutrition and physical activity behaviors. A second key research theme illustrated in many outstanding presentations was the significant advances achieved in understanding the determinants of these behaviors.
Environmental determinants, including built, natural, socio-cultural and policy factors, were a particular focus, and delegates heard from some of the world's foremost experts in the advanced approaches and methods used to investigate such determinants. These included the thoughtful keynote session given by Dr S.V. Subramanian, one of the world's leading experts in multilevel statistical methods, on the importance of social and neighborhood context and health, and the value of new methodologies for enhancing our understanding of these contextual determinants. Exciting methodological advances in measuring behavioral determinants were also provided in the challenging and thought-provoking symposium on Item-response theory, chaired by Dr Louise Masse.For many of us working in the fields of behavioral nutrition and physical activity, our ultimate professional aims include the development of sufficient knowledge and understanding to enable us to modify health-compromising behaviors and promote nutrition and physical activity to optimum levels. Evidence of substantial gains in these pursuits was abundant at the meeting. For instance, the impressive Pro-Children study has achieved numerous successes in promoting fruit and vegetable consumption in children across Europe, as outlined in the symposium chaired by Professor Hans Brug. Important achievements in behavior change interventions in worksites (symposium chaired by Professor Simone French), general practice (chaired by Dr Torben Jorgensen) and using computer tailoring (chaired by Willemieke Kroeze) were also showcased. Nonetheless, it is clear that we still have some way to go in our efforts to effect health-promoting behavior change. 
For instance, truly theoretically driven behavior change interventions, in which clearly defined behavior change techniques are systematically tested, are needed in order to shift our intervention efforts from 'inventive art' to 'experimental science', as argued eloquently in the keynote session given by Professor Charles Abraham. For many delegates, a key highlight of the meeting was the riveting keynote debate in which the value of conventional versus more novel theoretical models was politely but assertively posed and challenged by Professors Hans Brug and Ken Resnicow. A little controversy, the questioning of conventional thinking, and the stirring up and reviewing of established ideas are surely signs of a successful conference, and the keynote debate certainly hit the mark in this respect. Professor Resnicow's introduction to 'Chaos Theory', in which juggling Velcro balls led to an 'epiphany' that provided a useful analogy for the random, chaotic, and non-linear process by which many individuals arrive at the 'tipping point' for behavior change, was particularly novel and well-received. A paper based on Resnicow's presentation has now been published in the International Journal of Behavioral Nutrition and Physical Activity. The conceptual innovations evident in the themes described above were complemented in the program with discussions and examples of cutting-edge methodological advances. Three pre-conference workshops provided ideal opportunities to facilitate discussion and dissemination of practical applications of these methods. For example, the keynote on social context and health highlighted methodological advances in analytical techniques valuable for the consideration of environmental and contextual influences on health behavior, complemented by the inclusion in the program of the world-renowned workshop on multi-level statistical modelling, presented by Drs Frank van Lenthe and Jos Twisk.
Similarly, methodological themes described in the symposium on Applications of Item Response Theory were explored in detail in the practical workshop on Measurement Concepts and Methods arising from IRT, facilitated by Drs Diane Allen and Louise Masse. Finally, the theme of childhood obesity and opportunities for parental coaching in its management was examined further in the workshop offered by Dr Moria Golan. Certainly among the many enjoyable aspects of the ISBNPA program was the announcement of the awards to encourage student and early career members of the Society. Congratulations are due to Ellen Haug, University of Bergen, Norway, and Dr Eric Hodges, Baylor College of Medicine, USA, for best oral presentations; and to Anne-Marie Meyer, University of North Carolina at Chapel Hill, and Dr Charlotta Pisinger, Research Centre for Prevention and Health, Denmark, for best poster presentations from student and early career researchers. Students and early career researchers were also treated to a 'Meet the mentors' event, in which they had the opportunity to participate in small-group discussions on research career-related issues with five seasoned researchers in the field from around the world. In summary, the 5th ISBNPA meeting was a resounding success from which delegates walked away inspired, challenged, informed, and hopefully with many new ideas, new colleagues and new friends. Thanks are due to the hard-working local organizers; the ISBNPA Program and Executive Committees; the many generous meeting sponsors; and most of all to the conference delegates.
The success of any scientific meeting rests heavily on the efforts of attending delegates, and I heartily thank all delegates for such valued contributions to the program and to the International Society of Behavioral Nutrition and Physical Activity."} +{"text": "Considering the continent's sizeable carbon stocks, their seemingly high vulnerability to anticipated climate and land use change, as well as growing populations and industrialization, Africa's carbon emissions and their interannual variability are likely to undergo substantial increases through the 21st century. The African continent has a large and growing role in the global carbon cycle, with potentially important climate change implications. However, the sparse observation network in and around the African continent means that Africa is one of the weakest links in our understanding of the global carbon cycle. Here, we combine data from regional and global inventories as well as forward and inverse model analyses to appraise what is known about Africa's continental-scale carbon dynamics. With low fossil emissions and productivity that largely compensates respiration, land conversion is Africa's primary net carbon release, much of it through burning of forests. Savanna fire emissions, though large, represent a short-term source that is offset by ensuing regrowth. While current data suggest a near zero decadal-scale carbon balance, interannual climate fluctuations induce sizeable variability in net ecosystem productivity and savanna fire emissions such that Africa is a major source of interannual variability in global atmospheric CO2. Africa stands out among continents for widespread and deeply entrenched poverty, slow economic development, and agricultural systems prone to failure during frequent and persistent droughts. The diverse elements of the global carbon cycle have been the focus of much recent research [1-5]; one 'top-down' approach estimates net fluxes from atmospheric CO2 concentration measurements and atmospheric transport models.
However, such 'top-down' solutions have large uncertainties, particularly for Africa and other tropical regions, due to the paucity of appropriately located CO2 concentration measurements. Fires also release carbon in forms other than CO2, at around 0.4 Pg C y-1 for Africa, of similar magnitude to estimates of total land use-related C emissions from the region; these species are either radiatively active themselves or are precursors to radiatively active gases (e.g. ozone precursors). Methane and other hydrocarbons, carbon monoxide, and black carbon releases in Africa are almost entirely of pyrogenic origin, and are thus included in the biomass burning term. The export of dissolved organic and inorganic carbon (DOC and DIC) in river water discharged to oceans is, by and large, offset by DOC and DIC delivered in precipitation. There are fewer CO2 flux estimates for Africa than for global or tropical land areas in general. Taken together, inversion results demonstrate that Africa's net role in global carbon cycling is highly uncertain. Furthermore, lack of data causes the inverse solution for southern Africa to trade off with solutions for South America and the southern oceans, such that results can vary widely between regions with no net change in overall source/sink strength. With as much as 40% of the world's fire emissions, about 20% of global net primary production and heterotrophic respiration, at least 20% of global land use emissions, and a major source of interannual variability in global net carbon exchange, African carbon dynamics are of global significance. The continent's vast carbon stocks seem to be highly vulnerable to climate change, evidenced by strong sensitivity of net ecosystem productivity and fire emissions to climate fluctuations.
Because Africa's carbon fluxes are highly variable and insufficiently studied, continued and enhanced observations of the continent's carbon stocks, fluxes, and atmospheric concentrations are needed to enable more precise assessments of Africa's carbon cycle, and of its sensitivity to natural and anthropogenic pressures and future climate. In years ahead, Africa's land use pressures will undoubtedly increase, and climate changes are anticipated to intensify drought cycles and make much of Africa warmer and drier. Together, these pressures may release more CO2 to the atmosphere as well as increase the magnitude of interannual variation in Africa's C fluxes by increasing Africa's biomass burning emissions and reducing the continent's net ecosystem productivity. If realized, these trends would have enormously important implications for global carbon dynamics and biospheric feedbacks to the climate system. The author(s) declare that they have no competing interests. All authors participated in detailed discussions that led to this review paper. CAW compiled and analyzed the data and drafted the manuscript. NPH originally conceived the paper and contributed to data analyses, interpretation, drafting and editing the manuscript. JCN, RJS, JAB and ASD provided intellectual input on available data and previous analyses, and on the synthesis, presentation and interpretation needed for this review. DFB made data available from a global time-dependent inverse analysis of CO2 concentrations contributing to the figures. The CO2 content of precipitation is as reported in Miotke [76]."} +{"text": "The design of clinical research deserves special caution so as to safeguard the rights of participating individuals. While the international community has agreed on ethical standards for the design of research, these frameworks still remain open to interpretation, revision and debate.
Recently, a breach in the consensus of how to apply these ethical standards to research in developing countries has occurred, notably beginning with the 1994 placebo-controlled trials to reduce maternal to child transmission of HIV-1 in Africa, Asia and the Caribbean. The design of these trials sparked intense debate with the inclusion of a placebo-control group despite the existence of a 'gold standard', and trial supporters grounded their justifications of the trial design on the context of scarcity in resource-poor settings. These 'contextual' apologetics are arguably an ethical loophole inherent in current bioethical methodology. However, this convenient appropriation of 'contextual' analysis simply fails to acknowledge the underpinnings of feminist ethical analysis upon which it must stand. A more rigorous analysis of the political, social, and economic structures pertaining to the global context of developing countries reveals that the bioethical principles of beneficence and justice fail to be met in this trial design. Within this broader, and theoretically necessary, understanding of context, it becomes impossible to justify an ethical double standard for research in developing countries. The design of clinical research trials deserves special caution, for such research is always at risk of crossing the fine line between regard for individual rights and potential exploitation of research subjects. Infamous experiments like the Tuskegee Syphilis Study have rendered evident the dangers for individuals when we cross that line. To safeguard human subjects, the international community has agreed on standard ethical principles, particularly the World Medical Association's Declaration of Helsinki; while it is encouraging that these frameworks exist, they remain open to interpretation, revision, and debate.
With the 1994 placebo-controlled trials to reduce maternal to child transmission (MTCT) of HIV-1 initiated in Africa, Asia, and the Caribbean, we saw a breach in our consensus concerning the application of these principles, namely how to apply ethical standards to research conducted in the context of resource-poor settings. In fact, the 'contextual' apologetics for this breach are inherent, I will argue, in current bioethical methodology. As an application of ethical theory, bioethics pays particular attention to context by acknowledging the unique influence of relationships and the immediate environment on an individual's experience. In terms of developing countries, bioethics has grounded its contextual analysis on the discourse of scarcity and sacrifice. The debate over the application of research ethics in developing countries surfaced with the early prevention of MTCT of HIV trials. While in 1994 there was an existing protocol from the AIDS Clinical Trial Group 076 (ACTG 076) for preventing MTCT, high antiretroviral (ARV) costs and insufficient infrastructure placed the regimen out of reach for the majority of the HIV-infected population in the developing world. To find a more cost-effective and applicable treatment for resource-poor settings, randomized placebo-controlled trials were initiated to investigate a short-course ARV regimen. However, these studies sparked intense debate as they clearly violated the condition of equipoise: placebo groups are only deemed ethical if there exists sufficient uncertainty regarding the merit of the intervention. In other words, if there exists no gold standard of care, then placebos can be justified. The arguments for not providing the 'gold standard' available in developed countries were founded on the existing low 'standard of care' in the context of developing countries.
The NIH and CDC, both principal funding organizations of the studies, defended the studies' design: \"it is an unfortunate fact that the current standard of perinatal care for the HIV-infected pregnant women in the sites of the studies does not include any HIV prophylactic intervention at all\" and that placebo controls \"will be the most reliable answer to the question of the value of the study compared to the local standard of care.\" While traditional ethical theory seeks fundamental principles to guide our actions, much of the current bioethical literature rejects claims to the effect that morality can be reduced to a set of universal principles. Since the Belmont Report, the four principles of bioethics, namely the principles of non-maleficence, beneficence, justice and autonomy, have been used to ensure that ethical standards are applied to research. To begin, the principle of non-maleficence states that research must cause no harm to subjects, and the principle of beneficence states that, due to their participation in research, all possible benefits to subjects should be maximized. This immediately raises some questions for the case at hand. While it can be argued that no outright harm was afforded to the mother-infant pairs in the placebo group, since their access to ARVs was no different than it would have been within their country context, it raises the question: to which standard of care was the trial responsible? Moreover, can we further justify using this low standard of care within a resource-rich, developed-world-led research trial, thus violating the principle of beneficence? What is missing is an acknowledgement of the interlocking political, social, and economic contextual factors of these trials and an examination of what 'standard of care' ought to mean.
Ultimately, the question becomes whether the contextual argument is enough to justify violating the principle of beneficence. To answer this question, we must first make a distinction between the accessibility of AZT within the developing country as opposed to accessibility in a clinical trial. Supporters of the placebo-control design did not argue that it was financially or logistically unfeasible to provide the gold standard ACTG 076 regimen for the control group, but rather that it was unnecessary because of the low standard of care which existed in the developing countries. Lurie and Wolfe challenge the argument that this contextualized 'standard of care' justifies withholding a readily available treatment. The tenuous nature of this contextualized justification becomes clear when we take a broader view of the factors determining an individual's access to health care. First, we must acknowledge that a person's scope of choice is often determined by forces beyond her own control. As Amartya Sen describes, not only do those in developing countries face economic deprivations, they are in turn subject to substantial 'unfreedoms'. These 'unfreedoms' include lack of employment (or freedom to participate in the market) and lack of access to health care. Furthermore, we can expand the scope of context to include both an evaluation of a nation's internal health care priorities and overriding global economic inequities. Recognizing the role of developing nations in the global economy, we see that it is not entirely by choice that developing countries provide a standard level of care that does not include adequate MTCT prophylaxis. Developing countries are in fact given very little option under the continuing reverberations of the 1980s debt crisis.
After heavily borrowing from the International Monetary Fund and the World Bank, countries have faced 'forced' economic reform through the structural adjustment policies of these lending institutions, such as the devaluation of currency and enforced transition into export-based economies. This economic re-structuring has either required or caused a significant erosion of social service infrastructure, including health care, and has particularly impacted many of the most vulnerable populations. When expanding the parameters of context, the use of placebo-controlled trials in developing countries also fails to meet the principle of justice. This principle ensures that those who bear the burden of research risk will ultimately receive the benefits of the research. To assess the principle of justice within a broader understanding of the HIV epidemic, we must determine whether the research subjects would indeed receive adequate benefits for their participation. At the beginning of the debate, Annas and Grodin challenged the notion that an affordable treatment would ever be operational given the exceedingly low health care resources available to developing countries. In terms of weighing the research subjects' contributions to the trials against their benefits, which ten years later seem partial at best, an expanded context forces us to look at all the potential beneficiaries of this research. It has been argued that advances in more effective and cheaper methods of preventing vertical transmission are as likely to be implemented in developed countries as in developing countries. The 'standard of care' debate has continued since the MTCT prevention trials and has prompted the inclusion of paragraph 29 in the Declaration of Helsinki: the benefits, risks, burdens, and effectiveness of a new method should be tested against those of the best current prophylactic, diagnostic, and therapeutic method.
This does not exclude the use of placebo, or no treatment, in studies where no proven prophylactic, diagnostic or therapeutic method exists. This attempt to create more stringent standards for the use of placebo-controlled trials, regardless of the contextual standard of care, has been attacked by the international community as \"out of touch with contemporary thinking\". One alternative standard that has been proposed states: Wherever appropriate, participants in the control group should be offered a universal standard of care for the disease being studied. Where it is not appropriate to offer a universal standard of care, the minimum standard of care that should be offered to the control group is the best intervention available for that disease as part of the national public health system. What determines the appropriateness of offering a universal standard of care may be scientific criteria or, as Schuklenk argues, may be erroneously conflated with economic criteria such as the low standard of care available in resource-poor settings. The arguments put forward to understand the ethical dilemma created by the short-course ARV trials for the prevention of MTCT of HIV should not be interpreted to mean that research should never be done in developing countries. Rather, developed nations need to honestly assess their role in such research, take responsibility for their actions, and abstain from the exploitation of ethical loopholes as provided by the contextual nature of bioethics. It is essential that we consider context in our ethical deliberations, but we must be critical of our definition of context. It would be tragic if we allowed ethical principles to be manipulated for the exploitation of vulnerable populations, the psychological comfort of the true beneficiaries, and the effacement of real differences between individuals and populations. To this end, we must always remember that the inclusion of context is a corrective to traditional ethics, not an invitation to exploitation.
As demonstrated above, the framework of the short-course ARV trials is fundamentally challenged when context is taken seriously. However, the current global situation does not engender optimism that this exploitative research constitutes an isolated incident. We must not shirk our own recognized ethical responsibilities \u2013 at the heart of research design we must situate the proper articulation of context, towards which I submit the above as a first step.

AIDS: Acquired Immune Deficiency Syndrome
ARV: Antiretroviral
AZT: Zidovudine
CDC: Centers for Disease Control and Prevention
HIV: Human Immunodeficiency Virus
NIH: National Institutes of Health
MTCT: Mother to child transmission"} +{"text": "Changing the mindset of road users in Africa will be a challenge, says the author, but many lives are at stake. Research into road safety in developing countries is scarce, especially in Africa. This is inconsistent with the size of the problem: it has been predicted that by 2020, road traffic injuries will rank as high as third among causes of disability-adjusted life years (DALYs) lost. In addition, developing countries already account for more than 85% of all road traffic deaths in the world. Because road traffic injuries have long been considered to be inevitable and caused by random, unpredictable events, the international community's response to this worldwide public health crisis came relatively late. The World Health Organization (WHO) arranged a consultation meeting in April 2001, which led to a report entitled \u201cA 5-year WHO strategy for road traffic injury prevention\u201d that summarises the main recommendations from the working group.
I reviewed published studies from Africa in the field of road traffic injuries to identify pending priorities for future research programmes that would enable the promotion of effective public health policies in road safety. The number of vehicles per inhabitant is still low in Africa: less than one licensed vehicle per 100 inhabitants in low-income Africa versus 60 in high-income countries. Fleet growth leads to increased road insecurity in developing countries. A comprehensive literature review published in 1997 showed that pedestrians accounted for between 41% and 75% of all road traffic deaths in developing countries. The severity of road traffic crashes is also likely to be much greater in Africa than anywhere else, because many vulnerable road users are involved, but also because of poor transport conditions such as lack of seat belts, overcrowding, and hazardous vehicle environments. Death/injury ratios are, however, not easy to compare because of the differential reporting bias for fatal and non-fatal injuries. The paucity of surveillance data from African countries leads to uncertainties, and probably to major under-estimates of the size of the problem. In Ghana, a study conducted in a rural area showed that most injured people are transported to hospitals staffed by general practitioners with no training in trauma care. Improvement in this area is not an easy task. Even a developed country like the United States needed several years to achieve effectiveness of its trauma care system. Research efforts enabling progress in primary and secondary prevention involve several fields: the human factor, including behaviours and driving capacities, vehicle conditions, and infrastructure. 
Although those topics are relevant to road safety all over the world, there are specificities in Africa that need to be taken into account. African public perception of the risks of road traffic injury must be understood in order to be able to adapt and apply prevention campaigns that have proved successful elsewhere. Because research in this field has never been a priority, almost nothing is known. Most available results are from South Africa and are concerned with superstition among cab drivers and with risk perception. The risk associated with drinking and driving has been thoroughly described in developed countries, but is certainly also a key determinant in developing countries, as shown in Kenya and Nigeria. Speed control probably carries the greatest potential to save lives. The key factor in the effectiveness of traffic regulations is the drivers' perception that they run a high risk of being detected and punished for infractions. Almost everything remains to be done in the field of medical driving incapacities. I found only two studies on the subject from Africa, related to visual impairment of drivers in Nigeria and South Africa. The impact of vehicle condition has almost never been scientifically assessed. Finally, road infrastructures are also a component of road safety and are often prioritised by governments and funding agencies. However, scientific evaluation studies are missing. As recently advocated by Khayesi and Peden, road safety in Africa deserves far greater research attention. In order to gauge the magnitude of past research efforts in road traffic injuries from Africa, I performed an automatic search in the PubMed database with the MeSH term \u201caccidents, traffic\u201d and found a total of 25,320 references. Of these, only 290 were selected by adding the MeSH term \u201cAfrica\u201d. For the purpose of comparison, I performed the same search with MeSH term \u201cHIV\u201d and found 193,695 references, of which 12,674 related to Africa. 
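The scale of the research gap can be made concrete with a small arithmetic sketch. Using only the PubMed counts reported above (this is an illustrative recomputation, not a reproduction of the original search), the Africa-related share of each literature can be compared:

```python
# Reported PubMed counts (taken from the text): total references
# per MeSH term, and the subset retrieved after adding "Africa".
searches = {
    "accidents, traffic": {"total": 25_320, "africa": 290},
    "HIV": {"total": 193_695, "africa": 12_674},
}

for term, counts in searches.items():
    share = counts["africa"] / counts["total"] * 100
    print(f"{term}: {counts['africa']}/{counts['total']} = {share:.1f}% Africa-related")
```

Roughly 1.1% of the road-traffic literature concerns Africa, versus about 6.5% for HIV, despite Africa bearing a disproportionate share of both burdens.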
An Ad Hoc Committee on health research was convened in 1996 by WHO and provided estimates of research and development expenditures for major global health problems using a capture\u2013recapture method. Many results on road injury prevention are available from developed countries. We now urgently need to scale up surveillance and research efforts in developing countries in order to determine how to build on these results. In Africa, driving a car is still considered a privilege, an enviable option, not a risky task with inherent responsibilities. Unfortunately, Africa has other burning public health priorities. Documented success stories in road safety are needed to demonstrate that road traffic accidents need not be inevitable and unpredictable, but are avoidable. Changing the mindset of road users will be a challenge, but many lives are at stake. Alternate Language Article S1 (201 KB DOC)."} +{"text": "The consequence of the low rate of penetrating injuries in Europe and the increase in non-operative management of blunt trauma is a decrease in surgeons' confidence in managing traumatic injuries, which has led to the need for new didactic tools. The aim of this retrospective study was to present the Corso di Chirurgia del Politrauma (Trauma Surgery Course), developed as a model for teaching operative trauma techniques, and to assess its efficacy. The two-day course consisted of theoretical lectures and practical experience on large-sized swine. Data of the first 126 participants were collected and analyzed. All of the 126 general surgeons who had participated in the course judged it to be an efficient model for improving knowledge about the surgical treatment of trauma. A two-day course, focusing on trauma surgery, with lectures and life-like operation situations, represents a model for simulated training and can be useful to improve surgeons' confidence in managing trauma patients. 
Cooperation between organizers of similar initiatives would be beneficial and could lead to standardizing and improving such courses. The treatment of thoraco-abdominal trauma has always represented a surgical challenge, owing to the peculiarity of these injuries. The need for specific training for surgeons involved in the care of these patients is justified by the difficulties in obtaining an exhaustive pre-operative assessment, the need for prompt decision-making, and the often limited available resources. Furthermore, the number of surgically treated trauma patients has markedly decreased in recent years, owing to many factors, such as the low rate of penetrating trauma, the improved safety systems of vehicles, innovations in diagnostic tools, and the discovery of alternative treatments. A variety of didactic methods, based on \"simulated training\", have been suggested. The use of in vivo animal models, generally large-sized swine, which simulate the human thorax and abdomen quite well, enables extremely realistic situations to be recreated, even to the point of putting the participants under stress. In Italy the general surgeon is trained by a six-year residency in General Surgery or in General and Emergency Surgery. However, there are no specific residency or university courses for Trauma Surgery. Managing trauma differs greatly from region to region; in most regions there is a lack of reference centers for thoracic and abdominal traumas, which are treated by the general surgeon of the nearest hospital, whereas specific traumas, such as neurosurgical, orthopedic and burn injuries, are treated by specialized surgeons. In our region (Emilia-Romagna), by contrast, there are three reference trauma centers. 
These centers are equipped with all the necessary resources to treat all kinds of trauma. Since November 2002, a multi-trauma surgery-training course has been running in Bologna, Italy. It is mainly aimed at general surgeons, who, owing to their work, most frequently encounter trauma injuries, and at residents, who currently have less chance to gain experience in trauma surgery. The aim of this study was to present this course and discuss its purposes and educational effectiveness compared with similar courses. We also wanted to assess whether our model could be used as a qualified updating course. We present herein the results obtained in the first seven editions of the course. Our project is based on the experience acquired in thoracic and abdominal injuries at Maggiore Hospital of Bologna, Italy, over the last 16 years. This hospital has been a reference centre for trauma management for several years, with over 400 cases of major (ISS>25) trauma per year. Cooperation has been set up between our multidisciplinary team of Emergency Surgery and Trauma and the Open Group for the Study of Trauma (Gruppo Aperto per lo Studio del Trauma \u2013 GAST) belonging to the Clinical Emergency Surgery and Emergency Unit at La Sapienza University in Rome, Italy. The Trauma Surgery Course was conceived to share the experience of these two groups with other surgeons involved in trauma patient management. The course is mainly aimed at thoracic and abdominal trauma, as this usually involves the general surgeon, while specific topics, such as neurosurgical or orthopedic aspects, are addressed in lessons on \"Multispecialized approach in E.R.\" and \"Complex pelvic trauma\". Twelve out of 14 physicians are ATLS qualified, and 8 of them are ATLS instructors. The course is restricted to 18 participants, selected on a first come, first served basis, owing to the difficulty of equipping the veterinary operating rooms and enabling each participant to practice the surgical techniques. 
The course lasts two days. Day one focuses on theoretical aspects, analyzing the main topics of diagnosis and treatment; day two consists of surgical exercises on large-sized swine. All the participants are invited to perform the common surgical procedures (see Table) used in trauma surgery. The importance of the effectiveness and speed of intervention is frequently stressed. The animals are alive, intubated and monitored under the care of a veterinary anesthetist for the duration of the intervention. To better simulate a life-like situation, each table of the operating theatre is equipped and served by a qualified surgical nurse. For each animal there are 3 participants, distributed according to their capability and experience, and 2 tutors (1 for the abdomen and 1 for the thorax). The animals are treated in accordance with the Italian law on the use of laboratory animals. The comprehensive evaluation of the degree of learning takes into account technical skills, the ability to understand the clinical aspects, identify priorities, and repair the induced lesions in a life-like situation. The participants are evaluated by the tutors of the practical part of the course and rated: insufficient, sufficient, good, or excellent. Three different scores are attributed for abdominal surgical techniques, thoracic surgical techniques, and emergency surgery techniques, such as damage control and emergency thoracotomy. Twenty CME credits are awarded by the National Committee for Continuous Training. 
All the participants receive a form to fill in to evaluate the training course in terms of relevance of the topics, quality of teaching, and effectiveness in providing continuing education. The course is held twice a year and is standardized and fully reproducible. At the end of the course each participant is issued with a certificate. In the first 7 courses, held from 2002 to 2005, 126 participants attended from 19 regions, uniformly distributed among northern, central and southern Italy. The mean age of participants was 43.9 years (range 30\u201360); ten were women. 124 were general surgeons or worked for humanitarian organizations, and 2 were fifth-year residents. The mean work experience after residency was 16 years (range 3 months \u2013 33 years). Nine (7.1%) participants were heads of surgery units. All the participants attended the theoretical lessons and had the opportunity to practice the programmed surgical techniques, starting from the abdomen and ending with the thorax and heart. Since the participants were grouped based on their capability and experience, during the practical stage they were evaluated accordingly. None of the participants received an \"insufficient\" score, and the majority of them rated good or excellent in all three fields. 54 (42.8%) participants defined the course as \"highly relevant\", 70 (55.5%) \"relevant\", and 2 (1.5%) \"quite relevant\". None of the participants defined the course as \"slightly\" or \"not relevant\". The quality of the teaching on the course was considered by the participants as \"excellent\", \"good\", and \"satisfactory\" in 81, 44, and 1 cases, respectively. 
In no instance was the quality of teaching considered \"mediocre\" or \"insufficient\". Finally, with respect to the evaluation of the efficacy of the course in providing continuing education, the answers of participants were \"highly efficacious\", \"efficacious\", and \"moderately efficacious\" in 77, 36, and 13 cases, respectively, whereas none of the participants defined it \"partially efficacious\" or \"totally inefficacious\". The need to train surgeons to apply emergency surgery techniques that are not commonly performed, and to use new instrumentation, has led to the spread of training programs based on life-like simulations using instrumental or animal models. The problem of specific training for the trauma surgeon has been addressed in past years by organizing courses with surgical simulation. The International Association for Trauma Surgery and Intensive Care organized a two-day course on Definitive Surgical Trauma Care, including both theoretical and practical training on cadavers and live animals. Though we started from a different background, our proposal is very similar. The structure of our course is original but comparable to the few courses held in the USA and in other non-European countries. The \"main topics\" and the life-like situations for the evaluation of practical skills are essentially the same. However, the peculiarity of our course is that we pay more attention to the multidisciplinary approach as well as to diagnostic and resuscitation problems. Another peculiarity of our course is the presentation of some lectures by experienced nurses, with the aim of supporting the concept that the traumatized patient is a very complex one, and that only a multidisciplinary approach can produce the best outcome. We obtained very encouraging results from the first courses, owing to the high degree of attention paid during teaching sessions and the participation in the discussion of clinical cases. 
Therefore, although the parameters were not easily quantifiable, all the participants demonstrated, with varying degrees of skill, that they could successfully manage \"unfamiliar surgical situations\". Moreover, the participants judged the course to be very useful for their own training: more precisely, 98.3% rated the course favorably with regard to the need for personal updating, 99.1% for the quality of the teaching, and 89.6% for the efficacy of the course for personal training. To further improve the theoretical aspects of the course, we currently mail some of the lectures before the course begins, with the aim of giving the participants pre-course preparation. On this basis, we can reasonably assume that the course was successful, due both to the peculiarity of the topic and to the involvement in the practical section of the course. The fact that participants came from all the Italian regions indicates that the need for CME for surgeons involved in the management of such injuries is felt in many centers. Of note, all but two of the participants had vast previous working experience. This can be explained by the fact that the participation fee (\u20ac 1200 in 2005) is quite high and more likely to be afforded by senior surgeons; however, this cost only covers the overheads of the course. This highlights the difficulty of participation for post-graduate residents who are still in training, unlike the American course, which attracts not only attending surgeons but also fellows and residents. Ideally, a course should provide theoretical education and practical training of participants, by obtaining their direct involvement and thus responding to their needs and expectations. With regard to the continuing education program, a future improvement of the course might be achieved by organizing workshops on particular clinical cases or on particular implications in the treatment of polytrauma. 
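As a quick consistency check, the satisfaction percentages quoted above can be recomputed from the raw rating counts reported earlier (126 participants; counts as given in the text). The recomputed values come out marginally higher than the quoted 98.3%, 99.1% and 89.6%, presumably because of rounding in the original; the sketch below simply shows the arithmetic:

```python
# Rating counts reported in the text (126 participants overall);
# "favourable" pools the top two categories for each question.
N = 126
favourable = {
    "relevance (highly relevant + relevant)": 54 + 70,
    "teaching quality (excellent + good)": 81 + 44,
    "efficacy (highly efficacious + efficacious)": 77 + 36,
}

for label, count in favourable.items():
    print(f"{label}: {count}/{N} = {count / N * 100:.1f}%")
```

This yields 124/126 = 98.4%, 125/126 = 99.2% and 113/126 = 89.7%, each within 0.1 percentage point of the figures quoted in the text.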
We are currently preparing a questionnaire to distribute to ex-participants in order to verify the course's impact on their day-to-day work. The problem of training trauma surgeons, i.e. the lack of experience in treating traumas in the operating room, is common to all countries. Since we are aware that most participants of our courses work in hospitals where the management of trauma patients is not common, we have introduced sections focusing on diagnosis, the emergency approach, and nursing management in the operating room and in the ward. This represents the peculiarity of our course compared with others. Finally, considering that most training programs have a common basis and teaching method, we think that cooperation among the teaching staffs that organize similar courses would be useful to ensure a uniform standard course with a single tested method with regard to evaluation of the participants, choice and assessment of the teaching staff, and planning of updates. This could lead to obtaining official approval, as already occurs for the ATLS, and, therefore, provide all participants with a similar background. In an age of advanced technology in distance learning with telematic and computer methods, we believe that a course on the management of trauma, designed to create extremely realistic conditions of stress in the operating room in life-like situations, is very useful to train the trauma surgeon. Besides the efficacy of the course, many other aspects should be discussed in the training institutions. For instance, if these courses are organized especially for surgeons who only sporadically encounter traumatic injuries, should they become a mandatory part of training the general surgeon? If the main goal is to enable all surgeons to deal with traumas in the best possible way, how can we motivate students-in-training, who can hardly bear the expenses? 
On the basis of the results obtained previously in our courses and those of similar education programs, we can conclude that a theoretical and practical course, such as the \"Trauma Surgery Course\", is a good updating tool on trauma pathology for surgeons who work in Hospital Emergency Surgery Units or those belonging to Humanitarian Organizations, who are used to dealing with this pathology in foreign countries. Skills in trauma surgery should be an integral part of the surgeon's training and it would be important to gain the support of Scientific Societies to carry out these courses. Integration with other courses would lead to a wider diffusion and recognition of the teaching method."} +{"text": "Planning for the next pandemic influenza outbreak is underway in hospitals across the world. The global SARS experience has taught us that ethical frameworks to guide decision-making may help to reduce collateral damage and increase trust and solidarity within and between health care organisations. Good pandemic planning requires reflection on values because science alone cannot tell us how to prepare for a public health crisis.In this paper, we present an ethical framework for pandemic influenza planning. The ethical framework was developed with expertise from clinical, organisational and public health ethics and validated through a stakeholder engagement process. The ethical framework includes both substantive and procedural elements for ethical pandemic influenza planning. The incorporation of ethics into pandemic planning can be helped by senior hospital administrators sponsoring its use, by having stakeholders vet the framework, and by designing or identifying decision review processes. We discuss the merits and limits of an applied ethical framework for hospital decision-making, as well as the robustness of the framework.The need for reflection on the ethical issues raised by the spectre of a pandemic influenza outbreak is great. 
Our efforts to address the normative aspects of pandemic planning in hospitals have generated interest from other hospitals and from the governmental sector. The framework will require re-evaluation and refinement, and we hope that this paper will generate feedback on how to make it even more robust. As the world prepares for the emergence of a pandemic strain of influenza, trans-national, national and local organisations and agencies are designing plans to manage community outbreaks. In addition, the medical community is identifying scientific research priorities and needs related to the anticipated pandemic. Kotalik distinguishes the ethics of pandemic planning efforts from the ethics in pandemic planning. For example, he argues persuasively that it is problematic that all three countries' plans accept particular conditions of resource scarcity as planning assumptions, and that \"plans also presuppose certain ethical values, principles, norms, interests and preferences\". Take the example of triaging ventilated beds in an ICU. In theory, decision-makers rely on scientific evidence to determine how best to maximise benefit in the allocation of ventilated beds, but science cannot tell us whether or not the initial decision to maximise benefit is just. Because the notion of maximising benefit is derived from a reflection on values, ethical analysis is required to determine why a utilitarian approach to triage through maximisation of benefit is preferable to the assignment of ventilated beds on a different basis, for example that of greatest need. 
Even if the utilitarian maximisation of benefit is thought to be ethically sound, how to implement a system based on this criterion is not ethically straightforward, and requires ethical reflection about what counts as good stewardship, and about the moral obligation to demonstrate transparency, accountability, fairness and trustworthiness in the allocation of scarce resources. The importance of ethics to pandemic planning lies in \"the application of value judgements to science\". The use of ethical frameworks to guide decision-making may help to mitigate some of the unintended and unavoidable collateral damage from an influenza pandemic. As Kotalik argues, the incorporation of ethics into pandemic plans can help to make them \"instruments for building mutual trust and solidarity at such time that will likely present a major challenge to our societies\". One of the key lessons from the Toronto SARS experience was that health care institutions and their staff could benefit from the development of ethical frameworks for decision-making. In Ontario, the need for guidance on the ethical issues pertaining to an influenza pandemic has been widely acknowledged. As word of our work on an ethical framework for Sunnybrook and Women's College Health Science Centre (S & W) became known, we were invited to join other hospitals' pandemic planning efforts. There was also broader sectoral interest in ethics, and we were invited to join the Ontario Ministry of Health and Long Term Care's (MOHLTC) efforts to design a pandemic plan. Our working group was formed in response to the pandemic planning initiative that took place at S & W in early 2005. The hospital's Clinical Ethics Centre was invited to provide ethics support in this planning initiative. It soon became apparent that the scope of the issues went beyond the purview of clinical ethics to include organisational and public health ethics. 
Expertise in organisational and public health ethics was quickly procured through the University of Toronto Joint Centre for Bioethics, which is a partnership between the University and sixteen affiliated healthcare organizations that includes S & W among its partners. S & W was subsequently de-amalgamated into Sunnybrook Health Sciences Centre and Women's College Hospital; thus the ethical framework is currently being implemented at Sunnybrook HSC. As the framework took shape, we were invited to join the MOHLTC planning efforts. We began to work with the Vaccine and Antiviral working group at the MOHLTC, and we adapted our work to meet the related but distinct challenges facing government. While our work with the MOHLTC began with the Vaccine and Antiviral working group, the ethical framework we developed for the MOHLTC was eventually included in the Ontario Health Pandemic Influenza Plan. Expertise in clinical ethics was important to the development of this framework because of the knowledge, skills and experience clinical ethicists need to address dilemmas or challenges found in the daily clinical arena. An obvious challenge was how to integrate expertise in public health ethics into a framework designed to guide decision-making in clinical health care settings. A related challenge was to thoughtfully integrate generally accepted principles and values from clinical ethics with those in public health ethics. In order to meet this challenge, the authors turned not only to the respective ethics literature, but also to the SARS experiences of Toronto hospitals and health care providers. A review of the SARS literature, and that of public health ethics more generally, guided the integration of the public health and the clinical ethics perspectives. Not surprisingly, the literature on clinical ethics has little to say about disaster preparedness and how to make decisions about such things as triage under extraordinary circumstances. 
The ethics literature on bioterrorism and battle-field triage informed our thinking and called our attention to important issues such as the duty to care, reciprocity, equity and good stewardship. The ethical framework was vetted through S & W's Pandemic Planning Committee, the Joint Centre for Bioethics' Clinical Ethics Group, the MOHLTC Vaccine and Antiviral Working Group, and the MOHLTC pandemic planning committee. Through this process we refined the framework, and we are grateful to these groups for their valuable insights. The ethical framework is intended to inform decision-making, not replace it. It is intended to encourage reflection on important values, and discussion and review of ethical concerns arising from a public health crisis. It is intended also as a means to improve accountability for decision-making, and may require revision as feedback and circumstances require. The framework is divided into two distinct parts, and begins with the premise that planning decisions for a pandemic influenza outbreak ought to be 1) guided by ethical decision-making processes and 2) informed by ethical values. Ethical processes can help to improve accountability and, it is hoped, to the extent that it is possible for ethical processes to produce ethical outcomes, the substantive ethical quality of decisions will be enhanced. Recognising, however, that ethical processes do not guarantee ethical outcomes, we have identified ten key ethical values to guide decision-making that address the substantive ethical dimensions of decision-making in this context. In planning for and throughout a pandemic influenza crisis, difficult decisions will be made that are fraught with ethical challenges. 
Our framework around ethical processes is based upon the \"accountability for reasonableness\" model developed by Daniels & Sabin. The second part of the framework identifies ten key ethical values that should inform the pandemic influenza planning process and decision-making during an outbreak. These values are intended to provide guidance, and it is important to consider that more than one value may be relevant to a situation. Indeed, the hallmark of a challenging ethical decision is that one or more values are in tension and that there is no clear answer about which one to privilege in making the decision. When values are in tension with one another, the importance of having ethical decision-making processes is reinforced (see above). The framework takes as given the overall goals of pandemic planning, which generally include the minimization of morbidity, mortality, and societal disruption. Nevertheless, this is not to say that a procedural engagement about the overall goals of a pandemic response would not benefit from using the ethical framework to guide and shape debate. A description of the values that should guide decision-making can be found in the Table. The values identified in our ethical framework were based initially on previous research findings on ethics and SARS at the University of Toronto Joint Centre for Bioethics (JCB). This work was funded by a Canadian Institutes of Health Research grant from 2004 through 2006 and has led to several key publications on the ethical dimensions of SARS. Included in the framework are \"hot button\" ethical issues that we identified through our work with Toronto hospitals and the MOHLTC. 
These issues were as follows: a) targeting and prioritizing populations for vaccines and antivirals; b) Intensive Care Unit and hospital bed assignment; c) duty to care; d) human resources allocation and staffing; e) visiting restrictions; and f) communications and how reviews of decisions will be handled. These \"hot button\" issues are not intended to be exhaustive; rather, they serve to illustrate how the values in the ethical framework can be used to identify key ethical aspects of decision-making. Let us take the issue of targeting and prioritizing populations for vaccines and antivirals to illustrate how the values in the ethical framework can help guide decision-making. The values of solidarity and protecting the public from harm would require that priorities be set to maximize the capacity to help society ensure that the ill are cared for during a pandemic. Furthermore, proportionality would require that decision-makers consider who within the community is most vulnerable to the contagion as well as who is most likely to benefit from immunization. A well-informed public, conversant with the values in the ethical framework and aware of the expertise that informed the ranking of priorities for immunisation, would be consistent with the value of trust and the principle of transparency. Lastly, while knowing how to use the framework to inform decision-making is vital, there is more to ensuring that the framework will be used and useful. We have identified three necessary, if not exhaustive, elements for the successful integration of ethics into hospital pandemic planning processes. These elements are 1) sponsorship of the ethical framework by senior hospital administration; 2) vetting of the framework by key stakeholders; and 3) decision review processes. Whether or not an ethical framework is used to inform decision-making in a health care institution depends to a large extent on people in senior positions of an organisation seeing its relevance to the decision-making process. 
In part, this is dependent on how robust the framework is, but it also requires the willingness to frame (at least some) pandemic planning issues as normative in nature. Some may argue that the values in the framework are too stringent or impractical to implement under crisis conditions, especially those found in the Ethical Processes part of the framework (see Table). The senior administration at S & W (many of whom were part of the Pandemic Planning Committee) had previous experience with the accountability for reasonableness framework for decision-making; thus their pandemic influenza planning committee was already familiar with the Ethical Processes part of the framework, and they were receptive to the idea of being guided by an ethical framework. Senior administrators may also have been receptive to the ethical framework because, as they learned from SARS, organisations whose decision-making processes did not honour the values for ethical process during SARS have been dealing with a legacy of collateral damage to staff and patients in the form of distrust and low morale. In order to obtain support for, or \"buy-in\" to, an ethical framework, it is important that key stakeholders in an institution vet the framework. This requires careful consideration of who the key stakeholders are in an institution. This should include not only those with responsibility for decision-making, but also those who will be affected by decisions taken. The vetting process is not just intended to create \"buy-in\" but also to decrease the likelihood that interests and issues relevant to pandemic planning will be neglected or overlooked, thereby enhancing the moral legitimacy of the values in the framework. 
In addition, a process of stakeholder vetting increases the likelihood that the values instantiated in the framework resonate with the stakeholder community. It has been our experience that the values in the framework did resonate with the pandemic planners with whom we have shared this ethical framework. The primarily pragmatic justification for the selection of the values in the framework means that the framework is provisional, so it ought to be subject to revision in light of compelling argument, empirical evidence and further stakeholder feedback. It is important to note, however, that the iterative and inclusive process through which the values in the framework were deliberated amongst the various stakeholder groups lends them a form of discursive ethical legitimacy and helps to justify their inclusion in the ethical framework. We intend that the framework invite further dialogue about its legitimacy and its adequacy. We will return to this issue in the final section of this paper. Ideally, the vetting process would include people who can represent the interests of patients, families and volunteers who are part of the hospital's constituency. Although patient relations, human resources and occupational health representatives from S & W provided guidance and feedback in the development of the framework, direct input from patients and family representatives was not obtained. One limitation of our framework is that it has yet to be vetted by these important stakeholders. The importance of solidarity to the management of a public health crisis would also suggest that the public and other health care organisations be considered stakeholders in hospital pandemic planning. While it may not be pragmatic for hospitals to undertake broad public consultation and vetting processes for their pandemic plans in general, and their ethical frameworks in particular, solidarity and equity suggest that these broader stakeholder interests are relevant to pandemic planning.
Consequently, opportunities for broader ethical dialogue about pandemic planning need to be encouraged. We developed a template (Formal decision review process template. Unpublished; 2003) that aids organisations in identifying existing, and establishing new, mechanisms that can be used for the formal review of decisions. We believe decision review mechanisms are an essential part of ethical decision-making in a public health crisis, and are one way to put the values in the ethical framework into action. In order to ensure that the support of key stakeholders is maintained through an outbreak, there need to be effective communication mechanisms in place. An important aspect of responsive decision-making processes is ensuring that there are formal opportunities to revisit and revise decisions as new information emerges. As part of our ethical framework, we formulated a template for decision review processes, and we argue that to formalize this process is to increase its fairness and moral legitimacy. Indeed, there may be existing mechanisms which can handle these kinds of reviews. Formal mechanisms for reviewing decisions are needed in order to capture feedback from stakeholders on key decisions, and to resolve disputes and challenges. These processes are important for ensuring that decisions are the best possible under the circumstances, given changing information, and for engaging stakeholders constructively around the difficult decisions that must be made. Given the unpredictable nature of public health emergencies and the difficulty this poses for those in charge of planning and decision-making, it is reasonable to assume that decisions will be revised throughout a pandemic influenza crisis. Disputes or challenges may arise from the restrictions or requirements imposed on staff, patients and families during a pandemic influenza outbreak. Thus, decision review processes are essential.
Again, while some may argue that this is too stringent a measure for a time of crisis, we argue that reviews of decisions will be taking place regardless, and that formalizing them increases their fairness and moral legitimacy. \u2022 The elements necessary for the successful integration of ethics into hospital pandemic planning are: 1) sponsorship from senior hospital administration; 2) vetting by stakeholders; and 3) decision review processes. \u2022 An ethical framework is robust to the extent that pandemic influenza planning decisions are seen to be ethically legitimate by those affected by them. \u2022 In order to increase the robustness of pandemic planning in general, timely public debate about the ethical issues is essential. The author(s) declare that they have no competing interests. AT, KF, JG and RU contributed equally to the development of the ethical framework. AT drafted this manuscript and KF, JC and RU contributed equally to the revision of the manuscript. All authors read and approved the final manuscript. The pre-publication history for this paper can be accessed here:"}
+{"text": "South Africa is likely to be the first country in the world to host an adolescent HIV vaccine trial. Adolescents may be enrolled in late 2007. In the development and review of adolescent HIV vaccine trial protocols there are many complexities to consider, and much work to be done if these important trials are to become a reality. This article sets out essential requirements for the lawful conduct of adolescent research in South Africa, including compliance with consent requirements, child protection laws, and processes for the ethical and regulatory approval of research. This article outlines likely complexities for researchers and research ethics committees, including determining that trial interventions meet current risk standards for child research. Explicit recommendations are made for role-players in other jurisdictions who may also be planning such trials.
This article concludes with concrete steps for implementing these important trials in South Africa and other jurisdictions, including planning for consent processes; delineating privacy rights; compiling information necessary for ethics committees to assess risks to child participants; training trial site staff to recognize when disclosures trigger a mandatory reporting response; networking among relevant ethics committees; and lobbying the National Regulatory Authority for guidance. Adolescents have been involved in trials of vaccines to prevent sexually transmitted infections like Human Papilloma Virus (HPV) and Herpes Simplex Virus type 2 (HSV-2), both in the developed and the developing world. Because adolescents are severely affected by the HIV epidemic, South African researchers anticipate enrolling 16\u201318 year olds in a phase IIb proof-of-concept vaccine trial towards the end of 2007. These adolescents will be at high risk of HIV infection. Further, it is envisaged that 12\u201315 year old adolescents will be involved in phase I/II trials as early as 2008. The phase I/II studies will determine the safety, tolerability and preliminary immunogenicity, in pre-teens and young adolescents, of candidate vaccines that are currently being tested for preliminary efficacy in adults and older adolescents. The phase I/II studies will involve a small number of healthy adolescents, at low risk of acquiring HIV infection. In the development and review of adolescent HIV vaccine protocols, there are many legal complexities that need to be addressed. This article sets out complexities linked to consent requirements; special legal protections for children in need of care and protection; and procedural requirements for the approval of such research.
These complexities are not unique to South Africa, because in many jurisdictions where such trials may occur adolescent participants will have limited legal capacity, and the enrolment of adolescents must take account of local laws dealing with, for example, the age of lawful consent to sex and obligations on certain adults to report abuse. Furthermore, as in South Africa, very few developing countries have dedicated research laws. Therefore this article also discusses the implications of these legal complexities for role-players in other jurisdictions. We make a series of recommendations for additional work that needs to be done in order to realize the optimal involvement of adolescent participants. We refer to a \"child\" as a person under the age of 18. Phase I/II HIV vaccine trials with adolescents would aim to recruit a small number of healthy adolescents who are at low risk of acquiring HIV infection. Phase IIb trials will recruit adolescents at higher risk of HIV infection. The trials themselves will comprise a number of interventions, including a general physical examination and medical history-taking; assessment of HIV risk factors, including personal questions about sex and substance use; personalised risk reduction counselling; administration of an experimental HIV vaccine or placebo via injection; blood draws for laboratory safety and immunogenicity testing; and regular testing for HIV infection. Adolescents will be classified as \"low risk\" if they are not sexually active, defined as primary or secondary abstinence. Although many interventions in a phase I trial may not hold out the prospect of direct benefit for adolescent participants, there are interventions that may benefit participants, such as personalised risk reduction counselling.
Additionally, there may be associated benefits such as identification of medical conditions like hypertension and early referral to care, access to care for intercurrent illness or reproductive health, and referral for abuse. There are, however, conceptual difficulties in classifying whole protocols as either \"therapeutic\" or \"non-therapeutic\" research. Currently, South Africa does not have a comprehensive ethical-legal framework regulating research with children. The Constitution (s12(2)(c)) prohibits research without informed consent. For research to be lawful within any legal system it must comply with substantive and procedural requirements established in law and ethical guidelines. The nature of these obligations varies from system to system; however, most establish requirements relating to consent, ethical review and scientific validity. In South Africa, there are three key issues that must be taken into account when ensuring that adolescent research is lawful. These are elaborated on below: (i) consent requirements must be met; (ii) legal obligations in child protection laws must be complied with; and (iii) there must be compliance with requirements for ethical and regulatory review. In South Africa, for adolescent trials to be lawful, consent must be given by a participant with legal capacity to consent or, if the participant is not competent, by a person with the authority to consent on the participant's behalf. With regard to \"non-therapeutic\" research, some have argued for independent child consent if there is no risk at all. In terms of current South African law, there is no provision setting out when children may provide their own independent consent to research. However, children may consent independently to medical treatment from the age of 14.
However, under future law, in terms of s 71 of the National Health Act (NHA), new consent requirements will apply. The NHA also specifies that consent must also be obtained from minors if they are \"capable of understanding\". Another complexity is that, because adolescents will not be able to consent to research independently, complex privacy issues arise. Although the NHA does not specifically refer to a child/minor's right to privacy in research, a child does have a constitutional and common law right to privacy. In terms of this right, a person with an expectation of privacy is entitled to keep aspects of their life private, provided this expectation is regarded as reasonable by society. One example: if an adolescent tests positive for HIV during a trial, the adolescent may expect the researchers to keep such information confidential, and society may regard this as reasonable given that adolescents have the right to HIV testing and confidentiality from the age of 14 outside of a trial context. A second example: adolescents may have expectations of privacy regarding their sexual risk information. In this instance society may not regard this as reasonable, as parents are legally responsible for their children and are required to protect them from harm. Withholding information from parents regarding risks facing their children, such as experimentation with drugs or alcohol, may mean that a parent is unable to meet their legal obligations to protect the child. Therefore a parent may be entitled to risk information.
However, parents could be asked to waive their right to access such information if other safeguards are in place, like referral to counselling, and if the information does not involve significant risks or criminal activity such as sexual abuse. A further complexity is that there is evidence that adolescents, when compared to adults, are less likely to spontaneously consider risks and benefits. It should be established whether adolescents can consent independently to research or, if they cannot, which adults have the capacity to provide proxy consent, by examining research-specific legislation, legislation that establishes the age of majority, or legislation that provides adolescents with capacity to consent to specific acts such as medical treatment. Child care legislation may also describe the persons with legal authority to act on behalf of children. Ethical guidelines should also be consulted for advice. For South African trials, we recommend that HIV vaccine trial researchers anticipate a future change in the age of majority. Researchers could currently consider obtaining parental consent for all under-21s, and be prepared to submit protocol amendments for expedited approval to obtain parental consent for under-18s when the change in the law becomes effective. For trials requiring large numbers of adolescents to be enrolled, researchers must consider how they will assist primary care-givers looking after orphans with the complex legal process of transferring guardianship, to enable them to provide lawful consent to adolescent participation.
For other jurisdictions, we recommend that clarity be obtained on these questions. We recommend that South African researchers consider how they will establish when an adolescent has the necessary depth of understanding for the higher standard of competence required for consent. Privacy rights for sexual risk information and HIV status will have to be delineated, and both parents and adolescents will have to understand what information parents will and will not have access to. In South Africa this detailed work will hinge on the \"legitimate expectation\" of privacy test. In other jurisdictions, however, this analysis should also be done using relevant legal principles. In all settings considering trials, consent processes should be designed that are sensitive to the characteristics of adolescent decision-making. While group formats, like Vaccine Discussion Groups, may be effective for disseminating information about trials, cognizance should be taken of how peer influence may affect the evaluation of risk. It is likely that extended interpersonal contact with a knowledgeable trial site counsellor may effectively improve understanding. Where research does not hold out the prospect of direct benefit, the allowable risk level is a minor increase over the risks of daily life or routine medical and psychological tests (\"everyday risk\"), if justified by the risk-knowledge ratio. Where research does hold out the prospect of benefit, there is no explicit upper limit of risk; however, the risks must be justified by the benefit. It is likely that if HIV vaccine trials are reviewed by a number of RECs, they will disagree on how to apply the risk standards for non-beneficial research. In order for adolescent participation in HIV vaccine trials to be lawful in South Africa, current common law requirements must be met; that is, consent to such research must be in accordance with public policy.
Because the majority of recent guidelines now require it, we recommend that RECs become familiar with \"component analysis\" to demarcate interventions as beneficial or non-beneficial. It is a principle of international law that special legal protections ought to exist to protect persons during childhood, and South African law contains such protections. Child care legislation, in section 42, requires medical practitioners, amongst others, to report suspected ill-treatment, abuse or neglect of children to the Department of Social Development. Failure to report is a criminal offence. Additionally, the Family Violence Act states that any person who examines, treats, attends to, advises, instructs or cares for any child, and who suspects that the child has been ill-treated, must report this to a Commissioner of Child Welfare, a social worker or the police. The future Children's Act requires any person to identify children in need of care and protection and to refer these to a social worker. In terms of these laws, it is argued that HIV vaccine trial staff would have a legal duty to report abuse or ill-treatment disclosed by an adolescent in a trial. Due to the broad meaning of terms such as \"ill-treatment\", disclosures of rape and some cases of under-age sex would need to be reported to the appropriate authorities. We recommend that study staff be trained to recognise those disclosures that trigger a mandatory reporting response. Consent procedures should inform parents and adolescents about this limit to confidentiality. The protocol should spell out not only how formal legal requirements will be met but also broader ethical requirements to promote children's welfare, such as whether such information will be disclosed to parents. Furthermore, stakeholders in other jurisdictions will need to establish whether any special protections exist for children, such as the mandatory disclosure of HIV status, and whether any special obligations are placed on researchers regarding children in need of care or protection.
Such laws may be found either in criminal codes or in child-specific laws. To be lawful, research must be approved by the relevant authorities. The NHA (s73) sets out further requirements for the approval of research. In terms of current law, for all HIV vaccine trials, a permit must be obtained from the Executive Council of Genetically Modified Organisms for any research into a genetically modified organism such as an HIV vaccine. This is a body established within the Department of Agriculture. Phase I adolescent HIV vaccine trials may be classed as \"non-therapeutic\". When Section 71(3)(b)(iv) of the NHA becomes effective, such trials will require ministerial consent. Very specifically, in the South African setting, we recommend that researchers anticipate the public policy assessment that the Minister will have to undertake by framing their protocols in a way that assists the Minister, or delegated authority, to make a speedy determination. RECs in all jurisdictions planning such trials should be aware that public policy considerations are becoming increasingly important in the regulation of research and are being reflected in law. Therefore research-specific and health-specific laws should be consulted to establish whether there are specific limits on certain forms of research. Researchers who craft their protocols with thoughtful attention to ethical guidelines may meet most, if not all, of the legal requirements. Where the law is unclear, researchers should consult with their REC or get legal advice from a lawyer trained in research ethics and law. We recommend that RECs begin to network with each other to build consensus about adolescent trials, including the acceptability of trial interventions in terms of national risk standards.
Like regulatory authorities in all the jurisdictions planning adolescent HIV vaccine trials, the Medicines Control Council should be requested to articulate the data it will require to, firstly, allow adolescents into trials and, secondly, license an adolescent vaccine. HIV vaccine trials with adolescents will pose legal complexities for all jurisdictions in which they will take place. Complexities may stem from a lack of legal guidance, a lack of tools for using legal concepts, and some disharmony between ethical guidelines. The legal analysis suggests the following concrete steps: 1. Investigators must plan for the complex consent processes that will be required, including assessment of understanding. 2. Investigators must compile the information necessary for RECs to assess potential risks to child participants, to establish if these meet national risk standards for child research. 3. It must be determined whether adolescents have privacy rights to their sexual risk information and HIV status and, if so, whether these will be waived or not. 4. Trial site staff must be trained to recognize when adolescent disclosures (e.g. of abuse or neglect) will trigger a mandatory reporting response. 5. RECs that will review adolescent protocols should begin to network with one another to prepare for a coordinated response to similar research protocols. 6. The national regulatory authority should outline the data it will need to allow adolescent trials and to allow licensure of an adolescent vaccine. A journey of a thousand miles starts with a single step. Enrolling adolescents in HIV vaccine trials will pose legal complexities in all jurisdictions where they will occur, likely beginning with South Africa. Investigators and RECs will have to deal with i) consent requirements (e.g. who must consent? what can be consented to?); ii) obligations to protect children from abuse and maltreatment; and iii) procedural requirements for approval of the research.
Jurisdictions planning adolescent HIV vaccine trials, like South Africa, will have to consider a range of networking, tool development and training processes to ensure that sound adolescent trials are a reality. The author(s) declare that they have no competing interests. Ms Slack and Ms Strode conceived of the legal analysis and prepared the preliminary analysis. Dr Fleischer redrafted sections of the legal analysis and helped to draft the overall manuscript. Dr Gray prepared the introductory sections, made inputs to the legal analysis and helped to draft the overall manuscript. Ms Ranchod helped to prepare the analysis of the consent requirements and assisted with drafting the overall manuscript. All authors read and approved the final manuscript. The pre-publication history for this paper can be accessed here:"}
+{"text": "Mackie and colleagues performed over 100 interviews with managers and executives at 13 bioscience companies to learn about bioindustry ethics from their perspective. Ethical issues are a growing concern for companies, in the wake of a series of corporate governance scandals and the accompanying sharp decline in societal and investor trust in firms. Some companies have responded to these concerns by creating internal ethics programs. In the aerospace sector, for example, companies have focused these efforts on ensuring compliance with government regulations, while in the energy sector, ethics initiatives have concentrated on environmental issues and corporate social responsibility. Companies in pharmaceutical, biotech, and bioagricultural industries must not only comply with a wide array of government regulations and balance the profit motive with social responsibility, but must also deal with the complex array of ethical issues raised by doing business in the biosciences.
These complex issues include the production and sale of genetically modified foods; gene therapy experiments and embryonic stem cell research to produce new therapies; animal testing for pharmaceuticals; drug pricing at home and in developing countries; the potential misuse of personal genetic information; how to appropriately commercialize and profit from genetic and biological samples; and the creation of transgenic animals for drug production. Although a theoretical debate rages about whether bioethicists should consult to industry, little has been documented about how bioscience companies actually address these issues. Using the case study method, we performed over 100 interviews with managers and executives at 13 bioscience companies. As one external ethics advisor put it, such consultation allows companies \u201cto have non-core business discussions. To ask questions like: \u2018Is there anything wrong with this deal?\u2019 or \u2018How far should we go to be ethical?\u2019 We help them clarify why a certain activity is acceptable and why other choices are not.\u201d Some companies in our study are now putting weight on candidates' values, in addition to their past performance and technical expertise, when making hiring decisions. Six of the companies in this study include interview questions during the hiring process that aim to assess how the potential employee's values align with the ethical values of the company. For example, employees from both Millennium and Maxim explained that technical skills and experience are now combined with the candidate's behavioural and ethical fit when assessing the candidate's merits. A key driver of employee behaviour in any organization is the types of behaviours that are rewarded and promoted by upper management. One medium-sized and three large-sized companies that were interviewed have incorporated ethics into employee performance reviews. For example, a Merck interviewee explained that the intention when designing a performance system is not to create incentives that encourage employees to bend the rules.
The employee said: \u201cWe try not to put people in situations where they have to \u2018make a number\u2019 so that they won't be tempted to give a $10,000 research grant to a doctor just to make a sale and meet that number.\u201d The majority of the medium- and large-sized companies we interviewed have developed formal ethics education sessions on topics such as research ethics and informed consent. Several of these firms have also introduced less formal forums for ethics discussion, where employees can voice concerns and have questions answered. At Monsanto, these are called \u201ctown hall meetings.\u201d Millennium used popular film screenings\u2014e.g., Gattaca and Inherit the Wind\u2014facilitated by an outside ethics expert to draw out issues for ethics debates among employees. Millennium, Genzyme, and Merck have also implemented an Ethics Helpline that employees can call anonymously to get guidance about ethical issues. All of the companies we interviewed use techniques to reinforce ethics within the company, although these techniques tend to be more formally organized in the larger firms. Some try to remind employees of the importance of ethics by defining core values as part of the company's culture (such as Genzyme's \u201cPutting the Patient First\u201d approach), and some provide oral and visual reinforcements. Ethical guidelines in areas such as clinical trials and sales and marketing of pharmaceuticals were given during training and then reinforced with oral and visual reminders. For these techniques to be effective and to have an impact on the ethical conduct of employees, our interviewees explained that they need to be continually and consistently reinforced by management. Of the 13 companies, seven, spanning all sizes, have extended their ethics approach to their business partners\u2014to share the benefits created by these companies and/or to try to ensure that their partners also follow high ethical standards.
The primary bioindustry ethics mechanism used by Diversa, for example, is the benefit-sharing partnerships they have developed with countries that are involved with the collection of biological samples. Instead of secretly taking genetic material from these countries, referred to as \u201cbiopiracy,\u201d Diversa forms partnerships\u2014with, for example, a national park\u2014to collect and process samples. In return, the company provides its partner with some up-front funding and training, along with a royalty percentage on any discoveries that originate from the samples. Novo Nordisk has extended its Triple Bottom Line approach beyond the company to include its suppliers, who must fill out a social/environmental survey to assess whether they are following the same social and environmental norms to which Novo Nordisk subscribes. If a supplier is found to be violating some of these norms, Novo Nordisk will work with them to improve their standards. Companies of all sizes in our study (seven of the 13) are engaging with external stakeholders on ethical issues, although this seemed to become more of a necessity as firms became larger and higher-profile. These stakeholders include local communities, nongovernmental organizations, governments, interest groups, and consumers. One example is Novo Nordisk's invitations to animal welfare activist groups to tour its labs and to discuss potential solutions to their differences. Explained one Novo Nordisk VP: \u201cIt was successful because of the openness and because we weren't seeking consensus. What we were seeking was to understand each other and to look for areas of commonality\u2026 However, some companies think that the dialogue is sufficient. But it's not. It requires action and responsiveness.
There has to be a tangible outcome.\u201d Another example of listening to stakeholders and acting on stakeholder concerns is TGN Biotech's effort to engage citizens of a community in which the company planned to build a pig farm. They held an information night to educate the community about their science and to answer their questions. Interviewees explained that if the community had decided that it did not want the company to build the genetically modified pig farm in their community, the company was committed to finding another location. Some of the fear in society about new science and technology stems from a perception that companies develop their science and technology secretively and do not share negative results. The Vioxx incident with Merck, which occurred after our study, demonstrates the importance of transparency. According to our findings, this is one area where companies are presently struggling to find a balance between protecting important patent and research information and the need to be transparent in a manner that will meet public satisfaction. One mechanism to address this issue was highlighted by the Director of Clinical Reporting at Novo Nordisk, who reported that the company tries to publish academic papers on every study\u2014regardless of whether the study shows negative or positive results. A majority of the companies we studied were engaged in discussions with regulators and industry bodies to encourage the ethical adoption of new science and technology. Some of the smaller firms were working to devise the best method of regulation for an emerging science, as demonstrated by Interleukin and Sciona (nutrigenomics) and TGN Biotech (transgenesis to make therapeutic proteins).
Others were working with industry groups to encourage the use of high ethical standards in areas of genetic information privacy (as done by Affymetrix), animal testing (Pipeline Biotech), and human rights standards (Novo Nordisk). Philanthropic and drug donation programs are a way for companies to give back to, and engage with, society. The latter strategy tends to be limited to the larger firms that have reached profitability, while smaller firms donate employee time and expertise to address societal needs. Merck has created a nonprofit foundation that has invested hundreds of millions of dollars in public\u2013private partnerships to help build infrastructure and deliver needed drugs in Africa and South America to address HIV/AIDS, and for other health crises, such as river blindness. Another example is Novo Nordisk's World Diabetes Foundation, which supports partnerships and initiatives around the world that help build health infrastructure and health-care capacity in developing countries. Novo Nordisk works with local organizations and governments to learn what is needed from the developing country's perspective. From our interviews, we found that a few of these companies have methods for evaluating their approach to ethics and for reporting their ethics commitments to stakeholders. Our study originally intended to collect evaluations of each ethics mechanism in order to assess the effectiveness of different approaches. Unfortunately, we found that too few of the companies we interviewed are evaluating their approach to ethics for us to obtain concrete results. However, the following is a description of a few of the evaluation and reporting mechanisms that are starting to be used by some of the larger companies in our study. Merck now requires that every philanthropic initiative they invest in be subject to an evaluation process in order to assess whether it truly produced the benefit sought, both for the recipient and for the company.
Novo Nordisk has internal ethics auditors who rotate through departments and perform ethics assessments on how well employees are living up to their ethical mandates. These ethics auditors evaluate the department and help devise improvement strategies on an as-needed basis. Both Monsanto and Novo Nordisk have a mechanism in place that reports to the public on their initiatives. Monsanto's Pledge Progress Reports and Novo Nordisk's Sustainability Reports are meant to transparently describe the companies' stances on issues and their efforts to live up to their ethical promises. Our findings in the area of ethics evaluation demonstrate a need for future development and research. For bioscience companies that are more familiar with tangible and quantitative outcomes, it is challenging to devise a method to evaluate something as intangible as ethics. Employee surveys, public opinion polls, share price, and product acceptance levels were some of the measurement approaches suggested during our interviews. Although many of the companies studied are not evaluating the effectiveness of their ethics mechanisms, it was very clear that companies feel that evaluating their ethics approaches in order to learn from their successes and failures is a vital component of any bioindustry ethics initiative. The objective of this paper is to highlight specific mechanisms used by companies to address their ethical issues. However, we recognize that the views of senior management of bioscience companies are not the only relevant perspectives on these issues. We feel that one important next step would be to engage the opinions of other key players, such as nongovernmental organizations, governments, academics, and the general public. Another limitation of a study such as this is the risk of social desirability bias. This occurs when the research participant expresses a viewpoint that he or she thinks the interviewer wants to hear rather than what he or she truly believes.
Although management opinions were given in this research study, the mechanisms described in this article are not opinions but rather a description of mechanisms being used by the companies\u2014and, thus, they are less subject to bias. At each company, the descriptions of the mechanisms were given by more than one interviewee, and in most cases, we had documents supporting the fact that these mechanisms do occur as described. We recognize these limitations, but feel that because the people we interviewed are closest to the phenomenon, they represent a legitimate viewpoint and a highly logical entry point for empirical research into why and how bioscience companies address ethical issues. Our study uncovered five interrelated approaches, each with several mechanisms to address bioindustry ethics. Based on our findings, a company of any size can start with strong ethical leadership and seek external ethics expertise early on. Internal ethics mechanisms and external ethics engagement mechanisms are other approaches that a bioscience company of any size can implement. As demonstrated by the larger companies in our study, companies can also develop ethics evaluation and reporting mechanisms that aim to keep the company on track and encourage management to monitor the outcomes of their ethical decision making. The mechanisms reported in this article demonstrate ideas for ways in which management in the bioscience industry can begin to address the complex ethical issues facing their companies. Qualitative case study methods were used for this research. Data Collection: Data was collected over a two-year period, using a study design approved by the University of Toronto research ethics board. Our research team performed in-depth, open-ended interviews with managers and executives from 13 bioscience companies.
Media articles, press releases, and company documents were also analyzed to verify the data resulting from the interviews. Data Sources: Data was drawn from (1) interview notes, (2) observations from company visits, and (3) written documents (produced by the company and by other sources). Data Analysis: Case Studies. The three sources of data were analyzed for each company independently to produce 13 qualitative case studies describing ethical decision making in each company. These case studies were verified for accuracy and approved for publication by each firm. Cross-Case Comparison. To perform the comparison, the case studies and interview notes were coded on four themes: (1) What mechanisms are bioscience companies using to address their ethical issues? (2) How effective are their mechanisms? (3) Why have these bioscience companies decided to implement ethics mechanisms? (4) What ethical issues are these bioscience companies facing and addressing with the previously mentioned mechanisms? In qualitative research, this is known as axial coding. This coding process was performed first by one of our researchers and then verified for validity by other team members.
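The verification step described here was resolved by discussion rather than by a formal agreement statistic, but inter-coder reliability of this kind is often quantified with Cohen's kappa. The sketch below is illustrative only: the theme labels and codings are invented, not taken from the study, and the authors do not report computing such a statistic.

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa for two coders' labels over the same items."""
    assert len(a) == len(b) and len(a) > 0
    n = len(a)
    # observed agreement: fraction of items both coders labelled identically
    observed = sum(x == y for x, y in zip(a, b)) / n
    # chance agreement: product of each coder's marginal label frequencies
    ca, cb = Counter(a), Counter(b)
    expected = sum(ca[label] * cb[label] for label in set(a) | set(b)) / n**2
    return (observed - expected) / (1 - expected)

# Invented example: which of the four themes each excerpt was coded to
coder1 = ["T1", "T1", "T2", "T3", "T4", "T2", "T1", "T3"]
coder2 = ["T1", "T2", "T2", "T3", "T4", "T2", "T1", "T3"]
print(round(cohens_kappa(coder1, coder2), 2))
```

A kappa near 1 indicates agreement well beyond chance; discrepant items (here, the second excerpt) are exactly the ones a consensus discussion would target.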
Any discrepancies in results were discussed until consensus was reached. The results from Themes 1 and 2 are discussed in the text of this article, and Themes 3 and 4 are addressed in the accompanying boxes. The five approaches and their mechanisms were: ethical leadership (founder/CEO/management ethical leadership; an ethics department); external expertise (external ethics consultant; ethics advisory boards); internal ethics mechanisms (hiring practices focused on ethics; employee performance evaluations; ethics education and forums for ethics discussion; ethical reinforcement techniques); external ethics engagement (ethics mechanisms with partners and suppliers; transparent engagement mechanisms with stakeholders; transparency of science; influencing industry standards and regulations; strategic philanthropy); and ethics evaluation and reporting mechanisms. The reasons companies gave for implementing ethics mechanisms were: do the \u201cright thing\u201d; risk mitigation; public reputation; attract and keep the \u201cright\u201d employees; guidance in uncharted waters; and promote good science."} +{"text": "Novas discusses the implications of a new study in PLoS Medicine, in which executives and senior managers from bioscience firms were asked what their companies were doing to promote ethical behaviour. In the study, Jocelyn Mackie and colleagues describe the variety of mechanisms that bioscience firms have put in place to address the ethical issues confronting them. Corporations are increasingly expected to behave ethically and to make their operations as transparent as possible. This is especially the case for the bioscience industry. These firms deal with a commodity like no other: our health and well-being.
They also engage in research that manipulates the basic building blocks of life and challenges common understandings of the boundaries between humans and animals and of the treatment of disease versus the enhancement of healthy lives. Based on more than 100 interviews with executives and senior managers from 13 firms, the authors sought to find out what these professionals had to say about what their companies are doing to promote ethical behaviour. To date, there has been relatively little empirical research on this topic. As such, it is a timely piece. The authors draw upon a larger study, in which detailed case studies were developed for each firm, including the ethical challenges that the firms faced and the mechanisms used to address them. The firms selected for analysis were chosen because they were known to be developing innovative approaches to dealing with ethical issues. They were further selected to represent the diversity of this industrial sector and to account for variations in firm size and location. In the paper, the authors seek to draw comparatively upon the case studies they developed to highlight the range and variety of mechanisms adopted by firms to address ethical issues. Based on the analysis presented, firms of all sizes and in different market niches are using a variety of approaches to encourage ethical behaviour. First of all, executives are promoting ethics as part of a firm's core values.
There is also evidence of specialisation: larger firms are able to create dedicated departments, while smaller firms are incorporating ethics into the responsibilities of senior managers. The companies studied have further retooled their organisational structures with ethics in mind: ethics shaped their hiring and staff performance evaluations, employees in some firms were given ethics training, and visual and oral reminders were being used in the workplace to reinforce an organisation's commitment to its ethical values. In instances where internal expertise was lacking, external consultants were brought in or independent ethics advisory boards were created to provide guidance and advice. Regardless of size, most of the firms studied were engaging with a range of stakeholders\u2014whether it be reshaping their relationships with suppliers to maintain high ethical standards, consulting with a local community, inviting activists to visit laboratory facilities, or launching corporate philanthropy programmes in Africa. Lastly, firms are beginning to develop measures for evaluating and reporting their ethical behaviour. This study demonstrates that (at least for a selective range of firms in the bioscience industry) some corporations have started to believe that the types of relationships they have with patients, carers, families, physicians, activists, partners, suppliers, regulators, and the public are an essential element of corporate financial and social viability. Considering that some of the firms studied do not seem to have any products on the market, current and anticipated ethical concerns are shaping their corporate practices and bottom lines in the here and now.
This paper also provides evidence of how ethical decision making is not an entirely abstract philosophical exercise: ethics has to be embedded in a range of social practices and relationships that need to be continually cultivated and reinforced if they are to be effective. Lastly, it can be extrapolated that ethics is an asset that firms can trade upon. Firms are considering ethics as central not only to their research activities and the dissemination of their products to consumers, but also to the reputation and branding of the company itself. Of course, ethics, like any other asset, has multiple values. Ethics can be used by firms not only to shape their decisions, but also to aid corporate public relations campaigns. No doubt, the multifaceted character of corporate ethics programmes can lead to outbreaks of scepticism amongst bioethicists and the public. A question that was left unresolved by the authors was how effective these mechanisms have been at accomplishing their objectives. Whilst firms may not yet be evaluating the actual impact of these mechanisms, it is imperative for bioethicists studying these mechanisms to discuss their relative merits and disadvantages. Better yet would be an evaluation of how ethical these companies actually are. In a similar vein, we need to know more about how and to what extent the mechanisms adopted by firms have influenced their research, investment, or marketing decisions. Finding out about such influence will require further empirical analysis and a willingness on the part of executives in the bioscience industry to let researchers not only interview them but also observe what goes on in their boardrooms. This type of transparency would facilitate the investigation of what kinds of problems do and do not get defined as being ethical and the organisational processes that shape firm behaviour. Lastly, it is important to emphasise that firms, like individuals, do not live in isolation.
We need to pay greater attention to the broader political, economic, and social context that has encouraged firms to develop mechanisms to address ethical issues, as well as the role of industry organisations and professional associations in facilitating the uptake and dissemination of these mechanisms. It will be important to study bioscience firms as they start to incorporate ethics into their organisational practices and into the very products they develop through their research decisions. To date, this subject area has not been extensively explored by bioethicists. As more aspects of our health and illness are embraced by the bioscience industry, the ethical issues surrounding industry's actions will become an area that is ripe for analysis. A question that this paper opens up for further analysis and debate, given that firms are starting to integrate ethics into their organisational practices, is how and through what forms bioethicists should relate to corporations. At the present time, there seem to be two dominant forms. One form appeals to the values long cherished in academe: independence, critical scholarship, credibility, and integrity. Perhaps the lessons to be learned come from the bioscience industry itself. As firms have started to reformulate their organisational forms and modes of conduct in relation to changing socioeconomic circumstances, perhaps it is an opportune moment for bioethicists to rethink the subjects they choose to study and how bioethics engages with its various stakeholders.
Just as scientists have created new theories and techniques to investigate the phenomena of life, bioethicists need to develop new concepts and tools for proposing how we should individually and collectively relate to one another in a manner that is capable of dealing with the dilemmas that will be posed by the provision of health in the 21st century."} +{"text": "Developing a Web-based tool that involves the input, buy-in, and collaboration of multiple stakeholders and contractors is a complex process. Several elements facilitated the development of the Web-based Diabetes Indicators and Data Sources Internet Tool (DIDIT). The DIDIT is designed to enhance the ability of staff within the state-based Diabetes Prevention and Control Programs (DPCPs) and the Centers for Disease Control and Prevention (CDC) to perform diabetes surveillance. It contains information on 38 diabetes indicators and 12 national- and state-level data sources. Developing the DIDIT required one contractor to conduct research on content for diabetes indicators and data sources and another contractor to develop the Web-based application to house and manage the information. During 3 years, a work group composed of representatives from the DPCPs and the Division of Diabetes Translation (DDT) at the CDC guided the development process by 1) gathering information on and communicating the needs of users and their vision for the DIDIT, 2) reviewing and approving content, and 3) providing input into the design and system functions. Strong leadership and vision of the project lead, clear communication and collaboration among all team members, and a commitment from the management of the DDT were essential elements in developing and implementing the DIDIT. Expertise in diabetes surveillance and software development, enthusiasm, and dedication were also instrumental in developing the DIDIT. 
The Diabetes Indicators and Data Sources Internet Tool (DIDIT) is a Web-based resource designed to strengthen the capacity of the staff and partners of state-based Diabetes Prevention and Control Programs (DPCPs) and staff of the Centers for Disease Control and Prevention (CDC) to conduct diabetes surveillance and program evaluation. The tool contains detailed information on 38 diabetes indicators and their associated data sources. The content, design, and function of the DIDIT have been described elsewhere. This article describes the process used to develop the diabetes indicator tool. The DIDIT was developed in response to a request from the DPCPs for technical assistance in surveillance and program evaluation. In October 2001, an overview and vision of the DIDIT was presented to six focus groups at the annual meeting of DPCP directors. Three key themes emerged: 1) the DIDIT should be a Web-based application to allow for content updates and easy accessibility (in contrast to a CD-ROM); 2) the DIDIT should be a reference tool that promotes consistency and standardization of the data analysis required for diabetes surveillance; and 3) the tool's development should continue to be informed by DPCP representatives (the intended user group) to ensure that it meets the needs of program staff involved in diabetes surveillance. Input from these initial focus groups and subsequent feedback from the DPCPs served as the basis for developing the concept, content, and Web application for the DIDIT. Content development took place in two stages. During the first stage, the work group selected 10 of the originally identified 55 indicators to develop a prototype. The 10 indicators were as follows: 1) diabetes prevalence, 2) annual hemoglobin A1c test, 3) annual influenza vaccination, 4) pneumococcal vaccination, 5) level of diabetes education, 6) diabetes-related hospitalizations, 7) prevalence of end-stage renal disease, 8) hospitalization for lower extremity amputations, 9) physical inactivity, and 10) overweight.
Because members of the work group lived in different states, discussions were conducted through a series of telephone conferences and two in-person meetings. During the second stage, the work group selected an additional 28 indicators from the original 55 through a two-round modified Delphi process. Indicators were ranked in priority according to the following four criteria: 1) relationship to a national policy objective, such as the Healthy People 2010 objectives; 2) alignment with current practice guidelines, such as those from the American Diabetes Association; 3) responsiveness to efforts of the DPCPs; and 4) measurability through public data sources, particularly state-level data such as the BRFSS. All 10 indicators used to develop the prototype as well as all 28 selected during the second stage were retained, with a total of 38 indicators selected for inclusion. At this stage, the selection of fields to describe each indicator and data source was also finalized with input from the DIDIT work group. The typical reason for excluding an indicator was that no state-level data source could be identified to measure it. A list of indicators that were excluded and the rationale for excluding them can be found on the DIDIT (www.cdc.gov/diabetes/statistics/index.htm). NetMeeting allowed participants throughout the nation to view the DIDIT as it was being demonstrated at the CDC in Atlanta, Ga. Pilot testing is a critical and often overlooked component of the software life cycle, and there are important reasons for conducting it. Shortly after the release of the DIDIT, the project lead conducted a national training session for DPCPs and CDC staff using NetMeeting. A team of DDT professionals was then assigned the responsibility of providing ongoing user support and training for technical and functional aspects of the DIDIT.
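The two-round modified Delphi ranking described earlier lends itself to a simple aggregation script. The sketch below is purely illustrative: the indicator names, panelist ratings, and retention cutoff are invented, and the work group's actual scoring instrument is not described in this detail.

```python
from statistics import median

# Hypothetical round-two Delphi ratings (1 = low priority ... 5 = high priority),
# one list of panelist ratings per candidate indicator. Values are invented.
scores = {
    "diabetes prevalence": [5, 5, 4, 5],
    "annual dilated eye exam": [4, 3, 4, 4],
    "worksite wellness participation": [1, 2, 1, 1],  # no state-level data source
}

CUTOFF = 3.0  # illustrative retention threshold on the median rating

# Retain indicators whose median rating meets the cutoff, highest-rated first.
retained = sorted(
    (name for name, ratings in scores.items() if median(ratings) >= CUTOFF),
    key=lambda name: -median(scores[name]),
)
print(retained)
```

Using the median rather than the mean keeps a single outlying panelist from dominating the ranking, which is the usual rationale for Delphi-style aggregation.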
The project lead's responsibilities included providing support on questions and issues related to the content and application of the DIDIT in the context of DPCP programs. Several factors were critical to successfully developing and implementing the DIDIT. The factors have practical implications for other agencies that want to undertake a similar effort. The work group members had extensive knowledge and experience in diabetes surveillance and epidemiology, which proved essential in guiding the content and technical contractors during the development process. DIDIT team members were a motivated, dedicated, enthusiastic, and knowledgeable group of DPCP representatives and DDT staff. In addition, the knowledge and skills of the contractor were critical to researching and developing content on indicators and data sources. The project lead effectively solicited the interest and support of DDT management to ensure that financial and staff resources were available to develop the new tool. To sustain the interest and support of management, the project lead presented draft content and DIDIT prototypes at various CDC and national public health meetings throughout the development process. Development of a comprehensive reference tool such as the DIDIT requires a commitment of time and resources. The management of the DDT supported allocation of the resources and time needed to create the DIDIT. The DIDIT project lead had a clear vision of the type of tool that would fulfill the surveillance needs of the DPCP and the DDT. A strength of the project lead was her ability to communicate the vision of the DIDIT to the project team and stakeholders throughout the development process. Development of the DIDIT involved input from stakeholders across the country. Clear and ongoing communication among stakeholders was essential to the development process.
During the first in-person work group meeting, we learned that face-to-face interactions were highly appreciated by work group members and that these interactions helped build rapport among members. In-person meetings were arranged at national conferences to avoid issues of travel approval and costs. Timelines and other defined plans facilitated collaboration. A contractor who was skillful at organizing materials, facilitating meetings, motivating work group members, and responding to the needs of work group members was also essential. Because the work group volunteered its time to create the DIDIT, efforts were made to minimize the burden placed on its members. Minimizing this burden helped to maintain a core group of members who have actively participated for more than 3 years. Development of both content and Web application took place incrementally and iteratively. Members of the work group reviewed the content in phases, allowing the content contractor to apply feedback to subsequent phases. Similarly, because an incremental process was conducted that involved analysis, design, and implementation at the same time, the contractors were able to make a demonstration model of the DIDIT during the early phases of development, which facilitated refinements to its content and design. Working with an actual tool triggered ideas among users for additional functions and alternative designs that may have been overlooked at the prototyping stage.
A model also allowed us to obtain user input on database-driven features such as system searches. The ability to assess the status of the public's health in a timely, consistent, and accurate manner satisfies the first two of the 10 essential public health services as defined by the Institute of Medicine: 1) \"monitor health status to identify community health problems\" and 2) \"diagnose and investigate health problems and health hazards in the community\". The DIDIT represents an innovative approach to enhancing the capacity of state and federal agencies to perform public health surveillance. As one user has described, \"The DIDIT offers a one-stop shop that is available 24 hours a day.\" It empowers users by providing them easy access to information that has been reviewed by DIDIT work group members for accuracy and content. In addition to providing a road map for development, this article highlights components that were critical to the successful development of the DIDIT. These components synergistically influenced the development process. Having adequate time, expertise, and commitment of resources, for example, would not have been sufficient for success without the clear communication and rapport among the project team members or the buy-in and involvement of all stakeholders. Because these critical factors enhance one another, it is difficult to prioritize them. Other entities that wish to undertake a similar effort of systems development can use these requirements as guiding principles and customize them for their own needs and circumstances. A major benefit of sharing these elements is to prevent other agencies from having to \"reinvent the wheel\" when they can draw directly on the experiences of the DIDIT team.
While the technology is available to develop information technology solutions for addressing public health problems, it is vital to have effective processes and methods in place to successfully identify the needs of users and harness and customize appropriate technology to meet those needs."} +{"text": "Specialisation in spinal services has led to a low threshold for referral of cervical spine injuries from district general hospitals. We aim to assess the capability of a district general hospital in providing the halo vest device and the expertise available in applying the device for unstable cervical spine injuries prior to transfer to a referral centre. The study was a postal questionnaire survey of trauma consultants at district general hospitals without on-site spinal units in the United Kingdom. Seventy institutions were selected randomly from an electronic NHS directory. We posed seven questions on the local availability, expertise and training with halo vest application, and transferral policies in patients with spinal trauma. The response rate was 51/70 (73%). Nineteen of the hospitals (37%) did not stock the halo vest device. Also, one third of the participants were not confident in application of the halo vest device and resorted to transferring patients to referral centres without halo immobilization. The lack of equipment and expertise to apply the halo vest device for unstable cervical spine injuries is highlighted in this study. Training of all trauma surgeons in the application of the halo device would overcome this deficiency. In the United Kingdom (UK), most spinal trauma presents to district general hospitals where on-site spinal units are unavailable. Patients need to be transferred to tertiary care centres for definitive surgical management. Unstable cervical spine injuries require adequate immobilisation to prevent or limit neurological sequelae during transport.
Methods of immobilisation of the injured cervical spine include cervical orthotics, head cervical orthotics (Philadelphia collar and Miami-J collar), cervical traction, and halo-vest immobilisation. The halo vest is the most rigid of all cervical orthoses. Although halo application is an effective and relatively safe procedure, complications have been reported. We investigated the capability of UK district general hospitals regarding the familiarity and confidence of the orthopaedic staff in applying the halo vest traction device, the availability of the device, and the implications this may have for training and service delivery in light of the ongoing restructuring of spinal services towards tertiary spine centres. A survey was conducted at 70 UK district general hospitals with designated acute trauma admission status. Eligible centres were identified randomly using an electronic NHS directory. Hospitals with on-site spinal units were excluded. Individual orthopaedic trauma consultants were contacted by a postal questionnaire to assess the level of service provision with regard to halo vest application. The questionnaire was in a simple tick-box format and assessed whether the hospitals in which the consultants were employed stocked halo vest equipment routinely, their level of confidence to apply halo devices to adult and paediatric trauma patients, and whether they had received adequate training in application or had recent experience in halo vest application. In addition, participants were asked about referral protocols and problems encountered with referral of patients with cervical spine injuries to tertiary spine centres. Results are presented as absolute numbers and proportions together with 95% binomial exact confidence intervals (CI), where appropriate. Altogether, 51/70 consultants responded to the questionnaire, for a response rate of 73%. Nineteen (37%) of 51 district hospitals no longer routinely stocked emergency halo-vest equipment.
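The 95% binomial exact (Clopper-Pearson) intervals mentioned in the methods can be reproduced with a short script. The sketch below computes the interval for the 19/51 result by bisection on the binomial distribution function; the authors do not state which software they used, so this is only one way to obtain the same numbers.

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k + 1))

def clopper_pearson(k, n, alpha=0.05, tol=1e-9):
    """Exact (Clopper-Pearson) two-sided binomial confidence interval."""
    def solve(f):
        # bisection for the root of a function that is decreasing in p
        lo, hi = 0.0, 1.0
        while hi - lo > tol:
            mid = (lo + hi) / 2
            if f(mid) > 0:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2

    # lower bound: p where P(X >= k | p) = alpha/2; upper: P(X <= k | p) = alpha/2
    lower = 0.0 if k == 0 else solve(lambda p: alpha / 2 - (1 - binom_cdf(k - 1, n, p)))
    upper = 1.0 if k == n else solve(lambda p: binom_cdf(k, n, p) - alpha / 2)
    return lower, upper

# 19 of 51 hospitals did not stock the halo vest device
low, high = clopper_pearson(19, 51)
print(f"{19 / 51:.1%} (95% exact CI {low:.1%} to {high:.1%})")
```

The exact interval is wider than the usual normal-approximation interval, which matters for modest samples like 51 respondents.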
Just 33/51 of the consultants stated that they would feel confident to apply this device, even when available, both in adults and children, while the remainder did not feel confident, either because of inadequate training or lack of recent experience. Twenty consultants had not received adequate training in applying the halo vest device. Only fifteen had applied a halo vest in the past two years. Most surgeons had a low threshold for referring patients to tertiary spinal units despite the inherent risks associated with transfer of an unstable cervical injury with suboptimal immobilisation. This was despite the fact that one quarter of clinicians encountered referral difficulties such as inappropriate delays or problems obtaining specialist advice. Cervical spine injuries can have serious neurological consequences. Patients with these injuries require adequate immobilisation to prevent or limit neurological deterioration during transfer to tertiary spine centres and definitive surgical fixation. The key factor in immobilising the cervical spine is the rigidity of the applied device. Cervical and head-cervical orthoses still allow for variable motion of the cervical segments and therefore are not suitable in patients with unstable cervical spine injuries. Studies assessing the stabilising effects of different cervical orthoses showed the halo-vest device to be the most rigid. The treatment of unstable cervical spine injuries with the halo vest is an established procedure. The halo traction device was first devised by Perry and Nickel in 1959 to overcome problems encountered while using the Minerva plaster for treating unstable cervical spine fractures. The halo vest can be used for both intermediate and definitive treatment of cervical spine injuries, as well as for immobilisation after surgical fixation of cervical spine fractures. The halo ring is made of graphite or metal with pin fixation on the frontal and parieto-occipital areas of the skull.
Development of lightweight composite materials led to the design of radiolucent rings compatible with magnetic resonance imaging. Restriction of cervical motion depends on the fit of the halo vest, since an improper fit can allow 31% of normal spine motion. The vest is the weak link in terms of motion control, and compressive and distractive forces can occur with variable fit of the vest. The halo limits flexion and extension by 90% to 96%, lateral bending by 92% to 96%, and rotation by 98% to 99%. When compared to cervical traction using skull tongs, the halo-vest device keeps patients mobile and reduces respiratory problems. This is particularly advantageous in elderly patients, who have a higher incidence of upper cervical spine injuries. Despite its efficacy in immobilising the cervical spine, the halo vest device has its own problems. Complications such as pin loosening, pin site infection, discomfort at pin sites, dysphagia, prolonged bleeding at pin sites, and dural puncture have been reported in the literature. Although in the UK most spinal trauma cases present initially to district general hospitals, our study shows that one third of these hospitals do not stock the halo device. This would mean immobilisation of potentially unstable cervical spine injuries by other, less rigid cervical orthoses. When the halo device was available, only two thirds of the trauma surgeons were confident in applying one. Previously, this would have been considered a prerequisite trauma skill for practicing orthopaedic surgeons in hospitals providing acute services. There now appears to be a wide variation in the provision of this essential service throughout the UK, with a high proportion of trauma units having neither the resources nor the clinical expertise to manage these injuries.
As the management of spinal trauma becomes more specialised, this is likely to affect service delivery and training, and it has important safety implications. One limitation of our study is that, although the sample population was selected randomly, it may still not be representative of all district general hospitals in the UK. Also, the overall sample size and response rate may further limit firm conclusions. Finally, we did not collect data on the demographic and professional backgrounds of the respondents and their institutions. Despite these limitations, our study has created an awareness of the existing level of application skill and availability of the halo vest traction device. No comparable study is available in the literature, and it may be of interest to perform similar surveys in other countries. We recommend training all trauma surgeons in the indications, technique of application, and possible complications of the halo vest device. Specialisation of spinal services has serious implications for the initial management of cervical spine trauma in district general hospitals without on-site spinal units. The lack of equipment and expertise to apply the halo vest device for unstable cervical spine injuries in this setting is highlighted. We recommend training all trauma surgeons in the application of the halo vest device and making the device available for use. The author(s) declare that they have no competing interests. UR was involved in reviewing the literature, drafting the manuscript and proofreading the manuscript. JCS was involved in collecting data, reviewing the literature, drafting the manuscript and proofreading the manuscript. AS is the senior author and was responsible for the final proofreading of the article. All authors have read and approved the final manuscript."} +{"text": "The European and Developing Countries Clinical Trials Partnership (EDCTP) was founded in 2003 by the European Parliament and Council.
It is a partnership of 14 European Union (EU) member states, Norway, Switzerland, and Developing Countries, formed to fund the acceleration of new clinical trial interventions to fight human immunodeficiency virus and acquired immune deficiency syndrome (HIV/AIDS), malaria and tuberculosis (TB) in the sub-Saharan African region. EDCTP seeks to be synergistic with other funding bodies supporting research on these diseases.

EDCTP promotes collaborative research supported by multiple funding agencies and harnesses networking expertise across different African and European countries. EDCTP is different from other similar initiatives. The organisation of EDCTP blends important aspects of partnership, including ownership and sustainability, and responds to demand-driven research. The Developing Countries Coordinating Committee (DCCC), a team of independent scientists and representatives of regional health bodies from sub-Saharan Africa, provides advice to the partnership. Thus EDCTP reflects a true partnership, and the active involvement and contribution of these African scientists ensures joint ownership of the EDCTP programme with European counterparts.

The major achievements of the EDCTP initiative since its formation in 2003 have been: i) an increase in the number of participating African countries from two to 26 in 2008; ii) cumulative spending on EDCTP projects reaching €150 m; iii) a cumulative total of 40 approved clinical trials; and iv) a significant increase in the number and diversity of capacity building activities.

While we recognise that EDCTP faced enormous challenges in its first few years of existence, the strong involvement of African scientists and new initiatives such as unconditional funding to regional networks of excellence in sub-Saharan Africa are envisaged to lead to a sustainable programme. Current data show that the number of projects supported by EDCTP is increasing.
DCCC proposes that this success story of true partnership should be used as a model by partners involved in the fight against other infectious diseases of public health importance in the region.

Tuberculosis, human immunodeficiency virus (HIV) and malaria cross paths in sub-Saharan Africa, the epicentre of the three infections. Although HIV/AIDS, tuberculosis (TB) and malaria are three treatable and preventable diseases, they are having a devastating impact in the world's poorest countries. Sub-Saharan Africa has just over 10% of the world's population but accounts for 90% of malaria deaths, two-thirds of all people living with HIV, and nearly one-third of all TB cases.

Global resources devoted to fighting the three diseases have been rapidly scaled up in recent years by various initiatives or programs, including the World Health Organization (WHO), Tropical Diseases Research (TDR), the Foundation for Innovative New Diagnostics (FIND), the US Centers for Disease Control and Prevention (CDC), the National Institutes of Health (NIH), the European Union (EU), the Wellcome Trust (WT) and the Multilateral Initiative on Malaria in Africa (MIM), to mention a few. These initiatives have achieved significant results in the areas of capacity building, networking, and development of new tools for intervention. One such initiative was the formation in 2003 of the European and Developing Countries Clinical Trials Partnership (EDCTP), a partnership of 14 European Union (EU) member states, Norway and Developing Countries, with a particular focus on sub-Saharan African states. EDCTP was formed to develop new clinical trial interventions to fight human immunodeficiency virus/acquired immune deficiency syndrome (HIV/AIDS), malaria and tuberculosis (TB) in the sub-Saharan region.
Switzerland joined the partnership in 2006. EDCTP has sought to be synergistic with other funding bodies supporting research on these three diseases and has promoted collaborative research involving support from multiple funding agencies, harnessing networking expertise across different African and European countries.

The activities of EDCTP fall into seven categories, namely: North-North/North-South networking; South-South (intra sub-Saharan) networking; support to clinical trials; support to research capacity building; advocacy and fund raising; management; and information management.

The notion of establishing and strengthening research capacity in developing countries adopted by EDCTP is of prime importance in empowering these countries to find rational and efficient solutions to their health problems through scientific research. It is also known that institutional links with other researchers in the north and south greatly facilitate the process of research strengthening through graduate study programs, technology transfer, 'hands-on' research training in the field, expanded networking with contacts of other partners and continued scientific exchanges in the context of actual research programs.

There is a need to explain what makes EDCTP different from other similar initiatives. We report that in the short history of EDCTP a model of a true partnership has emerged.

The governance of EDCTP has two pillars, namely the "Partnership" and the "European Economic Interest Group" (EEIG). The "Partnership" comprises the Partnership Board (PB), the main strategic body; the network of European National Programs (ENNP); and the Developing Countries Coordinating Committee (DCCC). The EEIG provides the legal, financial and operational procedures to receive, dispense and account for funds and to execute actions recommended by the PB.
The DCCC is a committee of African scientists and representatives of regional health organisations from sub-Saharan Africa that focuses on clinical trials in the three major diseases. The importance of the DCCC is reflected in the true value of the independent scientists from sub-Saharan Africa who advise the partnership in several aspects of its work, and it is an indication of the African contribution. The DCCC shapes the research agenda according to needs and gaps in the African region. Additional African contribution comes in the form of study sites that have research infrastructure, personnel and leadership.

In line with Article 169 of the European Treaty, the success of EDCTP depends on a solid and sustained collaboration of European national programmes to provide and complement the required funding. The active involvement of African scientists and the African contribution explained above are also signs of joint ownership of the EDCTP programme. Although data on financial contributions from African governments, which mostly contribute staff and infrastructure and facilitate participation of consenting study subjects in EDCTP-funded projects, have yet to be defined, active participation by many African scientists, either as DCCC members or grantees, is a sign that the EDCTP programme has been embraced in sub-Saharan Africa. The European national programmes and the study sites in Africa are dependent on each other for the success of the programme, ensuring joint ownership.

The EDCTP capacity building strategy in Africa is unique among others in addressing the 10/90 gap.

We recognise that the first few years of EDCTP's existence were full of challenges expected of any newly formed organisation. One of these challenges manifested between 2003 and 2005, when there were rapid changes in the management of EDCTP, resulting in comments that the management structure of the organization needed to be radically changed and that partnership with other organizations needed to be improved.
Combating the three poverty-related diseases on which EDCTP focuses is a global emergency, and the world needs quick solutions. Inevitably, since its formation, EDCTP has been under pressure to contribute to finding these solutions. Some authors had predicted that EDCTP would help to overcome the bottleneck of demonstrating proof of principle for promising vaccine or drug candidates by testing them in early efficacy trials in endemic areas, particularly in sub-Saharan Africa. Some key performance indicators that are transparent monitoring tools have been used to show significant progress; the accompanying figure illustrates this.

Since 2005 there have been substantial improvements in the management and awarding of EDCTP grants. By the end of 2007 EDCTP had funded 74 projects in 29 sub-Saharan African countries. A summary of these projects is available in EDCTP annual reports. Grants to networking projects and clinical trials have resulted in the formation of new networks and support to on-going ones, as illustrated in the accompanying figure. After 2006 the DCCC and other EDCTP constituencies have insisted on demonstration of African leadership and ownership in the EDCTP programme. Capacity building programmes have been strongly supported, as shown in the accompanying figure. We have thus far argued that EDCTP has a good future as long as it receives strong support from both the African and European member states in the partnership.
As the DCCC we propose that the future of EDCTP should include the following elements:

1) Stronger investment in research on the three major diseases HIV, TB and malaria
2) Stronger African commitment and leadership
3) Consideration of putting European member state funding into a common pot, which is currently not the case
4) Inclusion of basic sciences (discovery research) and phase I and IV studies in the mandate of EDCTP, consolidated in regional networks of excellence
5) Expansion of disease coverage beyond the current three poverty-related diseases to other neglected ones
6) Support for standardisation of methods for evaluating the effectiveness of clinical interventions, e.g. biomarkers and correlates of protection, cure, etc.
7) Funding drug discovery from traditional medicinal plants
8) Continued support for regulatory and ethics activities
9) Negotiations with pharmaceutical companies for third-party funding of product research

Some of these points are also contained in the recent EDCTP independent external evaluation. Despite facing enormous challenges in its first few years of existence, EDCTP has succeeded in securing the strong involvement of African scientists and, through its new initiatives, such as unconditional funding to regional networks of excellence in sub-Saharan Africa, is envisaged to lead to a sustainable programme. DCCC proposes that this success story of true partnership should be used as a model by partners involved in the fight against other infectious diseases of public health importance in the region.

NT and MM work in the African office of the EDCTP.
The other authors declare that they have no competing interests. DCCC members wrote the paper, edited the first draft and provided the necessary background documents. The pre-publication history for this paper can be accessed here:

The high disease burden of Africa, the emergence of new diseases and efforts to address the 10/90 gap have led to an unprecedented increase in health research activities in Africa. Consequently, there is an increase in the volume and complexity of protocols that ethics review committees in Africa have to review. With a grant from the Bill and Melinda Gates Foundation, the African Malaria Network Trust (AMANET) undertook a survey of 31 ethics review committees (ERCs) across sub-Saharan Africa as an initial step towards a comprehensive capacity-strengthening programme. The number of members per committee ranged from 3 to 21, with an average of 11. Members of 10 institutional committees were all from the institution where the committees were based, raising prima facie questions as to whether independence and objectivity could be guaranteed in the review work of such committees. The majority of the committees (92%) cited scientific design of clinical trials as the area needing the most attention in terms of training, followed by determination of risks and benefits and monitoring of research. The survey showed that 38% of the ERC members had not received any form of training. In the light of the increasing complexity and numbers of health research studies being conducted in Africa, this deficit requires immediate attention. The survey identified areas of weakness in the operations of ERCs in Africa. Consequently, AMANET is addressing the identified needs and weaknesses through a 4-year capacity-building project.
In the wake of such an increase in health research on mostly poverty-stricken and poorly educated populations, and given Africa's weak civic protection systems, it is imperative that attention be paid to the ethical review capacity of African health institutions. Review of research protocols before implementation is now regarded as one of the cornerstones of ethical research involving human participants, and some countries have made it a legal requirement. Recent concerted efforts to address the Grand Challenges in Global Health have further increased research activity on the continent.

The main purpose of reviewing research protocols is to ensure that the research meets internationally acceptable scientific and ethical standards. It would be unethical for poorly designed research involving human beings to be approved, since data generated from such research would not contribute to the improvement of disease prevention or management. A holistic approach to reviewing research is critical, since issues that relate to the ethical principles of autonomy, beneficence, non-maleficence and justice are equally important.
One approach that has been proposed looks at seven requirements that should be considered when reviewing protocols, namely: the value of the research in terms of its potential to improve health and/or knowledge; scientific validity in terms of experimental design; fair selection of participants in light of the scientific objectives of the research; a favourable risk:benefit ratio, with potential benefits outweighing potential risks; independent ethical review of the research before implementation; informed consent that emphasises voluntary participation; and respect for the participants recruited. Although these requirements can be assessed during the review process, implementation of approved research protocols in the field, especially in developing countries, is bound to encounter practical challenges attributable to socio-economic factors.8–19 Thus, the majority of countries in Africa are reported to now have at least some form of ethical review process in place.14

The South African Research Ethics Training Initiative (SARETI) (http://shsph.up.ac.za/sareti/sareti.htm), which is based at the universities of KwaZulu-Natal and Pretoria in South Africa, provides training in ethics to African researchers and ERC members through short-term fellowships and long-term educational programmes. Another programme based in South Africa is the International Research Ethics Network for Southern Africa (IRENSA) (http://www.irensa.org), based at the University of Cape Town, which runs short-term training programmes for mid-career African academics, scientists, clinicians and members of ERCs who generally cannot enrol for long-term, full-time programmes.
An additional organisation involved in providing educational programmes in Africa is Training and Resources in Research Ethics Evaluation (TRREE) for Africa (http://www.trree.org/site/en_home.phtml), which focuses on the development of research ethics educational programmes for e-learning and the provision of e-resources. In light of the relatively weak ethical review capacity in Africa, it is encouraging to note that a number of not-for-profit African organisations are involved in capacity-building programmes.

The African Malaria Network Trust (AMANET) (http://www.amanet-trust.org) is also a not-for-profit organisation; it was formed in 2002, succeeding the then African Malaria Vaccine Testing Network, founded in 1995 to promote malaria vaccine trials in Africa. Although the broad objective of AMANET is still the same as that of its predecessor, the roles and activities of AMANET have been expanded to include (1) trial site development for malaria vaccine trials, which entails infrastructural development and training of research personnel in various scientific fields, (2) strengthening of ethical review capacity in Africa and (3) sponsorship of malaria vaccine clinical trials.

A response rate of about 84% (31/37) was achieved, making this the most comprehensive survey of ERCs in Africa that the authors are aware of; the survey data were analysed by a statistician. A total of 12 institutional ERCs from nine African countries gave presentations at two AMANET health research ethics training workshops held in Dar es Salaam, Tanzania (May and August 2007), and a third workshop held in Addis Ababa, Ethiopia, in September 2007. Gaps and shortcomings of the ERCs were identified and possible solutions explored during interactive discussions that followed each presentation. The presentations covered Cameroon, Ethiopia, Ghana, Kenya, Mali, Malawi, Nigeria, Senegal, Tanzania, Gambia, Uganda and Zambia.
The ERCs that presented were among the 31 respondents interviewed by the surveyors. The countries covered in the survey include anglophone, francophone and lusophone countries, as shown in the accompanying figure.

As shown in the accompanying table, the top five training needs cited by the respondents were scientific design of clinical trials, risk assessment of clinical trials, understanding of trial phases, monitoring of approved studies, and handling of issues surrounding post-trial access to benefits. Overall, 38% of the members had not undergone any form of training in health research ethics. Membership of 10 committees consisted entirely of staff employed at the institution, while the rest had varying involvement of members from “outside” the parent institution, such as community members, local universities, religious organisations, non-governmental organisations, civic organisations and professional associations. A large proportion (77%) of the surveyed committees relied on funds received from the institutions where they were based in 2005 and 2006.

This survey covered 31 ERCs of 37 targeted institutions; 12 of the 31 committees also presented at workshops organised by AMANET. This is, to our knowledge, the first survey ever to cover so many countries and institutions; previous surveys in Africa have been on a smaller scale and arguably less comprehensive. Furthermore, this survey included anglophone (eight institutions), francophone (22 institutions) and lusophone (one institution) respondents. Although this survey received replies from most African regions, replies from central Africa were rare. The relatively high response rate could be attributed to the personal visits by the surveyors. Encouragingly, only two of the 37 institutions lacked ethics review committees and did not complete the questionnaire.
This is a great improvement from previous times. The World Health Organization publication Operational guidelines for ethics committees that review biomedical research (2000) states:

Countries, institutions, and communities should strive to develop ECs and ethical review systems that ensure the broadest possible coverage of protection for potential research participants and contribute to the highest attainable quality in the science and ethics of biomedical research. States should promote, as appropriate, the establishment of ECs at the national, institutional, and local levels that are independent, multi-disciplinary, multi-sectorial, and pluralistic in nature. ECs require administrative and financial support (p2).21

The survey shows that most institutions across sub-Saharan Africa have established ethics committees. However, in order to review protocols effectively, ERCs should be composed of members of diverse backgrounds; many of the ethics committees surveyed are not yet sufficiently multidisciplinary or multi-sectoral. There are also weaknesses relating to gender and age. The UNAIDS (Joint United Nations Programme on HIV/AIDS) guidelines stipulate that an ERC should have a minimum of five members; no upper limit is set by the guidelines. The current study revealed that membership is still problematic for some ERCs in sub-Saharan Africa, with some having as few as three members and others 19 or more. The major reasons cited for the wide variation in membership include the unwillingness of potential members to participate in the committees over and above their normal duties and the lack of compensation for the costs incurred in attending ERC meetings. These issues need to be addressed if ERCs are to function properly.

The independence of the committees from their institutions is influenced by a number of factors. First, a committee made up of members from the institution that hosts it, without external members, faces a high risk of bias in its work.
Second, reliance on the parent institution for financial support also compromises the independence of the ERC. It is therefore imperative that the ultimate goal should be to enable ERCs to generate adequate operational funds, in order to reduce financial reliance on host institutions and to attract members from outside the parent institution. This is all the more important in sub-Saharan Africa, given the limited financial and skilled human resources available and the very poor remuneration of personnel. However, the cost of running an ERC needs to be determined if cost-effective fees are to be charged to ensure self-sustainability. In developed countries such as the USA, efforts have been made to determine such running costs.

Training of members before or upon joining an ERC would help to orient them in terms of the standard operating procedures in place and the ethical review procedures of the particular committee. While the volume of trials being conducted in Africa is increasing, 92% of the surveyed committees reported that they are inadequately trained to properly review and monitor trials. Since it may not be feasible for committee members to take long leaves of absence to undergo long-term training away from their workplaces, workshops and web-based courses in health research ethics could go a long way towards meeting the training needs of the committees. Despite the increasing popularity of e-learning, only 4% (14/345) of the members surveyed have benefited from these opportunities, and this percentage may even decrease as committees become more independent, multi-sectoral and pluralistic. Reliance on traditional pedagogical methods, with all their drawbacks, may remain the only option, particularly in the least-developed countries.

The survey revealed the training needs of ethics committees in sub-Saharan Africa.
A closer examination of the responses is guiding the development of ongoing training for members of ethics committees and will be invaluable in the development of upcoming training for investigators. A study conducted in the USA showed variable decisions by different ERCs that reviewed the same protocol for a multi-centre genetic epidemiological study.

The survey also highlighted the need for clear roles and responsibilities of national ERCs in relation to institutional ERCs in countries where both national and institutional committees exist. The roles of the committees should be complementary rather than duplicative; it should be clear both to the committees themselves and to potential applicants what type of health research protocols should be reviewed by each committee. Such clarity would go a long way towards minimising potential antagonism between the national and institutional ERCs of the same country.

This paper provides public information on the status of ethics committees that stakeholders in biomedical research in Africa should find useful. The major constraints identified are a shortage of resources and inadequate training of ERC members. Sponsors of clinical trials in Africa will also find this a useful inventory when considering the compliance of trial sites with international recommendations, and it is hoped that the ethical review process and oversight of research will always be taken into account at the design stage of research, so that these activities are included in the budget and the project time frame.

The gaps identified through this survey should be addressed through dedicated capacity-strengthening that provides specifically identified, tailor-made support to ensure improvement, rather than conducting such surveys merely for academic purposes.
A careful post-intervention survey using the same evaluation tools would be important to gauge the effectiveness of the interventions implemented, and the results should be widely disseminated for the benefit of the members of the scientific community who are involved in health research. Fostering collaborative efforts with other organisations involved in capacity-building of the ethical review process in Africa could go a long way towards minimising the risk of duplication of activities, which would waste resources.

Sharing of tissue samples for research and disease surveillance purposes has become increasingly important. While it is clear that this is an area of intense international controversy, there is an absence of data about what researchers themselves and those involved in the transfer of samples think about these issues, particularly in developing countries. A survey was carried out in a number of Asian countries and in Egypt to explore what researchers and others involved in research, storage and transfer of human tissue samples thought about some of the issues related to sharing of such samples. The results demonstrated broad agreement with the positions taken by developing countries in the current debate, favoring quite severe restrictions on the use of samples by developed countries. It is recommended that an international agreement be developed on what conditions should be attached to any sharing of human tissue samples across borders.

Sharing of tissue samples for research and disease surveillance purposes has become increasingly important. The Global Influenza Surveillance Network coordinated by the World Health Organization (WHO) is one such example.
In 2007, however, after Indonesia refused to share its H5N1 samples without a legally binding agreement concerning benefit arrangements and appropriate attention to Intellectual Property (IP) rights (patent) issues within the network, WHO initiated a discussion regarding a Pandemic Influenza Preparedness Framework (PIP Framework) to address these concerns.

The Convention on Biological Diversity, which came into force in 1993, contains a section on the right of access to genetic resources and the benefits from their use (Article 15). The Convention establishes a sovereign right of nations to the genetic resources within their territories and fair and equitable sharing of the benefits arising from research and commercial use of such resources. Developing countries have referred to this Convention in support of their demand for legally binding agreements regarding transfer of samples, but developed countries have maintained that the Convention is not applicable to the case of influenza viruses. The case is complicated because it is recognized that the Convention on Biological Diversity does not apply to human genetic resources, and the status of flu viruses contained in human tissue is unclear. Currently there is an attempt to develop an International Regime on Access and Benefit Sharing, with a draft text expected from a working group sometime in 2010. The issue of benefit sharing in the context of the Convention on Biological Diversity has also received quite a bit of attention in the bioethics literature.

Sharing of tissue samples among research groups also raises the issue of deciding what research to do on sample collections and who should be authors on papers from such research. Since only a finite number of research projects can be carried out on any given collection of samples, there has to be an agreed-on policy with regard to how one should decide what research to approve.
Although a number of tissue banks have adopted decision-making procedures, there is little guidance and much uncertainty about what substantive criteria should be used to make such decisions. While it is clear that this is an area of intense international controversy, there is an absence of data about what researchers themselves and those involved in the transfer of samples think about these issues, particularly in developing countries. In order to begin to explore these issues we carried out a survey of: 1) researchers who have been or are conducting research on human biological samples; 2) collectors who have been or are collecting human biological samples; 3) ethics committee members who are currently sitting as research ethics review board members; and 4) policy-makers who have been involved in setting an institution's policy with regard to research on stored tissue samples. Local PIs in each country determined the way of enrolling research participants; therefore the participants, other than the Japanese participants (who were enrolled through cluster randomization), were a sample of convenience.

For details regarding the questionnaire development, see the publication of results from the first part of the survey on issues related to informed consent. The questionnaire addressed the following areas:

1. Decisions regarding location of samples
2. Decision making procedures for choice of research on samples
3. Issues related to authorship of publications
4. Issues related to intellectual property rights

Most questions were in the form of a binary choice or a five-point Likert scale ranging from 1 (strongly disagree) to 5 (strongly agree). The survey was conducted between 2005 and 2008. The term "local scientist" refers to scientists who live and oversee research and collection in the country where samples are taken.
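The agreement percentages reported in the Results sections that follow are typically obtained by collapsing such five-point Likert ratings into a binary "agree" category (ratings of 4 or 5). A minimal illustrative sketch of that step (not the authors' analysis code; the function name and ratings are hypothetical):

```python
# Illustrative only: collapse five-point Likert ratings (1-5) into the
# binary "agree" percentages reported in survey results.

def percent_agree(responses):
    """Share of responses rated 4 (agree) or 5 (strongly agree)."""
    agree = sum(1 for r in responses if r >= 4)
    return 100.0 * agree / len(responses)

sample = [5, 4, 2, 3, 4, 1, 5, 4]       # hypothetical ratings
print(round(percent_agree(sample), 1))  # prints 62.5 (5 of 8 ratings >= 4)
```

Binary-choice items can be summarised the same way by treating the favourable option as an "agree" rating.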
The term "foreign collaborating scientist" refers to scientists from other countries.

The study was formally exempted from ethics approval at the US National Institutes of Health by the Office of Human Subjects Research (No. 3074). Each collaborating local PI obtained ethics approval from a research ethics committee of his or her own institution.

The total number of valid responses obtained was 154 in China, 186 in Egypt, 127 in India, 864 in Japan, and 105 in Korea. The response rate for Japan, where the questionnaires were sent out to potential participants at randomly selected institutions, was approximately 33%. For the other countries, where the potential participants were a sample of convenience, no detailed data about response rates were available.

Compared with the other countries, the respondents in India and Japan were relatively older. Most of the respondents, except the Chinese, had doctoral degrees. Among all of the respondents there were 341 EC members, 23.7% of the sample. About a quarter to a half of the respondents in each country reported that they were currently involved in policy-making processes concerning research. Other than the Japanese respondents, a majority reported that they were conducting research on stored human biological samples (from 57.5% in India to 89.6% in China) and collecting them for future use in research (from 56.7% in India to 67.2% in Egypt); among the Japanese respondents, only 35.1% and 29.1%, respectively, reported doing so. For additional details regarding demographics, see the companion publication.

Most of our respondents had not been involved at all in the use of a Material Transfer Agreement (MTA) for the transfer of biological samples. In China and India, most of those who had been involved in MTAs had been involved in the development of the MTA itself or the transfer of samples.

The respondents were asked how intellectual property rights related to research on the samples should be handled.
There was general agreement that royalties should be shared with the local scientists, ranging from a high of 87.1% in Egypt to 49.8% in Japan. Smaller percentages in all countries agreed that royalties should be shared with the local population, ranging from 78.7% in India to 35% in Japan. There was also general agreement that the population from which the samples were taken should be given access to products, such as a vaccine or new drug, that arise from research on the samples, ranging from 47.2% in Japan to a high of 89.2% in Egypt.

Opinions overall were almost evenly divided on the question of where collected samples should be stored, with 32% agreeing with the statement that samples should always be kept in the country where they were collected and 40% disagreeing with the statement. There were, however, marked differences between the countries, with only 10.2% in Egypt disagreeing with this statement compared with 47.2% in Japan and 48.8% in Korea.

We asked two questions about specific conditions under which it might be reasonable to move samples out of a country. Sometimes appropriate facilities are not available in the country of origin to do important research. The question, however, was phrased in such a way that we asked the respondents what they felt about this being the only condition for transfer of samples out of the country. In all countries except Korea and Japan, a high number of respondents agreed that this should be the only condition.

It has been proposed that a portion of the sample could be left behind when it is necessary to do an analysis outside the country. We asked respondents about their attitude towards this policy proposal. Again, the acceptability of such a proposal was higher in the developing countries than among the respondents in Japan and Korea.

We next asked about opinions regarding decision-making authority over the stored samples.
When samples are stored for future research, decisions have to be made about what research should be done on such samples in the future. We asked respondents to consider various alternatives, giving different levels of control over the samples to local scientists. The weakest involvement of local scientists would require a consultation with them before any research is done. There was general agreement among respondents in all countries that this should be required, varying from a high of 92.9% in India to a low of 67.7% in Japan. Next we asked whether local scientists should have some decision-making power over the use of the samples. A smaller percentage of respondents in all countries agreed with this statement, varying from 90.9% in Egypt to 56.1% in Japan. We asked specifically whether there should be a decision-making committee comprising representatives from the sending and the recipient countries. A high percentage from all countries except Japan agreed with this statement. The strongest control over the use of the samples by local scientists would be for them to have veto power over any use of such samples. Here there was much more divergence of opinion: only Egypt still had a high percentage of respondents agreeing with this position (83.8%), whereas 69.5% in Korea, 69.2% in India, 63.7% in China and 47.5% in Japan agreed with this proposal. Finally, we asked whether local scientists should always be included on any future protocol team. Here again, there was wide agreement in all countries, but with lower percentages agreeing in Japan (52.9%) and Korea (66.2%) than in the other countries, the highest again being Egypt (87.1%). We asked how collaborating scientists should handle the issue of authorship. Specifically, we asked how Material Transfer Agreements should handle this issue.
Finally, we asked whether the respondents thought that MTAs should require that local scientists be given the opportunity to provide sufficient intellectual input so that it would be justified to credit them with authorship (Table). There was strong agreement with such a requirement, ranging from 90.9% in Egypt to 51.5% in Japan. There are some differences in the answers to these questions with regard to experience with MTAs. For example, among those who had been involved in the use of MTAs as a receiver of samples, only 33.3% agreed that local scientists should be authors on all papers arising from the samples, whereas agreement was 49.3% among those who had been involved in the transfer of samples. Among those who had been involved in the development of MTAs, agreement was intermediate at 36.2%. We asked questions about respondent attitudes towards binding regulations regarding the rights of local scientists. Again, there was general agreement that binding regulations should be in place to ensure that the rights of local scientists are protected, ranging from 50.5% in Japan to 95.1% in Egypt (76.6% in China, 81% in Korea, and 89% in India). We asked who should maintain these regulations to protect local scientists. Here there was wide divergence between countries. Around half of respondents in all countries, except India where only 20% agreed, thought that the World Health Organization should do so. In most countries, except Japan, respondents thought that either the local government or the local institution should do so. In Egypt, most favored the local institution rather than the local government. Finally, we asked about respondents' perceptions of pressure to accept unfavorable conditions when negotiating MTAs.
For all countries except Korea, few respondents agreed that local scientists are under pressure to accept unfavorable conditions for the transfer of samples, ranging from a low of 8.4% in China to a high of 62.8% in Korea, with Japan, Egypt and India ranging from 21.6% to 42%. The choice of countries for this survey was not motivated by a desire to explain the controversy over access to a pandemic flu vaccine. Nevertheless, it is interesting to note how the responses in our survey map onto the positions taken by representative countries in the current controversy over access to pandemic flu vaccines. Our study demonstrates broad agreement with the developing-country position in the current controversy over SMTAs within the PIP framework. The respondents would want IP rights to be shared with researchers or the source country, and favor access to products resulting from research on the samples. This is, not surprisingly, most evident among developing-country researchers, where as many as 80% are in favor of these positions. But the support is also surprisingly high in Japan, a representative of a developed country, where 35% think that royalties should be shared with the population of the source country, and 47% of our Japanese respondents believe that MTAs should require that the source country be given access to material products such as pharmaceuticals. If our data are representative of the positions taken by researchers and ethics review committee members in these countries, there is no broad agreement with the position taken by developed countries in the ongoing debate within WHO. Developed countries, primarily represented by the EU and the US, have consistently taken the position during the debate within WHO that SMTAs should contain neither legally binding benefit arrangements nor restrictions on IP rights. At most, there can be reference to guidelines that suggest appropriate benefits to source countries.
Although IP issues and access to material benefits have been the focus of discussion within the PIP framework during World Health Assembly (WHA) meetings over the past couple of years, developing countries have also voiced other concerns in the debate, although these have not been discussed as extensively. For example, according to WHA resolution 60.28 in 2007, SMTAs should be based on the principles of \"increased involvement, participation, and recognition of contribution of scientists from originating country in research related to viruses and specimens and attribution of the work and increased co-authorship of scientists from originating countries in scientific publications\". Our data again support the positions taken by these countries (Table). Interestingly, our respondents also favor legally binding regulations for the transfer of samples to protect the rights of local scientists. Representatives from developing countries have insisted throughout the discussion of the PIP framework that SMTAs should include legally binding provisions for benefit arrangements as well as restrictions on IP rights. In contrast, developed countries, and to a certain extent the WHO secretariat, have insisted that benefit arrangements and IP rights should only be referred to in guidelines. This basic disagreement has to a certain extent paralyzed the negotiations, with each side insisting on maintaining its position. Our data demonstrate widespread sympathy for the developing-country position among our respondents. The debate over SMTAs in the context of Pandemic Influenza Preparedness and the results from our survey raise the question of how one should move the agenda forward and deal with the impasse reached in the negotiations. Two points seem especially important. On the one hand, some of the suggestions from developing countries and our respondents for specific provisions in an SMTA seem difficult to defend.
For example, it does not seem justifiable to demand that source countries or local scientists should have veto rights over any publications resulting from use of stored tissue samples. At least sometimes, this could be analogous to a sponsor, such as a pharmaceutical company, requiring collaborating scientists to sign agreements under which they can only publish with the consent of the sponsor, leading to the justifiable criticism that the sponsor could suppress results unfavorable to it. Similarly, what restrictions one should place on IP rights seems to a large extent to be a matter of what mechanism is best suited to stimulate innovation in products that will have major health benefits. Although there will be disagreements about specifics, it should be possible to have a discussion of the merits of various proposals. On the other hand, it does not seem prudent for developed countries to insist that substantive provisions for benefits should be kept out of SMTAs. Developing countries have continued to insist on their inclusion, but their position has been rejected by developed countries and the secretariat. The WHO secretariat should probably recognize the widespread support for the position taken by developing countries, which is also evident from the data in our survey. Previous surveys in Europe have documented considerable worries about commercialization of research on stored samples, both among those involved in biobanks and among the wider public. This study has several limitations. First, we assessed the choices of survey respondents, most of whom were a sample of convenience. As a result, our findings may be biased toward particular groups and may not be generalizable to other populations or other countries. Second, the small sample of developed and developing countries surveyed may not be generalizable to developed and developing countries as a whole.
Finally, since we did not probe for the reasons behind respondents' answers, it is unclear whether the respondents had motivations besides those mentioned in the discussion for answering as they did. In conclusion, this study demonstrates that there is substantial agreement amongst all respondents in favor of some rights for local scientists and of sharing in the benefits of research. As seen in the Indonesian case and elsewhere, answers for how to arrive at an agreement on the elements of MTAs are urgently needed. Our data also show that there is wide variation in attitudes on this subject between countries and professional groups. This points to a need to explore the sources of disagreement and to develop a coherent framework for understanding benefit sharing and the elements of MTAs. When moving forward it may also be important not to focus exclusively on the most difficult parts, namely guaranteed access to products developed using provided tissue samples or issues of IP rights. As the discussion within WHO and the responses to our survey show, there are other contentious issues as well: who decides, and on what criteria, how the samples should be used, and who should receive credit on publications arising out of the research. Specific proposals have been put forward by a variety of developing countries, but have not been taken up in the discussion. Interestingly, these are also issues that remain unresolved for tissue banks established in developed countries. A recent report commissioned by the UK Medical Research Council and the Wellcome Trust recommended that a standardized access policy for sample collections be developed. The authors declare that they have no competing interests. All authors were involved in the design of the questionnaire. The authors in the countries in which the questionnaire was administered were responsible for translation of the questionnaire into the local language, administration of the questionnaire, and data entry.
All authors were involved in the writing of the paper and the analysis of the data. All authors have read and approved the final manuscript. The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1472-6939/11/16/prepub. Survey instrument: this file contains the survey instrument."} +{"text": "The organ donor shortfall in the UK has prompted calls to introduce legislation to allow for presumed consent: if there is no explicit objection to donation of an organ, consent should be presumed. The current debate has not taken into account accepted meanings of presumption in law and science, or the consequences for rights of ownership that would arise should presumed consent become law. In addition, arguments revolve around the rights of the competent autonomous adult but do not always consider the more serious implications for children or the disabled. Any action or decision made on a presumption is accepted in law and science as one based on judgement of a provisional situation. It should therefore allow the possibility of reversing the action or decision. Presumed consent to organ donation will not permit such reversal. Placing prime importance on the functionality of body organs and their capacity to sustain life, rather than on the explicit consent of the individual, will lead to further debate about rights of ownership and potentially to questions about financial incentives and to whom benefits should accrue. Factors that influence donor rates are not fully understood, and attitudes of the public to presumed consent require further investigation. Presuming consent will also necessitate considering how such a measure would be applied in situations involving children and mentally incompetent adults. The presumption of consent to organ donation cannot be understood in the same way as presumption when applied to science or law.
Consideration should be given to the consequences of presuming consent and to the questions of ownership and organ monetary value, as these questions are likely to arise should presumed consent be permitted. In addition, the implications of presumed consent for children and adults who are unable to object to organ donation require serious contemplation if these most vulnerable members of society are to be protected. The controversy about presumed consent has recently been revived in the UK as a consequence of the organ donor shortfall. The understanding of presumption of consent to organ donation may be considered, by some practitioners of law or science, to be an inaccurate and misleading term. This stems from the general understanding of 'presumption' in law and science: an inference made on available fact or evidence, with the understanding that vital information that could render the inference invalid may be missing. In law a presumption holds \u2013 that of innocence, for example \u2013 until a substantial body of evidence is produced to the contrary. Just like a scientific theory or hypothesis, a legal presumption is maintained for as long as no evidence is provided to disprove it or no valid objection is raised against it. A presumption, in law and science, is therefore a 'provisional estimate of facts' based on available evidence. Unlike the presumptions in law or the hypotheses of science, presumption of consent for the use of body organs cannot afford any possibility of abandoning the presumption, reversing the decision or retracting any action based on the decision.
The presumption of consent for organ donation cannot therefore be taken as a presumption of donor willingness, with the specific understanding that there will be a provision for changing the course of action should further evidence emerge, but rather as a presumption of state rights to post-mortem body organs unless an objection by the 'occupant' of the body is raised whilst the 'occupant' is still in 'residence'. Opponents of presumed consent argue that the absence of donor willingness is morally unacceptable because it can be seen as a violation of the donor's wishes. It also raises further issues about the right of ownership, and hence who should benefit from body organs, and about how presumed consent will extend to competent minors and mentally incompetent adults. The debate about presumed consent and the importance of optimising the functionality of body organs can extend to ownership and the right to sell these organs. The age at which autonomy is granted varies in the UK depending on whether it is with regard to consent to medical treatment or consent to participation in research. In the former case, children are given the status of autonomous adults at the age of sixteen; in the latter, the age of consent is eighteen. This distinction has not been properly qualified in the UK and can lead to bizarre situations. Decisions on behalf of the mentally incompetent need to be made in cases of medical treatment, and in such situations common law in the UK sets the principle that such decisions should always be made in the best interests of the patient. Introduction of presumed consent into law can take a number of forms, and variations in the application of presumed consent have been described. According to one mathematical model, organ availability is likely to be higher when presumed consent measures are introduced, even when other confounding factors are taken into account, but ambiguities in the model are acknowledged.
Ultimately, whether consent is express or presumed, the donor and/or relatives should be fully informed before any consent is obtained. Special attention needs to be given to those for whom others make decisions concerning their health and welfare. The easiest option would be to apply presumed consent only to specified groups (e.g. autonomous adults) and to disallow any vulnerable individuals from being considered members of these groups. This would ensure that no situation of abuse of the vulnerable, whether intentional or unintentional, occurs. There is no certainty, however, that such a measure would not invoke cries of discrimination from those who may misinterpret it as an indication that the body organs of such individuals are of lesser value. It may also create difficulties and lead to complicated legal arguments, with cases brought before the courts should a family situation arise similar to that of the case of Y. The issues for vulnerable individuals require further analysis and, prior to the introduction of any legislation that allows for presumed consent, there needs to be a thorough re-examination of current laws that treat children and mentally incompetent adults as non-autonomous. With regard to children, reviews of the age of consent and the notion of Gillick competency, with better guidelines for practitioners, are needed. When deciding on mental impairment or incapacity, there should be an assessment of its scope, its nature, its duration and its durability. To presume consent may or may not alleviate the shortage of donor organs, but it will most certainly raise a number of related ethical and legal complexities that will need to be addressed in order to safeguard against unacceptable practices. Fundamentally, what is meant by presumption, and how this is applied to the concept of presumed consent to organ donation, will have to be determined.
It will need to extend beyond the boundaries of current legal and scientific notions and will involve legal and ethical arguments about the sale of body organs, rights of ownership, donation and bequeathing to beneficiaries. More investigation of attitudes towards presumed consent, and of why and how these may vary, is required. Consideration must be given to objections on religious or cultural grounds and to whether and how prospective legislation could impinge on beliefs and practices. As always, protection of the most vulnerable, who are unable to make autonomous decisions, is of paramount importance in any actions taken. Current laws and practices relating to consent and the decision-making capacities of children and vulnerable adults need to be reviewed before introducing legislation that permits the presumption of consent. Consent for body organs cannot be presumed in the same way that presumption is used in law or science. Presuming consent for organ donation places the value of body organ function above the requirement for permission from the donor and raises a number of related ethical and legal questions about the ownership and sale of body organs and the rights of refusal of children and mentally incompetent adults. The factors that influence donation rates and attitudes to presumed consent have not been clearly identified and require further investigation. Measures for the protection of the most vulnerable need to be addressed before legislation to allow for presumed consent can be permitted. The author declares that she has no competing interests. The pre-publication history for this paper can be accessed here:"} +{"text": "Medical research to improve health care faces a major problem in the relatively limited availability of adequately annotated and collected biospecimens. This limitation is creating a growing gap between the pace of scientific advances and successful exploitation of this knowledge.
Biobanks are an important conduit for transfer of biospecimens and related health data to research. They have evolved outside of the historical source of tissue biospecimens, clinical pathology archives. Research biobanks have developed advanced standards, protocols, databases, and mechanisms to interface with researchers seeking biospecimens. However, biobanks are often limited in their capacity and ability to ensure quality in the face of increasing demand. Our strategy to enhance both capacity and quality in research biobanking is to create a new framework that repatriates the activity of biospecimen accrual for biobanks to clinical pathology. The British Columbia (BC) BioLibrary is a framework to maximize the accrual of high-quality, annotated biospecimens into biobanks. The BC BioLibrary design primarily encompasses: 1) specialized biospecimen collection units embedded within clinical pathology and linked to a biospecimen distribution system that serves biobanks; 2) a systematic process to connect potential donors with biobanks, and to connect biobanks with consented biospecimens; and 3) interdisciplinary governance and oversight informed by public opinion. The BC BioLibrary has been embraced by biobanking leaders and translational researchers throughout BC, across multiple health authorities, institutions, and disciplines. An initial pilot network of three Biospecimen Collection Units has been successfully established. In addition, two public deliberation events have been held to obtain input from the public on the BioLibrary and on issues including consent, collection of biospecimens and governance. The BC BioLibrary framework addresses common issues for clinical pathology, biobanking, and translational research across multiple institutions and clinical and research domains.
We anticipate that our framework will lead to enhanced biospecimen accrual capacity and quality, reduced competition between biobanks, and a transparent process for donors that enhances public trust in biobanking. In the past decade, unprecedented progress has been made in health research towards realizing the goal of personalized medicine guided by biomarkers: the ability to match the right preventive measure or treatment with the right patient, at the right time. Key to this progress have been the various '-omics' platforms, as well as bioinformatics, molecular imaging, drug discovery, and the development of animal models of human disease. Biobanks are central to the process of collecting human biospecimens for translational research and have contributed to numerous advancements in our understanding and treatment of disease. Biobanks range in design and user, from those whose primary focus is to support clinical health care to those that have evolved primarily to support research. Research biobanks exist in many formats, from population biobanks to disease-focused biobanks. The latter include informal biobanks associated with small and large research studies, basic research disease-affiliated banks, and clinical trial biobanks. An escalating demand for biospecimens is transforming biobanking from an immature 'cottage industry' conducted by individuals into a complex institutional activity. Despite the advances of biobanking described above, significant issues and limitations remain that restrict the impact of translational research. The major issues include the need to increase the quality and standardization of biospecimens collected, to enhance accrual capacity in terms of scale and disease representation, and, above all, to maintain public trust in these activities.
Underlying these issues is the need to ensure the sustainability of biobanks and to provide mechanisms for equitable and appropriate access to biospecimens. Quality issues relate to the complications inherent in imposing complex research collection protocols on the routine workflow of distinct clinical organizations. These issues also relate to the difficulty of striking the right balance and appropriate division of biospecimens between clinical and research requirements ('tissue ethics'). In particular, this division makes it difficult to ensure that representative components of the biospecimens exist in both collections. One example of this difficulty is the low frequency with which pre-cancer lesions are captured in research biobanks. Variations between biobanks also influence quality. Even with recent advancements in the way biobanking is conducted, the impact of pre-analytical biospecimen variables, such as collection time, is not fully understood. Capacity issues relate to both geographical and temporal gaps in the biobanking process. The geographic gap occurs because research biobanks have typically developed in health centres with an active research focus, not necessarily those with the highest volume or diversity of surgical and pathology services. Temporal capacity gaps arise because treatment occurs independently of opportunities to engage patients in research. Most biospecimens arise in the course of clinical treatment, which occurs at a single location and is often completed before the relevance of the biospecimen to research becomes apparent, diminishing the opportunity to harvest biospecimens using specialized research protocols. One example is the patient who chooses to enroll in a clinical cancer therapy trial and has a formalin-fixed paraffin-embedded (FFPE) block created for the clinical archive.
The retrieval of the FFPE archival block for a future biomarker assay is often a significant logistic barrier because the block has been consigned to the clinical archive several weeks before the patient chooses to become involved in research. Studies requiring a frozen biospecimen are often impossible because retaining a frozen biospecimen is frequently not part of the standard clinical protocol. Framework issues include inconsistent ethical frameworks, privacy protection efforts and different \"business models\" between biobanks. Sustainability issues stem from the nature of funding and the limited scale of, and non-systematic resources dedicated to, biobanking. Access issues around biospecimens and their use are seen differently from the perspectives of donors, biobanks, and research users. For donors, access often means having the opportunity to contribute their biospecimen and health data to drive research that can address their specific disease. For biobanks, it means access to potential donors to seek their consent to accrue biospecimens. For research users, it means finding and obtaining the right biospecimens within biobanks and navigating regulatory and oversight processes. Both donors and biobanks face the geographical restrictions noted above, wherein the opportunity to connect and to donate is unavailable due to the lack of a formal biobank at the potential donor's health treatment centre. A final issue that contributes to this barrier is the currently pervasive pre-operative approach/consent paradigm, which limits the opportunities for patients to donate to biobanks. One solution to address the issues of standardization of quality and capacity is to create networks of biobanks.
This idea has stimulated initiatives and networks at regional and national levels, including the Canadian Tumour Repository Network and CaBIG, and, in BC, efforts to create transformative health research infrastructure to enhance the national and international competitiveness of BC's health research community. A library is defined as a collection of materials organized to provide physical, bibliographic and intellectual access to a target group, with a staff that is trained to provide services and programs related to the information needs of the target group. Thus, a 'biolibrary' is defined as a collection framework that provides all forms of biobanks and their users with access to human biospecimens. A biolibrary differs from a biobank in that its primary focus is limited to the acquisition, cataloguing, and distribution of biospecimens to biobanks (Figure). The BC BioLibrary is a framework which consists of three main components: 1) 'Biospecimen Collection Units', established within clinical pathology departments; 2) patient/donor and biobank/user connections and engagement through hospital referral processes and web-based consent and inventory catalogues; and 3) public deliberation to guide its governance. The framework also includes several planned support components, including a 'Biospecimen Distribution Unit'. The complete framework as envisaged is described below, followed by the current development status. The Biospecimen Collection Units (BCUs) embedded within pathology departments comprise trained biospecimen acquisition personnel (BCU Coordinators) supervised by the appropriate clinical leader within each pathology department. Training provided by the BC BioLibrary and its collection of standard operating procedures extends the skills of pathologists' assistants and technologists with further knowledge surrounding biobanking, research requirements, protocols, ethics and privacy issues.
The BCU facilitates the triage of biospecimens into multiple formats, including formalin-fixed paraffin-embedded tissue blocks and flash-frozen or OCT-frozen material. Collected biospecimens are held in short-term storage and catalogued by logging a unique BC BioLibrary identification number into the relevant clinical pathology record. Elements of this record are extracted into the BCU inventory database. The consent process relating to biospecimen use for research has traditionally involved three distinct steps - permission to contact, the preliminary interview to ascertain interest and preferred medium for detailed discussion, and the informed consent discussion and agreement itself. The BC BioLibrary, acting as an 'honest broker', enables the key first step by instituting a process to obtain consent after the surgery or therapeutic procedure ('post-operative consent protocol'). The BCU enables pathologists to routinely harvest and hold portions of biospecimens for research, in parallel with the portions of biospecimens sampled and assessed for clinical diagnosis. Once diagnosis has been completed and any immediate diagnostic need for these portions has expired, the consent status and potential research destiny of these research biospecimens can be determined. The BCU facilitates the contact step by communicating with the responsible clinician once a potential biospecimen has been harvested, to ascertain whether the patient/potential donor will give permission for contact. If permission is granted, the BCU can forward the referral to the relevant, REB-approved biobank. The biobank can then deploy its own consent protocol or request this service from the BC BioLibrary consent office.
Following completion of the consent process, the biobank notifies the BCU Coordinator of the consent status for any biospecimens that have been collected. The status of the biospecimen with respect to the potential donor's specific research interests may already be known at the time of harvesting, through a pre-operative consent process. In this instance the BCU can distribute directly to a specific biobank. If consent has been withheld by the patient, the research biospecimen is not collected, or is destroyed once this patient decision is known. Alternatively, if the patient has not been approached pre-operatively by a biobank, the biospecimen can be collected and held by the BCU for a defined period under an approved post-operative consent protocol, before its ability to be used for research is determined. If, at the end of the defined period, the consent decision is unknown, the biospecimen and all related data are irreversibly anonymized (Figure). Another key component of the BC BioLibrary is the development of an improved linkage between biospecimens and biobanks via web-based catalogues of existing biospecimens and consents. The Biospecimen Inventory Catalogue component is designed to provide a list of all biospecimens in short-term storage across different BCUs. This component is still under development. It is envisaged that it will be a searchable database of existing biospecimens that are available for distribution from the BCUs or, alternatively, from biobanks in the community that have an established REB-approved process for request and distribution of their biospecimens. The information available in this database will be completely anonymized: the BC BioLibrary ID, donor's age at the time of biospecimen collection, donor's gender, type of biospecimen and disease classification, and its location and availability.
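The post-operative disposition rules described above (distribute if consented, destroy if refused, hold while consent is pending, anonymize once the defined holding period expires) can be sketched as a small decision function. This is only an illustrative sketch: the names (ConsentStatus, Biospecimen, decide_disposition) and the 90-day holding period are assumptions, not part of the BC BioLibrary's actual software or protocol.

```python
from dataclasses import dataclass
from datetime import date, timedelta
from enum import Enum


class ConsentStatus(Enum):
    GRANTED = "granted"    # donor consented to a specific biobank
    WITHHELD = "withheld"  # donor refused research use
    UNKNOWN = "unknown"    # no decision recorded yet


@dataclass
class Biospecimen:
    biolibrary_id: str     # unique BC BioLibrary identification number
    collected_on: date
    consent: ConsentStatus = ConsentStatus.UNKNOWN


def decide_disposition(spec: Biospecimen, today: date,
                       holding_period: timedelta = timedelta(days=90)) -> str:
    """Apply the post-operative consent protocol described in the text.

    The 90-day default is a placeholder; the actual 'defined period'
    would be set by the REB-approved protocol.
    """
    if spec.consent is ConsentStatus.WITHHELD:
        return "destroy"        # patient refused: do not use for research
    if spec.consent is ConsentStatus.GRANTED:
        return "distribute"     # forward to the consented biobank
    # Consent still unknown: hold until the defined period expires, then
    # irreversibly anonymize before any research use.
    if today - spec.collected_on >= holding_period:
        return "anonymize"
    return "hold"
```

Encoding the rules this way makes the order of precedence explicit: an explicit refusal always wins, and anonymization is reached only when no decision has arrived within the defined period.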
Data will be linked to a request form directed to the BC BioLibrary or to the biobank housing the biospecimen. The Consent Catalogue component will be designed to maintain lists which can be populated by each authenticated, disease-focused biobank seeking access to biospecimens that are collected by the BCUs and that are derived from donors enrolled into the biobank. Access to each list within the Consent Catalogue is restricted to the originating biobank. The Consent Catalogue will be programmed to establish a link between consented donors entered into these lists and their corresponding biospecimens collected in the BCUs. The mechanism for connecting donor consent with the associated biospecimens will be by periodic download of the Consent Catalogue as an encrypted file to each BCU computer workstation. Using an unsupervised query tool, the BCU inventory database will establish linkage between biospecimens at that BCU and consented donors within the Consent Catalogue. All matches will generate a flag in the BCU inventory database as well as a report to enable classification of the biospecimens collected to date by consent status. Based on this report the BCU Coordinator will then destroy, distribute, or anonymize and then distribute biospecimens to the appropriate biobank. Maintaining and improving public confidence is crucial to the social sustainability of biobanking. Public trust is associated with many topics: governance, clarity of mission and motivation, and transparency around issues of funding and use for academic and industry applications. The BC BioLibrary provides an attractive focus for input from the public on all topics due to its broad scope and direct focus on the primary intersection between patients and biospecimen accrual. The BC BioLibrary has been launched with an initial governance structure designed by biobanking experts and under the external oversight of ethics committees, privacy laws, and health research foundations. 
However, the intention is to actively seek public input into this structure and to evolve by integrating this input into the oversight of biospecimen collection. Public input is sought through a series of public consultation events, based on a consensus-building approach fostered by deliberative democracy. The focus of these events will evolve from discussion of general questions around biobanking to more specific discussions around the BC BioLibrary and biobanks and their associated governance models. Access to the BC BioLibrary requires scientific review (conducted by a BC BioLibrary user access committee) to determine the priority of each user application, and authentication, including documentation of research ethics approval to receive and work with the human biospecimens requested. Although still evolving as the BC BioLibrary expands from single-site pilot BCUs into a network, the BC BioLibrary user access committee is envisaged to comprise representatives from BCU sites and the BC BioLibrary management and executive teams. The committee conducts scientific peer review, scaled to the request and logged through formal applications, to assign priority for access to BCUs and to ensure feasibility, fairness and accountability. Single-site requests are approved at the local BCU level by the site director, site BCU Coordinator, and the BC BioLibrary manager. External and multi-site requests are handled by the full BC BioLibrary access review committee. All activities are reviewed by the BC BioLibrary Executive. The BC BioLibrary creates a forum to seek resolutions of competing requirements for biospecimens through peer review and draws from collective experience in managing access to biobanks. For those conflicts that persist, a balanced consideration through peer review can help to recognize local priorities while also balancing these with donor preferences and the scientific merit of different projects. 
Most conflicts can be resolved by shared access, division of the biospecimen, or staggered accrual periods or sites. Another important aspect of user access involves authentication of the users' scientific credentials and the ethical and privacy considerations. REB review and approval addresses these aspects and determines whether access is restricted to biospecimens associated with project-specific consent or can also include anonymized biospecimens. Each BCU currently transfers biospecimens directly to the user, but once more BCUs are established, a single portal for transfer and circulation of requested samples will be more efficient. Users may also choose to receive processed biospecimens and to utilize a range of services and advanced analytical platforms available through the Center for Translational and Advanced Genomics connected to the BC BioLibrary . Once diTo prepare for initial implementation of the BC BioLibrary plan, we began by delineating the functional components required. A communications plan was developed and a set of key messages derived to articulate components as they related to five overarching goals. The messages were defined as follows: 1) the BC BioLibrary is a facilitator, not a biobank; 2) the BC BioLibrary is intended to help all interested BC researchers and educators; 3) the BC BioLibrary helps pathologists streamline and improve biobanking activities; 4) the BC BioLibrary enhances quality and accessibility of biospecimens; and 5) the BC BioLibrary contributes to the sustainability of biobanking in BC by developing and upholding the public's trust. We pursued this initial 'communication' effort in advance of the functional components to reduce the strong potential for misinterpretation, by the many established key stakeholders, of the objectives and motivation underlying a new plan around biospecimen procurement. 
The ongoing need to correct the persistent assumption that biobanking can continue as a 'cottage industry' and the misconception that the BC BioLibrary exists to create a single 'BC biobank' underscores the value of this approach. Implementation began with the establishment of project teams in 2007 to focus on the three main components of the framework: standardization of biospecimen collection and processing; enhanced communication between donors and biobanks; and public engagement around biobanking. These teams are managed by an Executive Committee (9 members) and the Management team (3 members), with oversight provided by a Governance Oversight Committee (9 members). Through these teams and committees, the BC BioLibrary is driven by leaders in biobanking and translational research across British Columbia, spanning four major academic hospitals, three health authorities, multiple affiliated academic institutions, and five major institutional biobanks. The latter includes the BC Cancer Agency Tumor Tissue Repository (TTR) program and the Each element of the BC BioLibrary has been submitted for REB approval in a stepwise fashion. The first two elements involved establishing a website and a single, pilot BCU in one pathology department. The website served to communicate with stakeholders around all aspects of biobanking and the activities of the BC BioLibrary. Creation of the pilot BCU was essential to provide a working prototype around which we could engage with the REB and pathology stakeholders. To date, this first BCU has collected over 450 biospecimens in an 18-month period. Biospecimens collected include those harvested from donors who provided pre-operative consent to two local studies, as well as biospecimens collected under the post-operative consent pilot and not linked to an identified study. 
The pilot BCU has also been used to develop over 17 SOPs, which detail all aspects of biospecimen harvesting and data capture relevant to the BCU and the BCU inventory database, as well as a web-based training curriculum. The evolution from this single, pilot BCU into a functional accrual network has now begun with the recent establishment of two additional pilot BCUs at additional hospital sites and the graduation of the first pilot to a full BCU approved and capable of supporting multiple biobank users. The two additional web-based Catalogues will be deployed to complete the multi-site biospecimen acquisition capability of the BC BioLibrary. An important element addressed by the BC BioLibrary is the deployment of a system-wide post-operative consent protocol. The protocol establishes a maximum time span of 90 days from the time of surgery for holding a biospecimen in a BCU. This corresponds to the typical outer limit of the period required to complete the diagnosis. This duration optimally facilitates the necessary clinical process for all biospecimens by enabling portions of the biospecimen to be reclaimed and processed for clinical purposes if necessary to complete the diagnosis. The parallel processes for obtaining permission to contact, completing the consent decision, and assigning consent status to the biospecimen have also been delineated. The construction of additional components of the framework for centralized distribution has yet to begin. However, as part of this planning process, the BC BioLibrary conducted a survey in 2008 to gauge the need for frozen biospecimens by BC investigators. The results of this survey showed that over 80% of respondents (n = 55) indicated they were not currently satisfied with their ability to perform their research using biospecimens collected through their own institution. 
Of those, 98% believed they would benefit from access to biospecimens, with specific requirements for disease-specific (89%) and tissue-specific (77%) biospecimens, collected from more than one institution within the province. The full implementation of the BC BioLibrary BCUs would allow these needs to be met. In addition, a literature survey of over 3000 papers reported in cancer research journals at 5-year intervals from 1988 to 2008 shows that biospecimen use has increased 3-fold. The mean cohort size in research studies that utilized tissue biospecimens has increased from approximately 50 to 150 over this period. The final and key element addresses public trust. A public engagement process has been launched with the first two events held in 2007 and 2009. The design of these events, the methodology and the composition of the participant groups are described elsewhere . BrieflyBiobanking has historically focused on accrual and annotation of biospecimens, but equally critical is the creation of processes for engaging the public before accrual, distributing biospecimens, and cultivating inter-biobank collaborations. Further efforts towards fostering synergy between the public and biobanks and associated processes will enhance scientific and technological advancement and the translation of discovery to the clinic. The BC BioLibrary is a novel, province-wide strategy aimed at public engagement in biobanking, a common framework for biospecimen acquisition embedded in pathology departments, and integration of this framework with existing biobanks and a spectrum of research facilities. The design builds on evolutionary concepts, including the repatriation of biospecimen acquisition for biobanks back into pathology departments and shared governance of these processes. As defined above, a 'biolibrary' differs from a biobank. A biolibrary focuses on the complexities of connecting donors with biobanks and on acquisition, cataloguing, and distribution of biospecimens to biobanks. 
One comparable example of a biolibrary is the Cooperative Human Tissue Network (CHTN) . The proNeither model directly accommodates the consent status of the biospecimen. The CHTN was developed using the non-specific surgical consent as a basis for distribution of anonymized biospecimens with time-of-diagnosis annotation. Both the CHTN and SPIN lack components to effect public engagement. The BC BioLibrary builds on these models to accommodate informed consent status of biospecimens and enable a prospective connection between a biospecimen, the donor's health record, and prospective clinical treatment and outcome data. But perhaps more importantly, the act of communication and the transaction which leads to the approval to collect and store a biospecimen linked to personal health data for research purposes is critical to the future of biobanking. An example of the acute effect on biobanking when public confidence is lost was referred to above . A substAlthough there is a growing body of evidence for the ethical acceptability of post-operative consent process , many biCurrent regulatory requirements for biobanking have been developed to protect the interests of the public. However, the implementation of regulations to address privacy issues that were developed without biobanking in mind has requThe BC BioLibrary framework is designed to maximize the opportunity and capability of injecting high quality, accurately annotated biospecimens into all forms of biobanks. This framework addresses geographical and temporal issues that currently limit the capacity and capability of biobanking. In the process, it provides improved opportunity for oversight of biospecimen usage, standardization of consent and collection processes, and equity in biospecimen distribution to biobanks. 
Perhaps most importantly, by creating a common shared infrastructure, this framework reduces competition between biobanks and offers a transparent process for donors to participate, thereby enhancing public trust and providing an opportunity for public involvement in designing optimal governance of biobanking. BC: British Columbia; BCU: Biospecimen Collection Unit; OCT: Optimal Cutting Temperature compound; REB: Research Ethics Board; SOP: Standard Operating Procedure; MSFHR: Michael Smith Foundation for Health Research. The authors declare that they have no competing interests. The authors' contributions to this manuscript are reflected in the order names are shown. PHW and JEM supervised all aspects of this study and contributed to the manuscript preparation. ROB and SCG participated in the manuscript preparation. All authors contributed to the conception of the ideas embodied here and to the development and implementation of this study. All authors read and approved the final manuscript."} +{"text": "Reporting of informed consent and ethical approval are important aspects of published papers which indicate the researchers' knowledge of and sensitivity to ethical aspects of research. This study reports the description of informed consent and ethical approval in published psychiatric research in the main journal of psychiatry in India. All original research articles (n=157) published in the Indian Journal of Psychiatry in the years 2000 and 2003 to 2007 were included. Informed consent was mentioned in 51% of studies in 2000, which gradually rose to 82% by the year 2007. Ethics committee approvals were mentioned in 2% of studies in 2000, and 25% of reports in 2007. 
Consent was reported to be written in only 40% of the studies where consent was reported, the content of the consent forms was mentioned in 17%, and the language of the consent form was reported in 3%. Regulation of ethical principles and formulation of necessary guidelines or rules for research as well as for publications are necessary and desirable to ensure the safety of participants and good quality of research. Almost all nations around the world are familiar with the concept of informed consent and ethical approvals, though the standards of their understanding and implementation differ. The system is paternalistic in some countries, where the clinician takes decisions for patients, and diametrically opposite, self-deterministic, in others, which emphasize patient autonomy. Other countries may lie somewhere along the spectrum and appear to be heading towards the transparent model. It is a well-recognized principle in medical ethics that the consent of a patient should be obtained before performing any procedure whatsoever, whether it be invasive or noninvasive, for the purpose of research or for treatment. Obtaining consent is also a legal requirement for conducting research. Informed consent is meaningful only when potential research subjects assess the relevant risks and benefits of a proposed intervention and then voluntarily give authorization to proceed. Informed consent assumes more importance in psychiatric research due to issues related to the competence of a person with a psychiatric disorder to give consent, the validity of consent given by a patient lacking insight or with impaired judgment, and proxy consent given by a relative or caregiver. 
Finally, one may question whether it is really possible to obtain free and informed consent from a psychiatric patient with impaired judgment. There is plenty of international literature on concerns regarding informed consent in psychiatric research, whether informed consent (for research or treatment) can be obtained from psychiatric patients, and how competency is established; however, there are very few reports from our country on different aspects of informed consent\u20135 competThere has been a steady growth of psychiatric research in India. The leading journals all over the world prescribe strict guidelines to be followed in the conduct and publication of research. There are no studies that have examined the issue of consent and ethical approval in published psychiatric research in India. In this study, we attempted to examine whether informed consent and ethical approval were reported in the published psychiatric research in the Indian Journal of Psychiatry (IJP), the official journal published by the Indian Psychiatric Society. All original research articles published in the Indian Journal of Psychiatry in the years 2000 and 2003 to 2007 were included in the study. Research articles that warrant consent, such as drug trials, trials involving electroconvulsive therapy, studies involving invasive procedures and clinical interviews undertaken for research, were included in the assessment. Retrospective chart reviews and case reports were excluded, as it is still not mandatory to obtain consent if the identity is not disclosed. Editorials, reviews, letters to the editor and book reviews were excluded, as consent was irrelevant. 
The information collected included: whether consent was obtained; the procedure for obtaining consent; whether the content of the consent form, the language in which it was written, etc. were adequately described; whether the consent was informed; whether adequate information about the research protocol was provided; whether the consent was written or oral; and who provided the consent. Information about ethical committee approval was also noted from each paper published. In addition, information regarding the study, such as the nature of the study, the nature of the intervention and the procedure followed in the study - invasive or noninvasive - was also obtained. Collection of blood and radiological investigations other than for routine clinical requirements were considered invasive. Procedures such as EEG, interviews and interview-based surveys were considered noninvasive. The data were computed and different aspects of consent were analyzed. There were a total of 157 published studies for which consent and ethical approval should have been obtained. None of the case reports obtained consent; however, they did not reveal the identity of the patient. Informed consent was mentioned in 51% of studies in 2000, which gradually rose to 82% in the year 2007 . In 2004The ethical committee approval was sought only by studies conducted at major institutes. The ethics committee approvals were mentioned in 2% of studies in 2000, which rose to 28% in the year 2006 and 25% of reports in 2007. None of the studies described the procedure of obtaining the consent. The consent sought was reported to be written in 40% of the studies reporting consent being sought. In other studies, it is not clear if the consent was verbal or written. 
The content of the consent forms has been mentioned briefly in a small proportion of reports (17%) and the language of the consent form has been specified only occasionally (3%). Consent was obtained more often in studies involving drug trials and invasive procedures than in studies with interviews or scales. Consent was written, informed, and approved by the local ethical committee when a pharmaceutical company funded the studies. In a couple of population-based surveys, consent and approval were also sought from local community leaders. There has been a gradual rise in the proportion of studies where consent has been reported. However, there is great scope for improvement in seeking and reporting the details of the consent. Ethical approval is still not reported for almost 75% of the reports, which is a cause for concern. This may be due to a lack of availability of ethics committees to the researchers. It may also be due to the perception of researchers that interview forms and scales can be administered as part of the study with informed consent alone, and without the need for approval of an ethics committee. Many institutes insist on ethical approval only for funded projects and drug trials, and not for other non-funded, interview- or assessment-based studies. It is evident from the results that more than half of the published studies did not mention even a single word on consent. Although consent was obtained more often in drug trials and studies with invasive procedures, the consent was inadequately described. Written informed consent and ethical committee approval for drug trials funded by pharmaceutical companies were reported in most trials. The reasons for the gross inadequacy of obtaining consent are difficult to explain. This may not indicate a deliberate attempt not to obtain consent on the part of the clinician or researcher. 
Possible speculations include the absence of monitoring, illiteracy of the patients, and patients and their relatives considering doctors the best judges. In India1Clinician researchers whose Institutes do not have ethics committees and those who are not based in academic Institutes can approach Independent Ethics Committees, which are private organizations that offer to scrutinize ethical aspects based on ICMR guidelines, ICH guid7Informed consent was not reported in about 50% of the published studies in 2000 and in about 20% of studies in 2007. It was inadequate and unwritten in the majority of the studies. However, there has been a gradual improvement in the reporting of the consenting process and ethical approvals over the years. Regulation of ethical principles and formulation of necessary guidelines/rules for research as well as for publications are necessary and desirable to ensure the safety of participants and good quality of research."} +{"text": "In motor learning, training on a task B can disrupt performance improvements on a previously learned task A, indicating that learning needs consolidation. An influential study suggested that this is the case also for visual perceptual learning Many studies of perceptual learning have shown that performance strongly improves during breaks, particularly when including sleep, indicating that perceptual learning undergoes consolidation The study was approved by the local institutional ethics committee. Forty-three na\u00efve participants from the Ecole Polytechnique F\u00e9d\u00e9rale de Lausanne (EPFL) joined the experiments after providing informed written consent. All participants had normal or corrected-to-normal vision as measured with the Freiburg visual acuity test We used the very same stimuli and procedure as previously described (see Dot Verniers were preDot Verniers consisted of three dots with a radius of 2\u2032 (arc min) and with a distance between the upper and lower dot of 20\u2032 . 
For aliFourteen participants took part in Experiment 1. At the start of each trial, participants fixated a central dot for 300 ms, which flashed to indicate the presentation of two dot Verniers in the lower right visual field . During the experiment, each offset size was presented for 20 consecutive trials before changing to another offset size. At each change, participants could rest their eyes. Each offset size was presented for 80 trials in one session . The order of offset sizes was determined randomly. The experiment consisted of five sessions performed on five consecutive days. Each day, seven participants first trained 400 trials with task A immediately followed by 400 trials with task B and session (session one or five) as factors. Percent correct was used as the dependent variable. Performance does not improve when bisection stimuli with different outer distances are presented interleaved trial by trial, i.e., roving Bisection stimuli were preBisection stimuli consisted of three vertical lines of length 20\u2032 (arcmin). For task A, the two outer lines were separated by 20\u2032 and task (task A or task B) as factors and baseline performance thresholds as dependent variables. Baseline performance was determined by calculating the mean of the estimated threshold in the two blocks. Change in performance for the control task was determined by comparing baseline performance before and after training. Two-tailed, paired In the control experiment, seven participants trained for five sessions and 400 trials per session with task A only , session (one or five) and condition (A-only or AB) was calculated with percent correct as dependent variable. There were main effects of offset size F\u200a=\u200a24.67, pHence, we failed to replicate the result of the study by Seitz et al. As a control, nine participants trained for two days with task A only . 
A two-way ANOVA with factors pre/post baseline thresholds (pre- or post-training) and group was conducted with performance threshold as the dependent variable. Only the effect of pre/post baseline thresholds was significant [F\u200a=\u200a29.15, pLong term consolidation is often important for perceptual learning. Many visual tasks often need sleep to improve performance We do not know why our results differ from those of the study by Seitz et al. A recent study reported retrograde interference in a texture discrimination task Why is perceptual learning possible when interfering stimuli are presented in separate sessions, but not when presented randomly interleaved trial-by-trial, in so-called roving conditions? Interestingly, in contrast discrimination tasks We previously tested whether perceptual learning was possible when trials with bisection stimuli were clustered, for example, A-A-A-B-A-A-A-B or A-A-A-A-A-A-B. The learning was still disrupted when up to six stimuli were clustered"} +{"text": "Wolbachia has been shown previously to induce pathogen interference phenotypes in mosquito hosts. Here we examine an artificially infected strain of Aedes polynesiensis, the primary vector of Wuchereria bancrofti, which is the causative agent of Lymphatic filariasis (LF) throughout much of the South Pacific. Embryonic microinjection was used to transfer the wAlbB infection from Aedes albopictus into an aposymbiotic strain of Ae. polynesiensis. The resulting strain (designated \u201cMTB\u201d) experiences a stable artificial infection with high maternal inheritance. Reciprocal crosses of MTB with naturally infected wild-type Ae. polynesiensis demonstrate strong bidirectional incompatibility. Levels of reactive oxygen species (ROS) in the MTB strain differ significantly relative to those of the wild-type, indicating an impaired ability to regulate oxidative stress. 
Following a challenge with Brugia pahangi, the number of filarial worms achieving the infective stage is significantly reduced in MTB as compared to the naturally infected and aposymbiotic strains. Survivorship of MTB differed significantly from that of the wild-type, with an interactive effect between survivorship and blood feeding. The results demonstrate a direct correlation between decreased ROS levels and decreased survival of adult female Aedes polynesiensis. The results are discussed in relation to the interaction of Wolbachia with ROS production and antioxidant expression, iron homeostasis and the insect immune system. We discuss the potential applied use of the MTB strain for impacting Ae. polynesiensis populations and strategies for reducing LF incidence in the South Pacific.Heterologous transinfection with the endosymbiotic bacterium Wuchereria bancrofti. Elimination of LF in the South Pacific requires an approach integrating both mass drug administration and strategies targeting the primary mosquito vector, Aedes polynesiensis. Ae. polynesiensis is naturally infected with Wolbachia, an endosymbiotic bacterium that is a focus of novel control strategies, due to its ability to affect mosquito reproduction and interfere with pathogen development. Artificial Wolbachia infections are associated with increased levels of reactive oxygen species (ROS), which can alter immune gene expression and inhibit dengue proliferation. Here, we describe the generation of an Ae. polynesiensis strain that has been artificially infected with Wolbachia from Ae. albopictus. The infection is stably maintained and causes conditional sterility when crossed with the wild-type. The artificially infected strain exhibits different ROS levels than the wild-type, indicating a decreased ability to regulate oxidative stress. The number of successfully developing infective stage filarial worms was reduced in the artificially infected strain. 
In addition, survival of the artificially infected strain was significantly lower than the wild-type. The artificially infected Ae. polynesiensis strain is discussed in relation to ongoing mosquito-borne disease control efforts.Lymphatic filariasis (LF), the leading cause of morbidity in South Pacific regions, is caused by a filarial worm, Lymphatic filariasis (LF) affects 120 million people globally and has been a leading cause of morbidity in South Pacific regions Aedes polynesiensis is the primary vector of Wuchereria bancrofti, the filarial nematode that causes LF in the South Pacific Wolbachia, a maternally inherited endosymbiont that infects a broad range of invertebrates Wolbachia infection in mosquitoes can induce cytoplasmic incompatibility (CI), a form of conditional sterility that results in early embryonic arrest when a Wolbachia infected male mates with an uninfected female or one harboring a different Wolbachia type Wolbachia induced CI in important mosquito species, either to suppress the population through releases of incompatible males or to harness CI as a gene-drive mechanism for spreading useful phenotypes, such as disease resistance, into a targeted population Ae. aegypti population with an artificially infected mosquito Wolbachia infections, both natural and artificial, have been shown to interfere with pathogen development within the mosquito host. The presence of a naturally occurring Wolbachia infection in Drosophila protects flies from virus-induced mortality Wolbachia infection within Culex quinquefasciatus, which is associated with a significant reduction in West Nile virus dissemination and transmission rates Wolbachia infections have been observed to affect dengue, chikungunya, Plasmodium and filarial worms in Ae. 
aegyptiPlasmodium falciparum oocysts was inhibited in Anopheles gambiae that were somatically inoculated with WolbachiaWolbachia infections and increased expression of key mosquito immune factors such as defensins, cecropins and Toll pathway genes Wolbachia infections are associated with increased oxidative stress in the form of reactive oxygen species (ROS) Ae. aegypti are linked to the activation of the Toll immune pathway Although the mechanism underlying pathogen interference is unknown, a possible explanation is the association between artificial wAlbB infection from Ae. albopictus into Ae. polynesiensis. Prior transfer of the wAlbB infection into Ae. aegypti induced strong CI in the resulting strain Wolbachia infection decreased filarial competence in Ae. aegyptiwAlbB infection into Ae. polynesiensis might facilitate a similar immunological response and reduce the intensity of filarial worm infection. Unlike other mosquito vector species, relatively little genomic information and molecular tools are available for Ae. polynesiensis, making examination of immune gene expression difficult. However, ROS measurement methods are a relatively robust indicator of immune system activation and have been applied to numerous species, including mosquito vectors Ae. polynesiensis strains infected with different Wolbachia types. The results show an association between Wolbachia type and ROS levels. Comparisons of the Ae. polynesiensis strains show significant differences in their ability to support Brugia pahangi development. We discuss the results in relation to a possible interaction between Wolbachia infection type, ROS levels and filarial competency and the potential application to public health strategies targeting decreased LF incidence.In this study, embryonic microinjection Ae. polynesiensis was generated by microinjecting embryos of the aposymbiotic APMT strain with cytoplasm from naturally superinfected Ae. albopictus embryos (HOU strain). 
The mosquito strains used in the injection experiment are listed in 0 females and 12 G0 males survived to adulthood, seven and six of which were infected with Wolbachia, respectively.The MTB strain of 0 females were screened for specific infection type had a 60% infection rate, with the majority of females single-infected with wAlbB only. PCR testing and selection of the subsequent generations were unable to sustain the superinfection. Thus, the resulting MTB strain is infected with wAlbB only. Using PCR-guided selection, infected females were continuously outcrossed with APMT males until G6. Beginning at G7, MTB females were mated with MTB males. Subsequent to G7, periodic testing of the MTB strain confirmed that the infection is stable and maternal inheritance rates remain at 100% (data not shown).The seven PCR positive Gion type . A G0 feCrosses were performed to examine for CI. The results demonstrate bidirectional incompatibility between APM and MTB. High egg hatch was observed in crosses between similar males and females . In contWolbachiaWolbachia-mediated effect in Ae. polynesiensis, we compared ROS levels in young adult females of MTB, APM and APMT. In addition to examining females fed sucrose only, we provided females with a blood meal to examine for an effect of blood feeding on ROS levels, which has been observed in prior studies Wolbachia had been manipulated . Specifically, a model with strain and blood meal status as factors and ROS level as the variable was significant . Blood meal status was significant , while the overall strain effect was not significant . A significant interactive effect was observed for strain\u00d7blood meal status . Following a blood meal, ROS levels in the APM strain remained similar to those observed in sucrose fed females . However, significant decreases in ROS levels are observed for blood fed females of APMT and MTB .
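The full factorial crossing logic used to detect CI (reciprocal APM × MTB crosses failing while within-strain crosses hatch normally) can be sketched in a few lines. The hatch proportions and the 20% cutoff below are hypothetical illustrations, not the study's data:

```python
# Sketch: classifying cytoplasmic incompatibility (CI) from egg-hatch rates
# in a full factorial cross. Values and threshold are hypothetical.

HATCH_THRESHOLD = 0.20  # illustrative cutoff separating compatible/incompatible

def classify_crosses(hatch):
    """hatch maps (female_strain, male_strain) -> egg hatch proportion.
    Returns the set of incompatible crosses and the set of strain pairs
    showing bidirectional CI (both reciprocal crosses fail)."""
    incompatible = {pair for pair, h in hatch.items() if h < HATCH_THRESHOLD}
    strains = {s for pair in hatch for s in pair}
    bidirectional = {
        tuple(sorted((a, b)))
        for a in strains for b in strains
        if a != b and (a, b) in incompatible and (b, a) in incompatible
    }
    return incompatible, bidirectional

# Hypothetical full factorial APM x MTB design (4 cross types)
example = {
    ("APM", "APM"): 0.90,  # within-strain crosses: high hatch expected
    ("MTB", "MTB"): 0.88,
    ("APM", "MTB"): 0.02,  # both reciprocal crosses fail -> bidirectional CI
    ("MTB", "APM"): 0.03,
}
incompat, bidir = classify_crosses(example)
```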
Post hoc Tukey HSD tests determined that after blood feeding, APM had significantly higher levels of ROS than APMT (p<0.05) and MTB (p<0.05), the latter of which were equivalent (p\u200a=\u200a0.9).ROS levels can be significantly affected in mosquitoes that are artificially infected with Plasmodium development in Anopheles gambiaeWolbachia infection can affect filarial worm development in Ae. aegyptiBrugia-infected blood meal and APMT . Equivalent worm loads were observed with APM and APMT .Prior studies have shown that changes in ROS levels can be detrimental to ood meal . MTB hadBrugia-infected blood. Therefore, a formal experiment was conducted to compare survivorship. Significant differences were observed in survivorship between strains fed Brugia-infected blood and strain . Despite the variation between replicates, the pattern between strains remained consistent . Similar to the pattern observed in the preceding experiment, APM was longer lived than APMT , which was longer lived than MTB . There was no significant interactive effect for blood meal type\u00d7strain .To examine for a role of the t Brugia . The res2\u200a=\u200a6.72, df\u200a=\u200a1, p<0.05) and MTB were longer lived when fed sucrose only. In contrast, APM females were longer lived following a blood meal .Comparing females fed either blood or sucrose only , the patwAlbB infection from Ae. albopictus to Ae. polynesiensis. PCR assays show that the infection is stable, with high maternal transmission. Overall mosquito survival after microinjection was high as compared to previous studies, which observed survival rates of less than 5% wPolA wAlbB Wolbachia types. Consistent with expectations, crosses between APM and MTB were bidirectionally incompatible. Although superinfected cytoplasm was injected, only the wAlbB Wolbachia infection was established. The separation of superinfected Wolbachia types after microinjection is consistent with prior reports wAlbB versus wAlbA within superinfected Ae. 
albopictusEmbryonic microinjection was used to transfer the Ae. polynesiensis strains is similar to that of previous reports, which have shown differing ROS levels resulting from transinfection with wAlbB Wolbachia infection in both adult female Ae. aegyptiAe. albopictus cell line 2O2 can be catalyzed via the Fenton reaction, along with excess labile iron Wolbachia-produced bacterioferritin can scavenge labile iron, with the potential for iron competition between Wolbachia and host Ae. aegypti) are not yet available with Ae. polynesiensis. Our results provide additional motivation for developing such tools and methods.The differing ROS levels observed in the artificially infected Wolbachia to upregulate dual oxidase (DUOX), which may influence the observed variation in ROS levels. A key component of innate immunity, the DUOX transmembrane protein is involved in ROS generation Wolbachia infection is recognized as foreign by its mosquito host Wolbachia genome retains genes for heme biosynthesis Wolbachia infections have been shown to buffer iron flux in insects, allowing iron homeostasis despite large influxes and limiting the deleterious effect of labile iron wAlbB did not restore MTB to the homeostasis phenotype observed in the wild-type APM strain, indicating that not all Wolbachia types are equivalent and that the wPolA infection in Ae. polynesiensis represents an evolved symbiosis.Particularly intriguing is the previously reported ability of 2O2 was reduced in flight muscles after blood feeding in Ae. aegyptiAn. gambiae hemolymph following a blood meal Ae. aegypti, blood feeding was associated with a significant decrease in ROS levels in midgut tissue through activation of a heme-mediated protein kinase C pathway Wolbachia in influencing iron metabolism and the mosquito immune system.Whole mosquitoes were examined in this study, which may mask a tissue-specific effect. For example, mitochondrial generation of HAe. 
polynesiensis differed significantly between the strains. Specifically, the wild-type Wolbachia infected APM and its aposymbiotic counterpart (APMT) had similar numbers of successfully developing, infective L3 worms. This is comparable to the result of previous studies in which the removal of naturally occurring Wolbachia in Ae. pseudoscutellaris had no effect on the mean number of worms Wolbachia was artificially introduced into Ae. aegyptiB. pahangi numbers were observed in the artificially infected strain, relative to the naturally uninfected Ae. aegypti. Using the substantial genomic information available for Ae. aegypti, the authors speculated upon an association between an observed constitutive up-regulation of immune genes and an observed inhibition of filarial worm development. With the future development of additional genetic tools for Ae. polynesiensis, a similar approach can be used downstream to examine for an impact of artificial Wolbachia infection.The number of infective stage filarial worms that developed within Wolbachia infections can detrimentally affect the fitness of their hosts Wolbachia infection types when fed on infected or uninfected blood. The decreased number of L3 filarial worms within MTB may be due, at least in part, to a reduced ability of MTB females to tolerate filarial worm infections and their premature deaths prior to dissection assays. The observed ROS variation is an additional potential explanation for the observed variation in filaria development. Recent studies show that changes in ROS levels can affect pathogen development. In An. gambiae, high levels of ROS were associated with increased melanotic encapsulation of Plasmodium parasites Ae. aegypti, increased ROS expression is associated with induction of the Toll pathway, which mediates the expression of antimicrobial peptides and antioxidants to balance oxidative stress and is associated with reduced dengue virus titer Plasmodium parasites Wolbachia infection in Ae. 
polynesiensis were to affect the regulation of iron, as previously discussed, then filarial worm development and survival may be affected in the MTB strain. However, a simple direct association with overall ROS levels cannot explain the pattern of differential filarial worm development that was observed here, since the overall ROS levels were lower in MTB, relative to wild type mosquitoes. Furthermore, the lower ROS levels observed in the aposymbiotic APMT strain were not observed to be associated with reduced filarial development.Artificial The variation observed between experimental replicates is not unexpected and is similar to prior reports Filarial worm infections in mosquitoes are not benign. They can cause damage to the midgut and flight muscles wPolA infection and Ae. polynesiensis, since increased survival of blood fed females is adaptive for both the anautogenous mosquito and the maternally inherited Wolbachia infection.Blood contains an important nutritional component for adult mosquitoes 2O2 levels of the University of Kentucky (Protocol number: 00905A2005).Aedes albopictus (HOU) and an aposymbiotic Ae. polynesiensis strain (APMT) were used as Wolbachia donor and recipient, respectively. The HOU donor strain is naturally super-infected with two Wolbachia types, wAlbA and wAlbB wPolA and exhibits a 100% infection rate in wild populations ad libitum, and a blood meal was given once a week with anesthetized mice.Unless otherwise specified, mosquitoes were maintained using standard insectary conditions at 28\u00b12\u00b0C, 75\u00b110%RH, and a photoperiod of 18\u22366 h (L\u2236D). Larvae were reared in optimal conditions, at low density in excess of 6% liver powder solution , until pupation.
Adult mosquitoes were provided with a 10% sucrose solution Collection, preparation and microinjection of embryos were based upon successful techniques used for previous mosquito transfections Drosophila vials (Fisher Scientific) containing wet germination paper and allowed to oviposit. Recipient embryos (APMT) to be injected were collected, aligned on wet germination paper, briefly desiccated and covered with water-saturated halocarbon 700 oil (Sigma-Aldrich Co.). Donor HOU embryos were treated similarly, but not desiccated.Blood-fed APMT females were held in 0) and reared using standard maintenance conditions.Cytoplasm was withdrawn from the posterior of donor HOU embryos and injected using an IM 300 microinjector into the posterior of the recipient APMT embryos. Recipient embryos were injected up to 90 minutes post-oviposition. After injection, the embryos were incubated under standard conditions for approximately 40 minutes. Injected embryos were removed from oil and transferred to wet germination paper, where they were allowed to develop for 5 days. The eggs were hatched (G0) were isolated as virgins and mated with APMT males, yielding a new strain named MTB. After oviposition, G0 females and males were assayed for both presence of Wolbachia infection and type using PCR (see below) from infected G0 females were isolated as virgins and outcrossed with APMT males. All G1 females that oviposited were tested for Wolbachia infection by PCR. PCR-guided selection was performed for 6 generations (G1\u2013G6) , and PCR was used to monitor the frequency of infection periodically through the following generations.Females of the parent generation (Gee below). Females (G1\u2013G6) . At G7 tWolbachia specific primers and PCR. Adults were homogenized in 100 \u00b5l of buffer containing 10 mM Tris-HCl, 1 mM EDTA and 50 mM NaCl using a Mini-beadbeater , boiled for 5 minutes and centrifuged at 14,000 rpm for 5 minutes. Two \u00b5l of supernatant were used for each PCR reaction. 
PCR reactions were amplified in 50 mM KCl, 20 mM Tris-HCl (pH 8.4), 1.5 mM MgCl2, 0.25 mM dNTPs, 0.5 mM primers and 1 U Taq DNA polymerase in a total volume of 25 \u00b5l. Wolbachia infection in all strains was confirmed using general Wolbachia primers 438F (5\u2032CAT ACC TAT TCG AAG GGA TAG-3\u2032) and 438R (5\u2032AGC TTC GAG TGA AAC CAA TTC-3\u2032) and PCR cycling conditions of 94\u00b0C 2 minutes, 39 cycles of 94\u00b0C for 30 seconds, 55\u00b0C for 45 seconds and 72\u00b0C for 1 minute 30 seconds, followed by a final extension temperature of 72\u00b0C for 10 minutes. Infection type of all strains was confirmed using A-clade (136F and 691R) or B-clade (81F and 522R) specific primers All infection types were confirmed using Similarly aged egg papers from APM and MTB were hatched concurrently in dilute liver powder solution (\u223c0.6 g/L). One hundred first instar larvae were moved into a rearing container and fed optimally until pupation. Pupae were isolated in individual test tubes to ensure virginity. After eclosion, 20 virgin adults were introduced into a crossing cage at a 1\u22361 sex ratio and allowed to mate. A full factorial crossing design between APM and MTB was implemented, and four replicates were performed for each cross . An ovipTo determine ROS levels in mosquitoes fed sucrose only, whole bodies of seven-day-old APM, APMT and MTB were collected in 150 \u00b5l of 1\u00d7 PBS containing 2 mg/ml of the catalase inhibitor 3-amino-1,2,4-triazole. To determine ROS levels in blood fed mosquitoes, six-day-old APM, APMT and MTB were provided with a blood meal from an anesthetized mouse. Twenty-four hours after blood feeding, the midgut was dissected from the mosquito and the blood bolus was flushed from the midgut using 1\u00d7 PBS with catalase inhibitor. Mosquito carcasses and midgut tissues were collected in 1\u00d7 PBS with catalase inhibitor.For both treatments, samples were homogenized then centrifuged for 5 minutes at 10,000 g.
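As a quick scheduling check, the cycling program quoted above (94°C for 2 minutes; 39 cycles of 30 s, 45 s and 90 s; final extension for 10 minutes) sums to roughly two hours of programmed hold time. This ignores ramp times between temperatures, so actual run time on a thermocycler is longer:

```python
# Sketch: total programmed hold time of the PCR cycling conditions above.
# Ramp times between temperature steps are not included.

def program_seconds(initial_s, cycles, steps_s, final_s):
    """Sum an initial denaturation, repeated cycle steps, and final extension."""
    return initial_s + cycles * sum(steps_s) + final_s

total = program_seconds(initial_s=2 * 60,      # 94C, 2 minutes
                        cycles=39,
                        steps_s=(30, 45, 90),  # 94C / 55C / 72C per cycle
                        final_s=10 * 60)       # 72C, 10 minutes
minutes = total / 60  # 7155 s, i.e. about 119 minutes of hold time
```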
The supernatant was filtered through a 10 K molecular weight cutoff spin filter . The eluate was collected and tested using a Hydrogen Peroxide Assay kit (BioVision) following the manufacturer's instructions. The fluorescence intensity was detected with Excitation/Emission 544/590 using a fluorescence microplate reader . Five biological replicates, with three females for each strain, were used for each treatment. A generalized linear model with a normal distribution was used to determine if ROS levels differed between strain, feeding status or strain\u00d7feeding status. The sucrose treatment and the blood treatment were analyzed using separate ANOVAs with post hoc Tukey HSD comparisons.Brugia pahangi-infected dog blood was provided by the NIH/NIAID Filariasis Research Reagent Resource Center at the University of Georgia. Egg papers for APM, APMT and MTB were hatched concurrently and reared under standard maintenance conditions. Adult female mosquitoes were anesthetized using chloroform, and 75\u201390 mosquitoes were placed into cages. They were provided with a 10% sucrose solution and given 3 days to acclimate to the cage. Females aged 3\u20135 days were sucrose starved for 6 hours prior to blood feeding. They were given a Brugia-infected blood meal (10 microfilariae/\u00b5l) using sausage casing and a Hemotek membrane feeding system that maintained the blood at 37\u00b0C. All mosquito strains were allowed access to blood for 2 hours.Three replicates were performed to test for relative filarial susceptibility between strains. After feeding, females were allowed to rest for one hour before sorting. All mosquitoes were briefly anesthetized using chloroform and observed under a microscope for presence of a blood bolus. Blood fed and non-blood fed females were placed into separate cages. Ten days after feeding, surviving blood fed females were anesthetized on ice and dissected in sterilized Hank's balanced salt solution (Sigma-Aldrich).
Individual mosquitoes were examined for L3 parasites by microscopy. The total number of filarial worms in each mosquito was recorded.To determine whether worm load data were normal, a Shapiro-Wilk test was used . We built a generalized linear model with a Poisson distribution to determine if mean worm load differed across replicates or between strains. Post-hoc contrasts were used to compare worm loads between strains. To correct for multiple comparisons, we used the Benjamini-Hochberg correction with an \u03b1 value of 0.05 Brugia-infected blood only, 2) comparisons between mosquitoes fed uninfected and Brugia-infected blood meals, and 3) comparisons between mosquitoes that were blood fed or fed sucrose only.Mosquito rearing and blood feeding methods were the same as those described in \u201cfilarial susceptibility testing.\u201d We recorded the number of mosquitoes alive and dead ten days after feeding on different blood meal types to compare differences in survival between APM, APMT and MTB. Three separate experiments were performed: 1) comparisons between strains fed on For each of the above experiments, we built a generalized linear model with a binomial distribution to determine if survivorship at day 10 differed between replicate, strain and blood meal type . Post-hoc contrasts were used to compare survivorship between strains. To correct for multiple comparisons, we used the Benjamini-Hochberg correction with an \u03b1 value of 0.05"} +{"text": "Ficedula nestlings appear to have a better intrinsic adaptation to food limitation late in the breeding season compared with nestling collared flycatchers. We discuss possible implications for gene flow between the two species.Ecological speciation predicts that hybrids should experience relatively low fitness in the local environments of their parental species.
In this study, we performed a translocation experiment of nestling hybrids between collared and pied flycatchers into the nests of conspecific pairs of their parental species. Our aim was to compare the performance of hybrids with purebred nestlings. Nestling collared flycatchers are known to beg and grow faster than nestling pied flycatchers under favorable conditions, but to experience higher mortality than nestling pied flycatchers under food limitation. The experiment was performed relatively late in the breeding season when food is limited. If hybrid nestlings have an intermediate growth potential and begging intensity, we expected them to beg and grow faster, but also to experience lower survival than pied flycatchers. In comparison with nestling collared flycatchers, we expected them to beg and grow slower, but to survive better. We found that nestling collared flycatchers indeed begged significantly faster and experienced higher mortality than nestling hybrids. Moreover, nestling hybrids had higher weight and tended to beg faster than nestling pied flycatchers, but we did not detect a difference in survival between the latter two groups of nestlings. We conclude that hybrid A major general goal in speciation research is to investigate the mechanisms leading to population divergence and reproductive isolation and pied flycatchers (F. hypoleuca), causes environmentally dependent selection on nestling hybrids. Collared and pied flycatchers are genetically separated and a recent study reveals genetic differentiation indicative of periods of allopatric divergence alternated with periods of secondary contact . Each year, records are kept of laying date, hatching date, clutch size, and fledging success of breeding birds. All breeding individuals and their nestlings are caught yearly whereupon we ring them, collect blood samples, and take morphological measurements. 
We estimated the breeding habitat composition for a subset of breeding records where both parents had been identified as either pied or collared flycatchers. The relative abundance of different tree species around the nest-boxes was estimated using a \u2018relascope\u2019, assigning individual trees into three categories based on trunk size and distance from the nest-box connected to Digital video cameras (JVC GR-D30). Recordings were made for two 1-h periods on two different mornings. Nestlings were marked individually with water-soluble white out just before recording. A digital videocassette recorder was used to analyze the videotapes. In total, 6468 begging events and 1148 feedings were recorded for 107 offspring in 20 different nests: 10 attended by collared flycatcher parents and 10 attended by pied flycatcher parents . Wilcoxon signed-rank tests were used to compare the mean hatching dates of the experimental nests versus the natural nests, and to compare brood sizes between the experimental nests attended by the two parental species. At each feeding event, nestlings were ranked by the order of when they started to beg, so that the first nestling to beg was ranked nr 1 and so forth. Nestlings begging at the same time got the same ranking, and nestlings that did not beg at all got the last rank. To test whether begging rank influenced the chance of being fed, we used a generalized mixed model with being fed (1 or 0) as the response variable and begging rank as the explanatory variable. Nestling identity (ring number), rearing nest identity, and year were added as random factors to control for repeated measures on the same individual and variation between nests and years. Logistic regression was applied to compare survival of the nestlings with survival (1 or 0) as the response variable and year as the explanatory variable.
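The ranking rule described above (ties share a rank; nestlings that never beg all receive the last rank) might be implemented as follows. Dense ranking for ties is one plausible reading of the rule, and the onset times are hypothetical:

```python
# Sketch of the begging-rank assignment: rank nestlings by begging onset,
# give tied nestlings the same rank, and give non-beggars the last rank.
# Onset times are hypothetical; None marks a nestling that never begged.

def begging_ranks(onset_times):
    """onset_times: list of begging onset times, one per nestling
    (None = never begged). Returns one rank per nestling (1 = first to beg)."""
    n = len(onset_times)
    distinct = sorted({t for t in onset_times if t is not None})
    rank_of = {t: i + 1 for i, t in enumerate(distinct)}  # dense ranking
    # non-beggars all get the last rank in the brood
    return [rank_of[t] if t is not None else n for t in onset_times]

# Hypothetical brood: two nestlings beg simultaneously at t = 1.0 s,
# one begs later, one never begs.
ranks = begging_ranks([1.0, 1.0, 2.5, None])
```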
To compare the mean begging ranks and weights of hybrids when sharing nests with pied and collared flycatchers, respectively, we used a mixed effects linear model with species identity (hybrid or purebred) and hatching date as fixed effects, and year and rearing nest as random factors to account for variation between years and the non-independence of nestlings within the same nest. Finally, we used the same model to compare weights and growth rates between hybrids reared in the two types of environments. We standardized the hatching date using the residuals from an ANOVA with year as a factor and hatching date as the response variable from all breeding pairs. JMP 9 was used for analyzing the data except where otherwise noted.In 2007 and 2010, we performed artificial breeding experiments in aviaries to obtain F1 hybrid offspring for this study . Male piN = 144, df = 1, \u03c72 = 14.55, P < 0.001) and collared flycatcher pairs . There were no significant differences between heterospecific pairs and collared flycatchers , or between the two types of heterospecific pairs, that is, male collared flycatchers paired to female pied flycatchers and vice versa .Pied flycatcher breeding territories had a significantly lower proportion of deciduous trees as compared with the territories of heterospecific pairs and significantly earlier than pied flycatchers . There were no significant differences in the standardized timing of breeding between the two types of heterospecific pairs .Heterospecific pairs showed an intermediate standardized timing of breeding as compared to conspecific pairs: significantly later than collared flycatchers . 
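The hatching-date standardization used in the analyses above (residuals from an ANOVA with year as the only factor) reduces to subtracting each year's mean hatching date, since the fitted value for every record is simply its year mean. A minimal sketch with hypothetical records:

```python
# Sketch: standardizing hatching date as residuals from a year-only ANOVA.
# With year as the sole factor, residual = hatching day minus that year's
# mean hatching day. The (year, day) records below are hypothetical.

from collections import defaultdict

def standardize_by_year(records):
    """records: list of (year, hatching_day). Returns residuals in input order."""
    by_year = defaultdict(list)
    for year, day in records:
        by_year[year].append(day)
    year_mean = {y: sum(days) / len(days) for y, days in by_year.items()}
    return [day - year_mean[year] for year, day in records]

resids = standardize_by_year([(2007, 40), (2007, 44), (2010, 50), (2010, 46)])
```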
As shown before, the reproductive success of collared flycatchers dropped steadily across the breeding season, while the reproductive success of pied flycatchers showed no such trends, rather the opposite , after removing the non-significant interaction term between breeding time and pairing type.We investigated how the seasonal decline in food availability influenced the reproductive success of collared flycatchers, pied flycatchers, and heterospecific pairs. Analyses of long-term breeding data revealed a significant interaction between pairing type and timing of breeding on the reproductive success of these naturally breeding pairs . We compared the begging behavior, growth patterns, and survival of nestling hybrids sharing nests with purebred nestlings. There were no significant differences in brood size between experimental nests attended by collared or pied flycatcher parents , that is, the begging ranks were on a comparable scale. Begging rank significantly influenced the chance of being fed as revealed by a generalized mixed model , that is, nestlings with a lower begging score were more likely to be fed first. Nestling collared flycatchers begged significantly faster than hybrid nestlings when sharing nests attended by collared flycatcher parents , but no significant interaction between date and type of nestling, that is, both hybrids and collared flycatcher nestlings begged more later in the season. By contrast, there was a non-significant tendency for hybrids to beg faster than nestling pied flycatchers in nests attended by pied flycatcher parents .In order to investigate whether intrinsic differences between hybrid and purebred nestlings influenced their relative fitness during the nestling stage, we artificially created broods containing two types of nestlings.
The experiments were carried out relatively late in the breeding season, as revealed by a comparison of the standardized hatching dates of experimental and natural nests or 12 days old , and there was a significant effect of year on survival . Nestling hybrids raised in pied flycatcher nests were significantly heavier than the nestling pied flycatchers both at the age of 3 days and 12 days . There was a significant effect of year on survival , but no significant differences in survival between these two types of nestlings . Hybrids reared in the two types of social environments did not differ in weight compared with each other at day 3 , or day 12 , and there was no difference in growth rate .The mass of nestling hybrids did not differ from the mass of nestling collared flycatchers when they were 3 days old , cactus finches (G. scandens), and their hybrids changes are depending on fluctuations in environmental conditions , we would expect a relatively high production of hybrid nestlings due to a higher tolerance to poor conditions in hybrid nestlings compared with collared flycatcher nestlings. Early in the breeding season (or in years with abundant food availability), we would expect the opposite. In Darwin's finches, selection on medium ground finches (In addition to pre-zygotic barriers such as plumage and song (Svedin et al. In summary, in this study, we have shown that a life-history divergence between two closely related species can induce environmentally dependent relative fitness of hybrid offspring. Thus, the impact of the environment on the direction and level of gene flow between parental species need not be limited to effects of pre-zygotic isolation once genetic incompatibilities are present."} +{"text": "In the cohort study, 107 patients undergoing LH were included. Sixteen percent of the total procedure time was spent on colpotomy (SD 7.8\u00a0%). BMI was positively correlated with colpotomy time, even after correcting for longer operation time. 
No relation was found between colpotomy time and blood loss or uterine weight. The surgical colpotomy step in laparoscopic hysterectomy should be simplified as this study demonstrates that it is time consuming and is considered to be more difficult than in other hysterectomy procedures. A vaginal approach to the colpotomy is proposed to achieve this simplification.New surgical techniques and technology have simplified laparoscopic hysterectomy and have enhanced the safety of this procedure. However, the surgical colpotomy step has not been addressed. This study evaluates the surgical colpotomy step in laparoscopic hysterectomy with respect to difficulty and duration. Furthermore, it proposes an alternative route that may simplify this step in laparoscopic hysterectomy. A structured interview, a prospective cohort study, and a problem analysis were performed regarding experienced difficulty and duration of surgical colpotomy in laparoscopic hysterectomy. Sixteen experts in minimally invasive gynecologic surgery from 12 hospitals participated in the structured interview using a 5-point Likert scale. The colpotomy in LH received the highest scores for complexity (2.8\u2009\u00b1\u20091.2), compared to AH and VH. Colpotomy in LH was estimated as more difficult than in AH (2.8 vs 1.4, New surgical techniques and technical equipment have attempted to facilitate laparoscopic hysterectomy (LH), after shortcomings of LH in comparison with vaginal hysterectomy (VH) and abdominal hysterectomy (AH) were demonstrated . New altThe aims of this study were to substantiate our hypothesis and to further evaluate the possibilities of a vaginal approach to colpotomy. The experienced difficulty, the duration of the surgical colpotomy step, and possible agents of change are evaluated. 
In addition, the idea of a vaginal approach to colpotomy is shaped into a new surgical instrument that may simplify colpotomy. Firstly, to investigate the difficulty of the colpotomy procedure, a structured interview was performed among experts in minimally invasive gynecologic surgery working at different hospitals throughout the Netherlands. The interview assessed the participants' perception regarding the surgical step of the colpotomy. Furthermore, they were asked about their opinion regarding several features of the proposed facilitation of the colpotomy. Figure ParticipNext, a prospective cohort study was performed at two hospitals specialized in minimally invasive gynecologic surgery. From June 2010 to May 2014, LH procedures were timed to assess the duration of colpotomy. The total operating time (TOT) was defined as the time from the insertion of the Veress needle to the final stitches used for closing the last trocar incision site. Colpotomy time (CT) was defined as the time from the first incision in the vaginal fornix until the complete separation of the cervix from the vaginal wall. An extrafascial technique was used to perform total laparoscopic hysterectomy. The vaginal wall was opened anteriorly at the vesicovaginal fold, after which the colpotomy was completed. All consecutive LH procedures were eligible for inclusion. This study was exempt from approval by the medical ethics committee. Procedures were performed by five gynecologists who perform LH on a regular basis and have experience in well over 100 TLH procedures. The number of participating gynecologists was chosen to enhance the external validity of the outcome. Inter-surgeon variability was minimized by using similar surgical procedure protocols. Furthermore, all surgeons received their training at the Leiden Residency Program. The Valtchev or Clermont Ferrand uterine manipulator was used. Bipolar and ultrasonic instruments were used for colpotomy. Basic patient characteristics were gathered.
The uterine weight and the total amount of blood loss were measured in the operating room. Patients were excluded in case of missing colpotomy time. Complications were classified according to the severity of the complications on the basis of the framework set by the Dutch Society for Obstetrics and Gynecology (NVOG) [t test and a paired t test were used to compare experts versus residents and the type of hysterectomy, respectively. For the prospective study, t tests were used when applicable. A Pearson\u2019s correlation coefficient and analysis of variance (ANOVA) techniques were used to test any correlation between different variables and colpotomies. A generalized linear model was performed to assess the independent effect of certain parameters ) on the duration of colpotomy. All tests were performed at the .05 level of significance. SPSS 20 was used to analyze all data.Baseline characteristics were summarized by means and standard deviations and, when applicable, by numbers and percentages. For the structured interview, an independent sample p\u2009<\u2009.001). The same trend is seen for the difficulty of colpotomy in LH versus VH (2.8 vs 2.0); however, this difference was not significant (p\u2009=\u2009.08). With respect to the vaginal approach to simplify colpotomy, the following functions of the envisaged instrument were regarded as moderately important to important by the participants: the ability to manipulate the uterus , the presence of coagulation to stop bleeding during the colpotomy procedure , and the existence of markings on the device to help visualize the device by the camera .Sixteen experts from 12 hospitals were interviewed (Tables\u00a0p\u2009=\u2009.001), and the generalized linear model confirmed the identified correlation and proved that it was independent from the other variables were excluded due to missing colpotomy time. 
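The Pearson correlation used above (e.g., BMI versus colpotomy time) can be written out directly in plain Python. The paired values below are hypothetical illustration data, not the cohort's measurements:

```python
# Sketch: Pearson's correlation coefficient between BMI and colpotomy time.
# The data points are hypothetical and perfectly linear, so r is ~1.0.

import math

def pearson_r(x, y):
    """Pearson product-moment correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

bmi = [22.0, 25.0, 28.0, 31.0]         # hypothetical patients
ct_minutes = [10.0, 13.0, 16.0, 19.0]  # hypothetical colpotomy times
r = pearson_r(bmi, ct_minutes)
```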
Patient characteristics and procedure data are shown in Table.

This study demonstrates that the surgical colpotomy is a time-consuming step in the LH procedure, and one that is preceded by the hazardous dissection of the uterine arteries, bladder, and cervix, risking blood loss and ureter injuries. Colpotomy time comprises 16 % of the total operating time, in one case even reaching 45 %. Albeit an extreme value, this does demonstrate the difficulty that can be experienced when performing this task. This is substantiated by our structured interview, in accordance with a previous study.

A prototype for a vaginal colpotomizer device has been assembled. In all, the significance of the present study is the clinically driven approach to innovating the difficult surgical colpotomy step. Experiences in the past have shown the need for a careful introduction of new technology into daily practice.

Earlier studies have taught us that LH has certain disadvantages with respect to patient safety when compared to VH and AH. Technical developments have already contributed to the enhanced safety of LH. However, further simplifying LH is necessary, since reducing the operation time of LH may reduce health care costs and complication rates.

Cisplatin is one of the first-line drugs for urothelial bladder cancer (UBC) treatment. However, its considerable side effects and the emergence of drug resistance are becoming major limitations for its application. This study aimed to investigate whether matrine and cisplatin could exert a synergistic anti-tumor effect on UBC cells.

A cell viability assay was used to assess the suppressive effect of matrine and cisplatin on the proliferation of the UBC cells. A wound healing assay and a transwell assay were applied, respectively, to determine the migration and invasion ability of the cells.
The distribution of cell cycle phases, the generation of reactive oxygen species (ROS) and the apoptosis rate were detected by flow cytometry (FCM). The expression of proteins in apoptotic signaling pathways and of epithelial–mesenchymal transition (EMT) related genes was surveyed by western blotting. The binding modes of the drugs within the proteins were examined with the CDOCKER module in DS 2.5.

Both matrine and cisplatin inhibited the growth of the UBC cells in a time- and dose-dependent manner. When matrine was combined with cisplatin at a ratio of 2000:1, the two drugs exerted a synergistic inhibitory effect on the UBC cells. The combined treatment impaired cell migration and invasion ability, arrested the cell cycle in the G1 and S phases, increased the level of ROS, and induced apoptosis in EJ and T24 cells in a synergistic way. In all treated groups, the expression of E-cadherin, β-catenin, Bax, and cleaved caspase-3 was up-regulated, while the expression of fibronectin, vimentin, Bcl-2, caspase-3, p-Akt, p-PI3K, VEGFR2, and VEGF was down-regulated; among the groups, the combination of matrine and cisplatin showed the most significant difference. Molecular docking predicted that matrine and cisplatin could be docked into the same active sites and interact with different residues within the tested proteins.

Our results suggest that the combination of matrine and cisplatin can synergistically inhibit UBC cell proliferation by down-regulating the VEGF/PI3K/Akt signaling pathway, indicating that matrine may serve as a new option in combination therapy for the treatment of UBC.

Urothelial bladder cancer (UBC) is the 7th most common tumor worldwide in males and the 17th in females, and is one of the most fatal urothelial malignancies, although almost three quarters of newly diagnosed UBCs are still not invasive.
Cisplatin, one of the best-known chemotherapeutic drugs, has been applied in the treatment of many human cancers, including bladder, lung, head and neck, ovarian, and testicular cancers. However, the major limitations to its application are, first, drug resistance, potentially caused by changes in cellular uptake and efflux of cisplatin, increased biotransformation and detoxification in the liver, and increased DNA repair and anti-apoptotic mechanisms, and second, its considerable side effects, e.g. severe kidney problems, allergic reactions, decreased immunity to infections, gastrointestinal disorders, hemorrhage, and hearing loss. Therefore, combined treatments of cisplatin and other anticancer agents have attracted attention as a way to overcome drug resistance, weaken toxicity, and increase chemotherapeutic efficacy.

Matrine is an alkaloid from Sophora flavescens, named 'Ku-Shen' in traditional Chinese medicine, with the molecular formula C15H24N2O, a molecular mass of 248.37 g/mol, and PubChem Compound ID (CID) 91466. Matrine was dissolved in physiological saline to make a 100 mM stock solution and stored at −20 °C for future use. Cisplatin was dissolved in physiological saline to make a 10 mM stock solution and stored at −20 °C for future use.

The UBC cell lines EJ, T24, BIU and 5637 were gifts from the State Key Laboratory of Oncology in South China. All cell lines were cultured in RPMI 1640 medium supplemented with 10% fetal bovine serum, 100 U/mL penicillin and 100 U/mL streptomycin in a humidified incubator at 37 °C with 5% CO2.

The anti-proliferative effects of matrine and cisplatin on the UBC cell lines were detected with a CCK-8 kit. EJ, T24, BIU, or 5637 cells (8.0 × 10^3 cells per well) were seeded into 96-well plates. The cells were cultured with various concentrations of matrine and cisplatin, separately and concurrently. After treatment for 24, 48 or 72 h, cells were incubated for an additional 90 min with 10 μL of CCK-8 solution.
Finally, the optical density (OD) was measured at 450 nm with a microplate reader. The proliferative inhibition rate was calculated as: proliferative inhibition rate = (1 − OD_treated/OD_control) × 100%. The 50% inhibitory concentration (IC50) was calculated by nonlinear regression analysis using SPSS 20.0 software.

The combination index (CI) was determined by isobologram analysis for the combination study, based on the Chou–Talalay method. The data obtained from the cell viability assay were normalized to the control group and expressed as % viability. The data were then converted to fraction affected (Fa) and analyzed with the CompuSyn program based on the Chou–Talalay method. The CI value represents the mode of interaction between two drugs: CI < 1 indicates synergism, CI = 1 an additive effect, and CI > 1 antagonism.

For the wound healing assay, EJ and T24 cells (10^6/1 mL/well) in logarithmic phase were plated into 6-well plates. After 24 h, the adherent cells were scratched along a straight line using a 200 μL pipette tip, and the scraped cells and cell debris were washed away with PBS three times. Fresh serum-free medium containing the various drugs was added to the 6-well plates, and the cells were allowed to repair the scratches for 24 h. Pictures were taken at 0 and 24 h at the same scratched location, and Adobe Photoshop CS6 was used to measure the distance moved by the cells.

For the transwell assay, EJ and T24 cells (1 × 10^4) in logarithmic phase were resuspended in 500 μL serum-free medium containing the different drug treatments and plated in the upper compartment, and 800 μL complete medium including 10% FBS was added to the lower compartment.
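The Chou–Talalay calculation described above can be sketched numerically. This is a minimal illustration, not the CompuSyn implementation: each single drug's dose-effect data are fit to the median-effect equation fa/fu = (D/Dm)^m, and the CI for a combined dose (d1, d2) producing effect level fa is d1/Dx1 + d2/Dx2. All doses and effect values below are invented:

```python
import numpy as np

def fit_median_effect(doses, fa):
    """Fit log(fa/fu) = m*log(D) - m*log(Dm) by least squares.
    Returns (m, Dm); Dm is the median-effect dose (the IC50)."""
    fu = 1.0 - fa
    y = np.log(fa / fu)
    x = np.log(doses)
    m, c = np.polyfit(x, y, 1)
    Dm = np.exp(-c / m)
    return m, Dm

def combination_index(d1, d2, fa, fit1, fit2):
    """CI = d1/Dx1 + d2/Dx2, where Dx is the single-drug dose
    required to reach fraction affected fa."""
    def dose_for_fa(m, Dm):
        return Dm * (fa / (1.0 - fa)) ** (1.0 / m)
    Dx1 = dose_for_fa(*fit1)
    Dx2 = dose_for_fa(*fit2)
    return d1 / Dx1 + d2 / Dx2

# Invented dose-response data for two drugs (arbitrary units).
doses_a = np.array([1.0, 2.0, 4.0, 8.0])
fa_a = np.array([0.20, 0.35, 0.55, 0.75])
doses_b = np.array([1.0, 2.0, 4.0, 8.0])
fa_b = np.array([0.25, 0.40, 0.60, 0.80])

fit_a = fit_median_effect(doses_a, fa_a)
fit_b = fit_median_effect(doses_b, fa_b)

# CI for a combination (d1 = 1.2, d2 = 1.0) producing fa = 0.5.
ci = combination_index(1.2, 1.0, 0.5, fit_a, fit_b)
print(ci)  # CI < 1 would be read as synergism
```

Note that the median-effect fit also yields the IC50 directly (Dm is the dose at fa = 0.5), which parallels the nonlinear-regression IC50 estimation mentioned in the text.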
The transwell filters were placed in a humidified incubator at 37 °C with 5% CO2 for 24 h. Afterwards, the cells attached to the lower surface of the membrane were fixed with 4% paraformaldehyde at room temperature for 30 min and stained with 0.5% crystal violet, while the cells remaining on the upper surface of the filter were removed by wiping with a cotton swab. The number of stained cells on the lower surface was then counted under the microscope; a total of 5 fields were counted for each transwell filter. For the invasion assay, the transwell filters were coated with a thin layer of Matrigel Basement Membrane Matrix.

For cell cycle analysis, a cell cycle detection kit purchased from 4A Biotech Co., Ltd. was used. EJ and T24 cells (3 × 10^5/1 mL/well) in logarithmic phase were plated into 6-well plates with complete medium containing the different drug treatments and incubated at 37 °C with 5% CO2 for 48 h. The treated cells were collected and washed with cold PBS, then fixed in 70% cold ethanol at 4 °C overnight. The cells were washed with cold PBS again, incubated with 100 μL RNase in a 37 °C water bath for 30 min, and then labeled with 400 μL propidium iodide (PI) for 30 min at room temperature in the dark. At least 50,000 cells were tested for each detection. An ACEA NovoCyte flow cytometer equipped with NovoExpress software was used to analyze the cell cycle.

For ROS measurement, EJ and T24 cells (5.0 × 10^5/1 mL/well) in logarithmic phase were plated into 6-well plates and treated with the different drugs for 48 h. The treated cells were collected, washed with cold PBS, and stained with DCFH-DA for 30 min at room temperature in the dark.
The levels of ROS in treated cells were measured immediately after staining using the ACEA NovoCyte flow cytometer with NovoExpress software; the reactive oxygen species assay kit was purchased from 4A Biotech Co., Ltd.

For apoptosis detection, an Annexin V-FITC apoptosis detection kit purchased from 4A Biotech Co., Ltd. was applied. EJ and T24 cells (5.0 × 10^5/1 mL/well) in logarithmic phase were plated into 6-well plates and treated with the different drugs for 48 h. The treated cells were collected and washed with cold PBS. In accordance with the manufacturer's instructions, the cells were stained with Annexin V-FITC and PI for 30 min at room temperature in the dark, and the apoptosis rate of the treated cells was determined immediately after staining on the ACEA NovoCyte flow cytometer.

For western blotting, EJ and T24 cells in logarithmic phase were treated with the different drugs for 48 h, then harvested and lysed with lysis buffer. The protein extracts of the tested cells were resolved by SDS-PAGE and transferred to polyvinylidene difluoride (PVDF) membranes. After blocking for 1 h in 5% skim milk, the PVDF membranes were incubated overnight at 4 °C with primary antibodies, all obtained from Cell Signaling Technology. After washing three times with Tris-buffered saline containing 0.1% Tween-20 (TBST), the membranes were incubated with horseradish peroxidase (HRP)-conjugated secondary antibodies at room temperature for 1 h. After washing with TBST again, immunoreactivity of the membranes was detected using the Bio-Rad Image Lab with an enhanced chemiluminescence (ECL) system. The densitometry of the protein bands was measured using ImageJ (NIH image software) and normalized to the relevant controls.
The cell lysates were incubated on ice for 30 min and then centrifuged at 12,000.

To understand the potential interactions between the tested drugs and the selected proteins, molecular docking was carried out in this study with Discovery Studio (DS) 2.5. The two-dimensional (2D) structures of matrine and cisplatin were obtained from the PubChem database (http://pubchem.ncbi.nlm.nih.gov/), with PubChem CIDs 91466 and 441203, respectively. The 3D structures of the targeted proteins PI3K (PDB ID: 4J6I), AKT2 (PDB ID: 2JDR), caspase-3 (PDB ID: 2XYH), and Bcl-2 (PDB ID: 4IEH) were acquired from the Protein Data Bank (http://www.rcsb.org/pdb/home/home.do). The virtual docking procedure in DS 2.5 was as follows. First, the water molecules in the tested proteins were removed, and the targeted proteins and the selected ligands were refined with CHARMM. Second, the possible active sites of the tested proteins were located automatically by the algorithm on the basis of the endogenous ligands. Third, the drugs and the selected ligands were docked into the binding pockets of the tested proteins, and the docking models of the drugs and the tested proteins were examined by the module. Before performing the procedure, the root mean square deviation (RMSD) was calculated to verify the selection of the two modules (CDOCKER and LibDock) in DS 2.5.

In accordance with the IC50 analysis, the IC50 values of matrine for the EJ, T24, BIU and 5637 cell lines were 5.09, 4.60, 3.87 and 4.48 mM, respectively, and those of cisplatin were 3.73, 3.60, 5.21 and 3.47 μM, respectively, for 48 h treatment. Compared with each individual drug, the drug combination produced a stronger suppressive effect on cell proliferation. The combination of matrine with cisplatin showed a synergistic inhibitory effect on T24 cells when the Fa value was ≥ 0.43 and on 5637 cells when the Fa value was ≤ 0.97.
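The RMSD validation step mentioned above can be sketched generically (this is not DS 2.5 code; the coordinates are invented and stand for a crystallographic ligand pose and a re-docked pose with matched atom ordering):

```python
import numpy as np

def rmsd(coords_ref, coords_pose):
    """Root mean square deviation between two matched sets of atomic
    coordinates (N x 3 arrays, same atom ordering, in angstroms)."""
    diff = coords_ref - coords_pose
    return np.sqrt((diff ** 2).sum() / len(coords_ref))

# Invented coordinates for a 4-atom ligand fragment.
ref = np.array([[0.0, 0.0, 0.0],
                [1.5, 0.0, 0.0],
                [1.5, 1.5, 0.0],
                [0.0, 1.5, 0.0]])
pose = ref + 0.5  # re-docked pose shifted by 0.5 A along each axis

# A re-docked pose is commonly accepted when RMSD <= 2.0 A.
print(rmsd(ref, pose))
```

The idea of the check is that a docking module which cannot reproduce the known crystallographic pose of the endogenous ligand (low RMSD) should not be trusted to pose a novel ligand.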
The synergism of the drug combination was observed in EJ and BIU cells regardless of the Fa value. The CI values and the concentrations of the separate drugs in combination at 50% Fa are summarized in Table.

To identify whether the combination of matrine and cisplatin achieved maximal biological function, we applied the wound healing assay and the transwell assay to investigate the migration and invasion ability of EJ and T24 cells (Fig.).

After verifying the anti-proliferative effect of matrine and cisplatin, we applied FCM to analyze the cell cycle phases of the treated UBC cells (Fig.).

Analysis of ROS in treated cells revealed that both single-drug treatment and the drug combination increased the generation of ROS in EJ and T24 cells; furthermore, the combination treatment induced ROS generation more potently than single treatment (Fig. 5).

Both single-drug treatment and the drug combination increased the proportion of early and late apoptosis in EJ and T24 cells; furthermore, co-treatment induced apoptosis more potently than single treatment (Fig. 6).

To explore the relevant signaling pathway, we performed western blotting to measure the expression levels of Bax, Bcl-2, cleaved caspase-3, caspase-3, p-Akt, Akt, p-PI3K, PI3K, VEGFR2 and VEGF in EJ and T24 cells treated with matrine and cisplatin alone and in combination. The results showed that the expression of Bax and cleaved caspase-3 was up-regulated in all treatment groups, while the expression of Bcl-2, caspase-3, p-Akt, p-PI3K, VEGFR2, and VEGF was down-regulated; total Akt, PI3K, and GAPDH levels remained unchanged (Fig.).
The effThe 3D crystal structures of tested proteins were presented in Fig.\u00a0As shown in Fig.\u00a0As shown in Fig.\u00a0Although herbal medicines have been used for thousands of years in many countries, few components from herbs have been applied on the oncotherapy as single dosage because of their weakness of anti-proliferation. In this study, we found that both matrine and cisplatin had inhibitory effect on the proliferation of the UBC cell lines in a dose- and time-dependent manner, but the inhibitory effect of matrine was much weaker than that of cisplatin. Our further studies showed that the combination of matrine with cisplatin could inhibit cell proliferation, weaken cell repair motility and invasive ability, induce cell cycle arrest, increase the generation of ROS and induce apoptosis in EJ and T24 cells in a synergistic way. Finally, we also revealed that the potential pro-apoptotic mechanisms of matrine and cisplatin on EJ and T24 cells might restrain the VEGF/PI3K/Akt pathway.Thus, our research indicated that matrine might improve the sensibility to cisplatin for UBC patients while weakening side effects through minimizing the dose of cisplatin.It is reported that the migration and invasion of cells were decreased after the matrine treatment in human pancreatic cancer and castration-resistant prostate cancer , 15. OurMore and more studies have shown that the proportion of G1 cells were significantly increased following the treatment of matrine in A549 cells , HepG2ceOne recent study showed that matrine increased the generation of ROS in a dose- and time-dependent manner, and then increased apoptosis rate in NSCLC cells , other sVascular endothelial growth factor (VEGF) is a major target for the inhibition of tumor vascularisation and tumour growth , 24. TheMolecular docking, a virtual platform optimized in our previous studies, could be applied to detect the interactions between drug and protein for the underlying mechanisms , 36\u201339. 
To provide further evidence and allow an in-depth understanding of the probable mechanisms of the combination of matrine and cisplatin, further studies should be carried out; for example, the synergistic anti-cancer effect of the combination of matrine and cisplatin should also be verified in vivo.

Our findings demonstrate that the combination of matrine with cisplatin can synergistically inhibit UBC cells through down-regulation of the VEGF/PI3K/Akt signaling pathway. In the combination, matrine could improve the sensitivity of the UBC cells to cisplatin and reduce the required dosage of cisplatin, thereby potentially weakening its side effects, indicating that matrine may serve as a new option in combination therapy for the treatment of UBC.

Perception relies on integrating information within and between the senses, but how does the brain decide which pieces of information should be integrated and which kept separate? Here we demonstrate how proscription can be used to solve this problem: certain neurons respond best to unrealistic combinations of features, providing 'what not' information that drives suppression of unlikely perceptual interpretations. First, we present a model that captures both improved perception when signals are consistent (and thus should be integrated) and robust estimation when signals are conflicting. Second, we test for signatures of proscription in the human brain: we show that concentrations of the inhibitory neurotransmitter GABA in a brain region intricately involved in integrating cues (V3B/KO) correlate with robust integration. Finally, we show that perturbing excitation/inhibition impairs integration. These results highlight the role of proscription in robust perception and demonstrate the functional purpose of 'what not' sensors in supporting sensory estimation.

Perception relies on information integration, but it is unclear how the brain decides which information to integrate and which to keep separate.
Here, the authors develop and test a biologically inspired model of cue integration, implicating a key role for GABAergic proscription in robust perception.

Our impression of the surrounding world is built upon fragmentary sensory information that is always incomplete and often ambiguous. To achieve perception, the brain combines a range of signals subject to different constraints. For instance, judging the shape of a nearby object may rely on fusing information from different visual cues and modalities [3]. By integrating signals, observers resolve ambiguities and judgments become more precise [2]. In particular, psychophysical work has shown that participants' precision improves near-optimally when integrating cues, closely matching the expectations of maximum likelihood estimation [4]. However, if cues originate from different objects, or specify different things, it no longer makes sense to integrate them. A new pair of glasses, for example, can suddenly mean that trusted cues (such as binocular disparity and texture) specify conflicting shapes [5]. If the brain nevertheless persisted in averaging the information together, observers could perceive something incompatible with either cue, leading to errors. This process of dealing with conflicting signals and deciding whether or not to integrate them has been described as one of causal inference [6], and presenting such stimuli provides an ideal testbed for probing the mechanisms that underlie perceptual integration. Behavioural evidence [7] suggests that integration degrades gracefully under cue conflict [8].
However, we have little understanding of how this is achieved by the human brain, either in theory or in practice. Understanding of integration has generally focused on the performance benefits that result from combination [9]; here we demonstrate its utility in a new model that combines different depth cues for robust shape perception.

Here we develop and test a biologically inspired model of integration that captures improved performance when information is consistent, yet, unlike previous models, shows robust behaviour in the face of conflict. We propose a role for proscription in optimal sensory encoding and suggest that it makes sense for the brain to employ 'what not' detectors, i.e., neurons selective for stimuli that do not correspond to real objects. These units facilitate robust sensory estimation by driving suppression of unlikely interpretations of the local environment. We have recently provided evidence for this principle in the encoding of binocular disparity [13].

The central premise of our model is that the brain uses 'what not' detectors that respond best to discrepancies between two cues. These responses are useful because they increase suppression of certain perceptual interpretations. To test for neurobiological correlates of this process, we examined the relationship between suppressive processing in the human brain and perceptual integration. We focused on a region of the dorsal visual cortex (area V3B/KO) that is intricately involved in integrating three-dimensional (3D) cues to object shape [16]. Under the proscriptive model, we hypothesised that suppressive processing in this region is associated with robust integration. We therefore tested whether robust integration relates to GABA concentration around V3B/KO. We then used transcranial direct current stimulation (tDCS) to perturb the excitatory/inhibitory balance of the underlying cortical tissue, testing whether disrupting processing in this way would lead to reduced perceptual integration.
We show that GABA concentrations around area V3B/KO correlate strongly with robust perceptual cue integration, and that tDCS applied over V3B/KO leads to impaired integration. We indexed suppression using non-invasive magnetic resonance spectroscopy (MRS) measures of the inhibitory neurotransmitter γ-aminobutyric acid (GABA) [19].

In line with the proscriptive model, our empirical results demonstrate the critical role of suppressive signals in shaping the integration of 3D cues for object perception. Using detectors that respond to unrealistic combinations of features makes sense theoretically and has correlates with suppressive processing in the human brain. Finally, we show that the proscriptive framework provides a natural link to phenomena of rivalry and perceptual bistability. Studies of alternating perceptual states have provided a useful tool for accessing conscious experience, but have long been divorced from models of routine perceptual processing. Our work shows that such phenomena reflect the operation of a generalized mechanism for sensory processing that exploits 'what not' signals to effect perception.

We consider two depth cues, binocular disparity (S_δ) and texture (S_χ) (Fig.); for congruent stimuli, S_δ = S_χ. Similar to previous work [22], we can independently manipulate the cues to explore the effects on perception: moving away from the positive diagonal increases the degree of incongruence between the cues. By systematically manipulating incongruence, Girshick and Banks [7] found that perception is initially biased away from the more reliable of the two cues, but then returns to the more reliable cue as conflict increases. We incorporate units that respond best to incongruent cues (S_δ ≠ S_χ) to bring suppressive computations into the model.
This allows the model to produce robust perceptual estimates that exceed single-cue performance for congruent information, but revert to the most reliable source of information in the face of discrepancy.

To implement a biologically plausible model of robust cue integration, we consider the estimation of surface slant, a key perceptual quantity that underlies multiple behaviours, using binocular disparity and texture depth cues. In a significant departure from previous work [26], the model exploits units whose functional significance has hitherto been opaque.

The front end of the model consists of a bank of filters that encode the slant of the surface from a single cue: one set for disparity and another for texture. A layer of combination units then integrates signals from the two cues (Fig.). Based on empirical evidence [27], we assume that combination units perform a sum of their inputs that increases monotonically, but sublinearly, with stimulus intensity.

Following single-cue input, combination units are read out by a layer of output units, where readout weights are defined by a cosine function. Simulating a range of cue conflicts produces a pattern of robust estimates consistent with empirical observations [7] (Fig.). In particular, for small conflicts, estimates are biased away from the more reliable cue and bias increases above zero. However, increasing incongruence still further produces a robust reversion to the texture cue (χ) in terms of both estimator bias and reliability.

To illustrate the model computations, consider a stimulus that indicates incongruent slants from the two cues (Fig.). We quantify bias as the difference in slant angle between the final estimate and the more reliable of the two cues [29]. The input cue reliability is derived from the height of the peak produced in the absence of additional cues. The intuition behind the model is that when two cues are present, a maximum likelihood process indicates that the best evidence for an estimate lies between the two cues [32].
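The maximum-likelihood intuition can be made concrete with a small numerical sketch: assuming Gaussian single-cue likelihoods, the combined estimate is a reliability-weighted average that always lies between the cues, which is exactly why unmitigated averaging is non-robust under conflict. The slant values and sigmas below are invented for illustration:

```python
import numpy as np

# Single-cue slant estimates (degrees) and their uncertainties (invented).
s_disparity, sigma_disparity = 30.0, 4.0   # less reliable cue
s_texture, sigma_texture = 20.0, 2.0       # more reliable cue

# Reliability is the inverse variance of each cue.
r_d = 1.0 / sigma_disparity ** 2
r_t = 1.0 / sigma_texture ** 2

# Maximum-likelihood combination: reliability-weighted average.
s_combined = (r_d * s_disparity + r_t * s_texture) / (r_d + r_t)

# The combined uncertainty is smaller than either single-cue sigma.
sigma_combined = np.sqrt(1.0 / (r_d + r_t))

print(s_combined, sigma_combined)
```

Note that even for this 10-degree conflict the ML estimate sits between the cues (here at 22 degrees), illustrating the failure mode the proscriptive readout is designed to correct.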
However, the location where the evidence is strongest is not always a realistic interpretation, i.e., when cues are conflicting. Thus, as the conflict between cues increases, activation in the cosinusoidal weight matrix turns from positive to negative, penalizing the midpoint between the cue estimates. By penalizing the midpoint in cases of conflict, the evidence comes to maximally support the estimate of the more reliable cue.

The critical feature of the model is the form of the readout weights, which we implement as a basic cosine function. The only additional parameter models tonic inhibition within the model, which we assumed to be 5% below equilibrium.

To test the predictions of our model, we assessed how sensitive participants were in discriminating the slant of a viewed object (Fig.). Congruent-cue sensitivity was significantly higher than single-cue sensitivity (disparity: t17 = 3.85, P = 0.001, Cohen's d = 0.91; texture: t17 = 2.54, P = 0.02, Cohen's d = 0.60; Fig.). Incongruent-cue sensitivity was significantly lower than congruent-cue sensitivity (t14 = 2.16, P = 0.048, Cohen's d = 0.51), but significantly better than that for single-cue slant defined by disparity. This shows that the presence of the less reliable cue impaired the participants' perceptual estimates, but that the falloff in performance was not completely catastrophic, in that it remained above that of the less reliable cue. A possible concern might be that the texture cue was perceived at a smaller slant than specified [33], resulting in the congruent-cue stimuli being perceived as incongruent, and vice versa. However, the sensitivity in these conditions, relative to that for single cues, confirms that this was not the case. That is, congruent-cue sensitivity is higher than that for either single cue and incongruent-cue sensitivity is not, as would be expected if the cues that comprise the stimuli in these conditions were congruent and incongruent, respectively.
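The midpoint-penalizing behaviour described above can be caricatured in a few lines. This is a schematic sketch of the cosine-gated idea only, not the published implementation: the Gaussian population responses, the reliability scaling, the geometric-mean cross term, and the cosine period are all assumptions chosen for illustration:

```python
import numpy as np

slants = np.linspace(-40.0, 40.0, 161)  # candidate slants (deg)

def estimate(s_disp, s_tex, sigma_d=6.0, sigma_t=4.0, period=120.0):
    """Return the candidate slant with maximal evidence.

    Single-cue responses are Gaussians whose height scales with
    reliability (1/sigma). Cross-cue evidence (the geometric mean of
    the two responses, largest at the midpoint) is gated by a cosine
    of the cue discrepancy: near-congruent cues add evidence between
    the cue values, while large conflicts make the gate negative,
    suppressing the midpoint in a 'what not' fashion.
    """
    r_d = np.exp(-(slants - s_disp) ** 2 / (2 * sigma_d ** 2)) / sigma_d
    r_t = np.exp(-(slants - s_tex) ** 2 / (2 * sigma_t ** 2)) / sigma_t
    gate = np.cos(2.0 * np.pi * (s_disp - s_tex) / period)
    evidence = r_d + r_t + gate * np.sqrt(r_d * r_t)
    return slants[np.argmax(evidence)]

# Small conflict: the estimate falls between the two cue values.
print(estimate(5.0, 0.0))
# Large conflict: the estimate reverts to the more reliable texture cue.
print(estimate(40.0, 0.0))
```

With a small discrepancy the gate is near +1 and the cross term pulls the peak between the cues (integration); with a large discrepancy the gate goes negative and the surviving peak is the taller, more reliable single-cue response (robust reversion).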
Further, observers' estimates were significantly biased towards the slant defined by the texture cue in the incongruent condition, consistent with robust estimation for incongruent cues. Using the sensitivities measured for single cues, we generated predictions for sensitivity in the incongruent-cue condition based on (i) the maximum likelihood model, (ii) the normalization model of Ohshiro et al. [25] and (iii) our proscriptive integration model. Comparing Bayes factor scores, we found that the proscriptive integration model best accounted for incongruent-cue sensitivity: 14.2 times better than the maximum likelihood model and 4.3e5 times better than the normalization model. Moreover, for incongruent stimuli, there was a strong positive correlation between the model's suppressive readout weights and sensitivity (Fig., left).

Experimentally, we reasoned that differences between human observers in their sensitivity to incongruent cues might relate to differences in suppressive tone within the cortex. In particular, we employed MRS, which has previously been used to link neurochemistry to visual processing [16]. We tested for correlations between resting concentrations of the main inhibitory neurotransmitter (GABA) within the participants' brains and robust perceptual judgements measured using psychophysics [13]. In addition, we measured control voxels in the visual (V1) and motor (M1) cortices. We anticipated that greater potential for suppression would be associated with robust perceptual judgments for regions of the cortex associated with cue integration.
Consistent with this prediction, we found a significant positive correlation between GABA concentration in the voxel centred on V3B/KO and perceptual sensitivity to incongruent stimuli; previous work demonstrated that this area is intricately involved in cue integration. In line with the model predictions (Fig., left), no such relationship was observed for the other conditions (n = 18; disparity: Pearson's r = 0.03/−0.3, P = 0.92/0.26; texture: Pearson's r = 0.05/−0.49, P = 0.85/0.08; congruent: Pearson's r = 0.10, P = 0.68; Fig.), nor for the control voxels (n = 14, V1: Pearson's r = 0.40, P = 0.15; n = 18, M1: Pearson's r = −0.21, P = 0.39; Supplementary Figure).

In addition to sensitivity, we also used the measures of bias in the incongruent-cue simulations to calculate the weights assigned by the model to each cue (n = 100, Pearson's r = 0.78, P = 1.7e−21; Supplementary Figure). As with sensitivity, suppression in the model was highly correlated with the weight given to the more reliable cue (n = 12, Pearson's r = 0.60, P = 0.04; Supplementary Figure) [34]. Thus, although the cues were perceived as incongruent, the magnitude of this difference was reduced as a result of a small frontoparallel bias acting on the texture cue.

As GABA concentration is expressed with reference to H2O, we checked the grey matter (GM) and white matter (WM) content of the voxels. However, there was no relationship between incongruent-cue sensitivity and GM:WM voxel content.

We applied anodal and cathodal stimulation montages to V3B/KO before measuring sensitivity to single-, congruent-, and incongruent-cue stimuli. To control for placebo effects, we contrasted the results with sensitivity following sham stimulation.
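The correlation analysis reported above can be sketched as follows; the per-participant GABA and sensitivity values are randomly generated stand-ins (only the sample size n = 18 is taken from the text), and the permutation test is one common way to attach a p-value to Pearson's r:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented per-participant data: GABA concentration (arbitrary units)
# and incongruent-cue sensitivity, with a built-in positive relation.
n = 18
gaba = rng.normal(1.5, 0.2, n)
sensitivity = 2.0 * gaba + rng.normal(0.0, 0.2, n)

# Pearson correlation coefficient.
r = np.corrcoef(gaba, sensitivity)[0, 1]

# Permutation test: shuffle one variable to build a null distribution
# of correlations, then count how often |null r| >= |observed r|.
null = np.array([np.corrcoef(rng.permutation(gaba), sensitivity)[0, 1]
                 for _ in range(10000)])
p = np.mean(np.abs(null) >= abs(r))

print(r, p)
```

The permutation approach makes no normality assumption about the two variables, which is a common concern with small MRS samples.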
We reasoned that if tDCS targets the process of robust integration (consistent with a V3B/KO locus), single-cue performance should be relatively unaffected, as this information is extracted earlier in the cortical hierarchy than the stage of combination. It should be noted that information from other cues is also present in the single-cue conditions [21]. However, as this information is (i) less reliable than the dominant cue and (ii) common to the pairs of stimuli being judged, it is likely to have little influence on perception. In contrast, combining two reliable cues (in the combined-cue conditions) results in large perceptual changes, so disrupting integration in these conditions should produce a larger effect.

To move beyond correlative evidence, we next sought to perturb the excitatory/inhibitory balance of the cortex and then measure the consequences for perceptual judgments. To this end, we applied tDCS to perturb cortical excitability centred over area V3B/KO in 12 human participants. This technique has previously been shown to alter the overall responsivity of the visual cortex [36] and to produce systematic effects on visual judgments [37].

In line with our reasoning, we found that sensitivity in the disparity and texture single-cue conditions was unaffected by the application of tDCS (disparity anodal: t11 = 1.32, P = 0.21; disparity cathodal: t11 = 0.58, P = 0.58; texture anodal: t11 = 0.63, P = 0.54; texture cathodal: t11 = 1.08, P = 0.30; Fig.). In contrast, significant effects were found for the comparisons involving combined cues (disparity: t11 = 7.57, P = 1.1e−5, Cohen's d = 2.18; texture: t11 = 5.67, P = 1.4e−4, Cohen's d = 1.63). The lower performance under anodal stimulation was not statistically significant, both during stimulation (online effects) and following its offset (offline effects) (offline: F2,22 = 3.22, P = 0.06; online: F2,22 = 2.72, P = 0.09); however, the differences were small and in opposite directions for on- vs. offline stimulation.
Moreover, the largest difference between on- and offline stimulation is between the sham conditions that provide the control baseline (disparity, offline: t23 = 0.70, P = 0.49; texture: t23 = −1.65, P = 0.11; congruent: t23 = 0.33, P = 0.74; incongruent: t23 = 0.75, P = 0.46; Fig. )41. We fit these parameters using the tDCS effects measured in the single- and congruent-cue conditions. A possible concern is that stimulation produced a general change in performance; our results made this unlikely, as we found that tDCS had a specific effect on integration conditions and not on single cues. However, we had also included trials in the experimental design to act as 'lapse' tests under the different stimulation conditions. Specifically, we presented some easy trials for which performance should be close to 100% correct; we found no evidence for a change in general performance resulting from stimulation. In this work, we used modelling to show that a single mechanism can support both (a) robust integration and (b) perceptual rivalry. Further, we provide evidence for neural correlates of proscription in the human brain. In particular, we find that (i) GABA measured in a brain region intricately involved in cue fusion (V3B/KO) is strongly correlated with robust perceptual integration and (ii) perturbing the excitatory/inhibitory balance with tDCS impairs perceptual integration. To understand the structure of the surrounding world, the brain integrates information from a range of sensory cues. Integration can improve perceptual estimates; however, it needs to be sensitive to the context: in some cases it is better to down-weight some signals. This process of deciding whether or not to integrate cues has been described as one of causal inference. Our proscriptive model demonstrates why it makes sense for the brain to employ what not detectors that respond best to stimuli that do not correspond to the features of real objects. By so doing, these detectors drive suppression of unlikely interpretations of the local environment.
Although this may appear counterintuitive, there is evidence that what not neurons exist in the primate brain, though their functional purpose was previously unclear45. Previous electrophysiological recordings have shown that although certain neurons are tuned to the same information specified by two cues (congruent neurons), many others respond best when there is a large conflict between the information provided by two cues (incongruent neurons). Why should the brain develop such neurons? One possibility is that they are used as a veto46. Here we demonstrate that incongruent signals provide a key means of supporting robust integration: a single model explains cases when cues are combined to boost performance, when discrepant signals are down-weighted, and cases of complete scission. Our formulation also provides an architecture for processes of recalibration that are likely to constitute an important facet of perceptual integration. In particular, a change in the observer's state, such as wearing a new pair of glasses or sustaining an injury, can necessitate that the information provided by two cues is recalibrated. A neural architecture that is specialized only for congruent signals requires a recalibration of the individual sensory estimates. However, within our model, recalibration could be achieved by simply changing the phase of the readout weight matrix (see47 for a similar example). Here we considered a single dimension of slant, whereas real-world surfaces are typically parameterized as the combination of slant and tilt (orientation of the surface in the image plane). Extending our model to accommodate both slant and tilt should be feasible within the suggested architecture, simply necessitating an increase in the number of units to accommodate joint encoding of slant and tilt. A central premise of the model is that incongruent neurons are used proscriptively to drive suppression of unlikely perceptual interpretations.
In support of this, we identify neural correlates of suppression that predict robust perceptual behaviour. Specifically, we find that baseline inhibitory neurotransmitter GABA concentration is correlated with robust perceptual estimates. Moreover, we find that the GABA associations were regionally specific to cortical areas associated with depth cue integration (V3B/KO); we find no correlation between robust perception and GABA measured at control regions (V1 and M1). The application of MRS in humans has started to provide new insight into perceptual and cognitive processes16,49; however, a known limitation of the technique is its spatial resolution. Although we centred data acquisition on particular brain regions, the size of the voxels necessary for the technique (3 × 3 × 2 cm) inevitably means that we sampled from neighbouring regions of the cortex. With this in mind, we selected the locations of our control sites to demonstrate a level of regional specificity. Moreover, extensive fMRI work has identified V3B/KO as a locus for depth cue integration13, supporting the interpretation that GABA measured in this area was the primary contributor to the relationship with robust integration. Another limitation of MRS is that it measures the total concentration of neurochemicals within a localized region and cannot distinguish between intracellular and extracellular pools of GABA. This is relevant because these pools are thought to have different roles in neuronal function. However, the tonic cortical inhibition incorporated within the proscriptive integration model, which is maintained by extracellular GABA50, also facilitates robust estimates. Thus, the correlation between MRS-measured GABA and robust perception may also be driven by extracellular GABA.
Here we show that the suppressive gain of the network is correlated with robust perception, suggesting that the relationship between MRS-measured GABA and robust perceptual behaviour is, at least partially, related to intracellular vesicular GABA, which drives neurotransmission23. Our results show that GABA is linked to the robust selection that occurs when two cues are in conflict and one is perceived as more reliable. We also show that the proscriptive model can reproduce behaviour when conflicting-cue stimuli are presented that result in perceptual rivalry. We therefore envisage robust selection and perceptual rivalry as falling on a continuum of degrees of cue conflict, which is moderated by the relative reliability between cues. Within this framework, it makes sense that GABA concentrations have been linked to the perception of bistable stimuli. Specifically, the rate of swapping between bistable stimuli is correlated with GABA concentration in human visual cortex51. Moreover, previous theoretical and empirical work indicates that incongruent neurons may have a key role in perceptual rivalry52. Here we propose proscription as a common mechanism (operating across a range of cue conflicts) to support robust integration by driving suppression of unlikely interpretations of the local environment. Having discovered suppressive correlates of robust perceptual integration, we perturbed the cortical excitability around V3B/KO using tDCS. We found that following cathodal stimulation, estimates produced by both congruent and incongruent cues were impaired, whereas anodal stimulation produced a smaller, nonsignificant effect. We also demonstrate that these effects replicate in an independent sample of participants. Importantly, we showed that these effects were specific to the process of integrating cues rather than the processing of single cues per se.
We could capture this behaviour within the proscriptive integration model by fitting two free parameters that attenuated the strength of positive/negative lateral connections within the combination layer to the results for congruent and single cues, and then using these (now fixed) parameters to predict the effect of stimulation on incongruent-cue estimates. Recent meta-analyses have questioned the reliability of certain tDCS findings. However, tDCS has been shown to reliably change GABA concentrations38,53, modulate visual evoked potentials36 and affect visual perception37. The replication of the basic effect in an independent sample of participants is thus important in providing reassurance about the reliability of the findings we report. A principal limitation of tDCS is that its effects are spatially imprecise54. To address this limitation, we combined MRI functional localization of V3B/KO, neuronavigation and electric-field simulations to produce a tDCS montage that would most effectively target V3B/KO. Further, we repeated the experiment with a montage targeting V1 and found no effect. With this in mind, here we used tDCS to perturb the balance of excitation and inhibition around V3B/KO, and designed our experiment with a range of controls that allowed us to make precise interpretations of the results.
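The two-parameter fitting procedure described above can be illustrated with a toy sketch: two gain parameters scale the positive and negative lateral connections of a miniature combination stage, are fit by grid search to "observed" single- and congruent-cue sensitivities, and the fitted values are then used to predict the held-out incongruent-cue effect. The response functions and all numbers below are invented for illustration; they are not the authors' model.

```python
import numpy as np

# Toy stand-in for the combination layer: sensitivities as simple linear
# functions of the two connection gains (g_pos, g_neg). Invented numbers.
def toy_sensitivity(g_pos, g_neg):
    single = 0.9 + 0.1 * g_neg                    # weakly shaped by suppression
    congruent = 0.8 + 0.3 * g_pos + 0.1 * g_neg   # fusion relies on excitation
    incongruent = 0.7 + 0.4 * g_neg               # robust selection relies on suppression
    return single, congruent, incongruent

# 'Observed' post-stimulation sensitivities (invented for the sketch)
observed_single, observed_congruent = 0.98, 1.03

# Grid-search the two free parameters against single- and congruent-cue data
best, best_err = (0.0, 0.0), np.inf
for g_pos in np.linspace(0, 1, 101):
    for g_neg in np.linspace(0, 1, 101):
        s, c, _ = toy_sensitivity(g_pos, g_neg)
        err = (s - observed_single) ** 2 + (c - observed_congruent) ** 2
        if err < best_err:
            best, best_err = (g_pos, g_neg), err

g_pos_fit, g_neg_fit = best
# With the parameters now fixed, predict the held-out incongruent condition
prediction = toy_sensitivity(g_pos_fit, g_neg_fit)[2]
print(g_pos_fit, g_neg_fit, round(prediction, 3))
```

The point of the sketch is the logic, not the numbers: the incongruent prediction is generated out-of-sample, exactly as described for the real model.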
Interestingly, we found no evidence of polarity-specific directional effects, i.e., modulation in one direction for anodal and another for cathodal35. The interaction between the current flow induced by tDCS and the unique morphology of the brain means that the effects of stimulation, even directly under the electrode, are too complex to characterize as either purely increasing or decreasing excitation. Yet our results are consistent with evidence that cathodal stimulation is more effective than anodal in modulating the excitability of the visual cortex39 and may suggest that the morphology of the visual cortex is more amenable to the current produced by cathodal stimulation. Together, our modelling and empirical results point to a central role for proscription in driving robust perceptual integration. Using neurons that respond to unrealistic combinations of features to drive robust perception makes sense theoretically and has correlates with suppressive processing in the human brain. This suggests a generalized mechanism for sensory processing that exploits what not information to facilitate perception and provides a natural foundation to explain phenomena associated with rivalry and perceptual bistability. Observers were recruited from the University of Cambridge, had normal or corrected-to-normal vision, and were screened for stereo deficits. A priori sample sizes were established using effect sizes from previous MRS14 and tDCS35 studies to achieve 90% power. Twenty observers participated in the MRS experiment; however, two were not included in the analysis: one withdrew mid-scan and a hardware fault stopped acquisition mid-scan for the other. Eighteen subjects completed MRS for V3B/KO and M1, of whom 15 also returned for the (control) V1 scan. Twelve observers participated in each of the 5 tDCS experiments, for a total of 34 different observers. Experiments were approved by the University of Cambridge ethics committee; all observers provided written informed consent.
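The 90% power target mentioned above can be illustrated with a standard normal-approximation sample-size formula for a two-sided paired comparison. The effect sizes taken from the earlier MRS and tDCS studies are not reproduced in the text, so the values below are assumptions for illustration only.

```python
from math import ceil
from statistics import NormalDist

def n_for_paired_test(d, alpha=0.05, power=0.90):
    """Normal-approximation sample size to detect a standardized effect size d
    (Cohen's d) in a two-sided paired design at the given alpha and power."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_power = z(power)           # ~1.28 for 90% power
    return ceil(((z_alpha + z_power) / d) ** 2)

print(n_for_paired_test(1.0), n_for_paired_test(0.5))
```

With a large assumed effect size (d = 1.0), this lands close to the 12 observers per tDCS experiment reported above, though the authors' exact calculation is not given here.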
Binocular presentation was achieved using a pair of Samsung 2233RZ LCD monitors viewed through mirrors in a Wheatstone stereoscope configuration. The viewing distance was 50 cm and participants' head position was stabilized using an eye mask, head rest and chin rest. Eye movement was recorded binocularly at 1 kHz using an EyeLink 1000. Stimuli were generated in MATLAB using Psychophysics Toolbox extensions56,57. Stimuli were virtual planes slanted about the horizontal axis (Fig. ); two cues (binocular disparity and texture) specified the slant21. Each texture patch had on average 64 texture elements (textels); however, the actual number of textels varied between trials depending on their size. Each textel was randomly assigned a grey level and shrunk about its centroid by 20%, creating the appearance of 'cracks' between textels. The width of these cracks also varied as a function of surface slant, thus providing additional texture information. Texture surfaces were mapped onto a vertical virtual surface and rotated about the horizontal axis by the specific texture-defined angle, before a perspective projection consistent with the physical viewing geometry was applied. To isolate the disparity cue, a random-dot stimulus was generated using the same parameters as in the texture stimuli, i.e., an average of 64 dots with randomized grey-level assignment. In the single-cue disparity and two-cue conditions, binocular disparity was calculated from the cyclopean view and applied to each vertex/dot based on the specific disparity-defined slant angle. Surfaces were presented unilaterally (80% left and 20% right of fixation) inside a half-circle aperture (radius 6°) with a cosine edge profile to blur the appearance of depth edges. Stimuli were presented on a mid-grey background, surrounded by a grid of black and white squares (75% density) designed to provide an unambiguous background reference.
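The disparity computation from the cyclopean view can be sketched from the stated viewing geometry (50 cm viewing distance). The 6.3 cm interocular distance below is an assumed typical value, not taken from the source.

```python
from math import atan2, degrees

IOD_CM = 6.3          # assumed interocular distance (not from the source)
VIEW_DIST_CM = 50.0   # viewing distance stated in the text

def disparity_deg(depth_offset_cm):
    """Angular disparity (degrees) of a point displaced from the fixation
    plane by depth_offset_cm, from exact per-eye vergence angles."""
    def vergence(dist_cm):
        # angle subtended between the two eyes for a point straight ahead
        return 2 * atan2(IOD_CM / 2, dist_cm)
    near = vergence(VIEW_DIST_CM - depth_offset_cm)
    far = vergence(VIEW_DIST_CM)
    return degrees(near - far)

print(round(disparity_deg(1.0), 4))  # roughly 0.15 deg per cm of depth at 50 cm
```

Applying this per vertex/dot, using each point's depth relative to the fixation plane, reproduces the kind of slant-dependent disparity gradient described above.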
In the stereoscopic conditions, observers could theoretically discriminate surface slant based only on the difference in depth at the top/bottom of a pair of stimuli. Similarly, in the texture-only condition, observers could make judgements based on the difference in textel density at the top/bottom of a pair of stimuli. To minimize the availability of these cues, disparity-defined position was randomized by shifting the surface relative to the fixation plane (0° disparity) to between ±10% of the total surface depth. Texture-defined position in depth, which corresponded to average textel size, was randomized for each stimulus presentation by increasing point spacing in the initial grid of points by ±10%. We presented four cue conditions: 2× single-cue (texture and disparity) and 2× two-cue conditions (congruent and incongruent). Stimuli in the single-cue texture condition were presented monocularly (right eye), whereas all other stimuli were presented binocularly. psignifit58 (http://psignifit.sourceforge.net/) was used to fit psychometric functions to the data. Sensitivity to slant was derived from the slope of the psychometric function and the point of subjective equality (PSE) from the threshold. Observers performed a two-interval forced-choice discrimination task in which the reference and test stimuli were presented in randomized order (Fig. ). The reference stimulus was defined by disparity- and texture-specified slants of 40° (Sδ = Sχ = 40°). It is noteworthy that we chose this slant angle as observers' sensitivity to disparity and texture cues was similar; ensuring similar cue reliabilities gave us the greatest potential to detect the improved performance associated with combination.
Specifically, the maximum possible benefit for combining independent cues is a factor of √2 for the case when the two cues have equal reliability; this benefit is smaller when the two cues differ in reliability. In the congruent-cue condition, reference stimuli consisted of consistent texture and disparity slant. In addition to the combined conditions, single-cue conditions were included for each of the slant angles used in the combined stimuli. We also included a test stimulus with 0° texture and disparity slant. This was intended to be easily discriminable from the reference stimuli and thus provide a generalized measure of psychophysical performance by capturing the lapse rate of the observers. In addition, we presented six trials with reference stimuli selected at random at the start of each block to refresh observers' familiarity with the task. Observers were regularly prompted to maintain fixation throughout the experiment. As we were testing the robustness of observers' perception, we designed the stimulus in the incongruent-cue condition such that one cue was more reliable than the other. Specifically, for the incongruent condition we combined a smaller disparity slant (Sδ = 20°) with a larger texture slant (Sχ = 50°), yielding a stimulus whose component cue elements differed in reliability (approximately 2:1 ratio). We chose a 2:1 reliability ratio for the incongruent case, as this (i) could be achieved while holding all other stimulus parameters constant between congruent and incongruent conditions (except slant angle), and (ii) was predicted by the model to produce robust behaviour.
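The combination rules referenced above (√2 benefit for matched reliabilities, reliability weighting for the 2:1 incongruent case) follow from standard maximum-likelihood cue integration, which can be sketched as follows. This is a generic illustration, not the authors' proscriptive model.

```python
import numpy as np

def combine(sigma_a, sigma_b):
    """Reliability-weighted combination of two independent Gaussian cues.
    Returns (weight_a, weight_b, sigma_combined)."""
    r_a, r_b = 1 / sigma_a**2, 1 / sigma_b**2       # reliabilities
    w_a, w_b = r_a / (r_a + r_b), r_b / (r_a + r_b)  # cue weights
    sigma_c = np.sqrt(1 / (r_a + r_b))               # combined estimate's SD
    return w_a, w_b, sigma_c

# Equal reliabilities: combined sensitivity (1/sigma) improves by sqrt(2)
_, _, sigma_equal = combine(1.0, 1.0)
print(1.0 / sigma_equal)

# A 2:1 reliability ratio (as engineered for the incongruent stimulus)
w_a, w_b, _ = combine(1.0, np.sqrt(2))  # cue b is half as reliable
print(round(w_a, 3), round(w_b, 3))
```

For the 2:1 case the sensitivity benefit shrinks to √1.5 ≈ 1.22, which is why reliabilities were matched in the congruent condition to maximize the detectable combination benefit.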
To achieve this, we took advantage of the fact that sensitivity to texture-defined slant increases with slant angle. In the congruent- and single-cue conditions, the test stimuli were defined by congruent and single cues, within a range of ±20° of the reference stimulus (40°) over eight evenly spaced steps. For the incongruent-cue condition, the test stimuli were defined by congruent cues, within a range of ±25° of the midpoint between the slants defined by the incongruent cues of the reference stimulus (35°) over eight evenly spaced steps. For participants who showed high precision in the incongruent condition during the initial familiarization stage, this range was adjusted to ±14° to more closely assess their sensitivity. As an incongruent test stimulus was compared against consistent-cue reference stimuli, the PSE in the incongruent condition provides an assessment of the perceived shape of the incongruent stimulus in terms of congruent stimuli. Before the brain imaging/stimulation experiments, participants performed a familiarization session in the laboratory. This was used to introduce participants to viewing the stimuli in the stereoscope and ensure they could perform the slant discrimination task. For the MRS experiment, participants took part in two further sessions. One session was used to acquire MRS measurements inside the MRI scanner while the participants were at rest. The other session measured psychophysical performance on the slant discrimination task under the different experimental conditions. The two sessions were separated by 24–48 h and the order of sessions was counterbalanced across participants. For each condition, observers underwent two blocks of 214 trials. Condition order was randomized. For the tDCS experiments, participants took part in three experimental sessions.
Each session was separated by at least 36 h and the order of sessions was counterbalanced across participants. During the initial familiarization session, reference stimuli for the ipsilateral control trials were drawn at random from the pool of reference slants used in the main experiment. During stimulation sessions, the control reference slant was set to that which individual observers could discriminate at 80% performance. Calibration of the eye tracker was performed immediately before the onset of each block in tDCS sessions. Condition order was counterbalanced across stimulation sessions and subjects. Magnetic resonance scanning was conducted on a 3T Siemens Prisma equipped with a 32-channel head coil. Anatomical T1-weighted images were acquired for spectroscopic voxel placement with an 'MP-RAGE' sequence. For detection of GABA, spectra were acquired using a macromolecule-suppressed MEGA-PRESS sequence: echo time = 68 ms, repetition time = 3000 ms; 256 transients of 2048 data points were acquired in 13 min experiment time; a 14.28 ms Gaussian editing pulse was applied at 1.9 (ON) and 7.5 (OFF) p.p.m.; water unsuppressed, 16 transients. Water suppression was achieved using variable power with optimized relaxation delays and outer volume suppression. Automated shimming followed by manual shimming was conducted to achieve approximately 12 Hz water linewidth60. Spectra were acquired from three locations: a target (V3B/KO) and two control (V1 and M1) voxels (30 × 30 × 20 mm). Spectra were analysed with a MATLAB toolbox designed for the analysis of GABA MEGA-PRESS spectra, modified to fit a double-Gaussian to the GABA peak. Individual spectra were frequency and phase corrected before subtracting 'ON' and 'OFF' transients, producing the edited spectrum in institutional units.
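The ON/OFF editing scheme described above can be illustrated with synthetic data: averaging transients acquired with the editing pulse on and off, then subtracting, cancels the large overlapping (unedited) resonance and leaves the edited peak. Peak positions, widths and amplitudes below are invented and not spectroscopically accurate.

```python
import numpy as np

rng = np.random.default_rng(0)
n_points = 2048
freq = np.linspace(0, 5, n_points)  # mock chemical-shift axis (p.p.m.)

# Invented line shapes: a large peak present in both ON and OFF transients,
# and a small co-resonant peak refocused only when the editing pulse is ON.
overlap = np.exp(-((freq - 3.0) ** 2) / 0.001)
edited_peak = 0.05 * np.exp(-((freq - 3.0) ** 2) / 0.002)

def transient(edited):
    signal = overlap + (edited_peak if edited else 0)
    return signal + rng.normal(0, 0.001, n_points)  # acquisition noise

on = np.mean([transient(True) for _ in range(128)], axis=0)
off = np.mean([transient(False) for _ in range(128)], axis=0)
edited_spectrum = on - off  # the large overlapping peak cancels

print(round(edited_spectrum.max(), 3))
```

The surviving difference peak (~0.05 here) is what is subsequently fit, which is why the peak-fitting step below dominates the uncertainty budget.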
The fitting residuals for water and GABA were divided by the amplitudes of their fitted peaks to produce normalized measures of uncertainty, which were combined in quadrature to produce a single measure of uncertainty for each measurement63. This combined fitting residual was relatively low across all participants for all voxel locations, from 3.8% to 9.4% (mean: 6.6% ± 0.2%). Spectral quantification was conducted with GANNET 2.0. To ensure that variation in GABA concentrations between subjects was not due to differences in overall structural composition within the spectroscopy voxels, we performed a segmentation of voxel content into GM, WM and cerebrospinal fluid (CSF) using SPM (http://www.fil.ion.ucl.ac.uk/spm/). The DICOM of the voxel location was used as a mask to calculate the volume of each tissue type for both visual and sensorimotor voxels; this was then used to apply a CSF correction64 to the GABA/H2O measurements. Direct current stimulation was applied using a pair of conductive rubber electrodes held in saline-soaked synthetic sponges and delivered by a battery-driven constant current stimulator. For seven participants, functional anatomical scans were used to identify area V3B/KO in the right hemisphere and then neuronavigational equipment was used to locate the closest point to the centre of mass of this region on the subjects' scalp. Head models were reconstructed from anatomical scans and SimNIBS (http://simnibs.de) was used to simulate the electric field density resulting from stimulation. In the model, each primary input is specified by its cue type and its slant angle in radians (θ). The slant receptive field for each primary unit was modelled as a one-dimensional von Mises distribution (of the form exp[kcue cos(θ − θcue_pref)]), where θcue_pref indicates the unit's cue slant preference; θcue_pref takes n = 37 evenly distributed values. The concentration parameter, kcue, was chosen to be 2, producing a slant tuning bandwidth of approximately 10 degrees.
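The primary-unit tuning just described can be sketched as a population of 37 von Mises tuning curves. The preference range and the doubled-angle wrapping below are assumptions chosen so the curves tile the slant axis; they are not details given in the text.

```python
import numpy as np

n_units = 37
# Assumed preference range spanning the slant axis (not stated in the source)
prefs = np.linspace(-np.pi / 2, np.pi / 2, n_units)
kappa = 2.0  # concentration parameter from the text

def primary_responses(theta, intensity=1.0):
    """Population response of one cue's primary units to slant theta (radians).
    Responses scale linearly with cue intensity, as stated in the text."""
    # von Mises on the doubled angle so tuning wraps over the slant range
    return intensity * np.exp(kappa * np.cos(2 * (theta - prefs)))

resp = primary_responses(np.deg2rad(40.0))
print(resp.argmax(), resp.size)
```

The peak lands on the unit preferring 40° slant, and scaling `intensity` simply multiplies the whole population profile, matching the linear intensity scaling assumed for the primary units.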
The response of each primary unit was assumed to scale linearly with cue intensity, Acue. Each primary input to the model is specified by its intensity (δ or χ). Combination units were generated by drawing input from all possible pairs of unimodal units, as denoted by a subscript, such that there are 37 × 37 = 1369 combination units. Based on previous empirical evidence27, we assume that combination units perform a summation of their inputs that increases monotonically, but sublinearly, with stimulus intensity. E denotes the activity of the combination unit with disparity slant preference θδ and texture slant preference θχ. The nonlinearity models sublinear response functions of the combination units, which could be mediated by means of synaptic depression or normalization28. Readout weights were used to assess estimate reliability and (slant) position. Aδ = 1 and Aχ = 8 were used to achieve a 1:3 ratio of sensitivity to match previous work7. For the simulations in Fig. , Aδ = Aχ = 1 (single and congruent) and Aδ = 1, Aχ = 4 (incongruent) were used to match the sensitivity ratios engineered for the behavioural stimuli. To simulate variable suppression, an additional parameter (β) was used to attenuate the negative readout weights. For each simulation, β was set to a value drawn at random from a Gaussian distribution. To simulate individual variability in sensitivity to cues, cue intensity was drawn from a Gaussian distribution. To compare between simulations, we calculated the reliability of the single/combined cue signals relative to one another. For the simulation of Supplementary Figure , σ = . For the simulation of Fig. , (t11 = 4.3, P = 0.001; online, t11 = 3.7, P = 0.003).
This is likely to be because disparity and texture were not fully isolated in the single-cue conditions, and these 'latent' cues acted to reduce sensitivity to the 'single' cue. To simulate the effects of tDCS with the model, we attenuated the strength of the positive/negative lateral connections within the combination layer. To simulate perceptual rivalry, the output units interact through lateral connections with weights defined by a half-wave rectified cosine function. The dynamics of the output units are further defined by slow adaptation (α) and stochastic variability (σ): S[Xi] denotes a sigmoidal transformation (using a Naka-Rushton function) of the activity of Xi, W corresponds to Gaussian noise and Aθ represents adaptation. Aδ = 1 and Aχ = 1 were used to produce the constant activity in the combination layer, F(θ). Timescales of τ = 1 and τA = 125 were used to define the temporal dynamics of inhibition and adaptation, γ = α = 7, and the SD of noise was assumed to be σ = 0.005. Maximum likelihood predictions in Figs.  were computed under standard assumptions: σ denotes the SD of the estimate, S denotes slant angle, r indicates reliability and w denotes weight. Bias is produced by taking the difference between the combined slant estimate (Sδχ) and the more reliable single-cue slant estimate. SD (σ) is converted to sensitivity as sensitivity = 1/σ. To simulate the normalization model predictions in Figs. , we used published neuronal responses25 and the same method of behavioural decoding used in our model to convert firing rate to bias and reliability; data were digitized from ref. 23 using WebPlotDigitalizer (http://arohatgi.info/WebPlotDigitizer). The reliability data in Figs.  deviated from the quadratic prediction (t83 = 5.6, P < 0.001). This is thought to reflect the influence of latent cues present in single-cue stimuli that lead to underestimation of single-cue sensitivity12.
Thus, to account for this phenomenon, we transformed the reliability data by setting the quadratic prediction as the maximum value of the averaged data points. The psychophysical data in Figs.  and the bistability data in Fig.  were analysed with RM ANOVAs and t-tests; all tests were two-sided. We first used RM ANOVAs to test for main effects and interactions; we then followed up with t-tests as appropriate to determine the precise relationship between conditions. For control/replication experiments, t-tests were used to test a priori comparisons. The normality and sphericity assumptions were tested with the Shapiro–Wilk test of normality and Mauchly's test of sphericity. For the majority of comparisons, no evidence was found for violation of the assumption of normality or sphericity. For comparisons where normality was violated (n = 2), we applied a transformation to the data to normalize the distribution, then re-tested. For all comparisons, the same pattern of results was found following normalization; thus, for simplicity, we report the non-transformed comparisons. For comparisons where sphericity was violated (n = 2), we used the Greenhouse–Geisser corrected F-value. To determine the significance of relationships between brain metabolites and behavioural performance, and to compare the fit of models to behavioural data, we used Pearson's correlation, implemented with a correlation-analysis MATLAB toolbox. The normality assumption was tested with the Henze–Zirkler test of normality; no evidence was found for violation in the data. The boxplot rule, which relies on the interquartile range68, was used to reject bivariate outliers; outliers are shown in figures. Before analysis, eye movement data were screened to remove noisy and/or spurious recordings. Owing to the bespoke experimental setup and the time-sensitive nature of brain stimulation, the eye tracker would occasionally fail to track participants' eyes for an entire block.
Of the 28 blocks (19%) that were omitted from the analysis, 27 had < 1% of data collected. We omitted the remaining block because of variability in eye position signals that indicated noisy tracking performance. Finally, before averaging trials, we removed points exceeding the radius of the stimulus (4.5°). The simulation results shown in Figs.  are provided in the Supplementary Information.

Using this strategy, multiple sites in side chains, including aromatics, become site-selectively labeled and suitable for relaxation studies. Here we systematically investigate the use of site-selectively 13C-enriched erythrose as a suitable precursor for 13C labeled aromatic side chains. We quantify 13C incorporation in nearly all sites in all 20 amino acids and compare the results to glucose-based labeling. In general, the erythrose approach results in more selective labeling. While there is only a minor gain for phenylalanine and tyrosine side-chains, the 13C incorporation level for tryptophan is at least doubled. Additionally, the Phe ζ and Trp η2 positions become labeled. In the aliphatic side chains, labeling using erythrose yields isolated 13C labels for certain positions, like Ile β and His β, making these sites suitable for dynamics studies. Using erythrose instead of glucose as a source for site-selective 13C labeling enables unique or superior labeling for certain positions and thereby expands the toolbox for customized isotope labeling of amino-acid side-chains. NMR spectroscopy enables unique experimental studies on protein dynamics at atomic resolution. In order to obtain a full-atom view of protein dynamics, and to study specific local processes like ring-flips, proton transfer, or tautomerization, one has to perform studies on amino-acid side chains.
A key requirement for these studies is site-selective labeling with 13C. (The online version of this article (doi:10.1007/s10858-017-0096-7) contains supplementary material, which is available to authorized users.) Proteins are dynamic entities. They continuously undergo all kinds of dynamic processes on various time scales, like conformational rearrangements of the backbone, side chains and loops, ring-flips, proton transfers, changing conformations to alternative states, unfolding, domain reorientation, etc. While it is of fundamental interest to understand intrinsic protein dynamics, many of these processes are also directly linked to function. NMR spectroscopy is a powerful technique to study such dynamic processes on various time-scales at atomic resolution (Palmer). Such studies ideally require isolated 13C spins that are not affected by coupling with their neighbours. Aromatic residues are bulky and form a substantial part of protein hydrophobic cores. They are also over-represented in binding sites. While site-selective labeling based on glucose has made it possible to routinely perform advanced heteronuclear studies of dynamics in aromatic side chains, its 13C incorporation yields are far from optimal, typically reaching 20–50%. Furthermore, it is controversial whether additional deuteration is needed. Here, site-selectively 13C-enriched erythrose is used in combination with unlabeled glucose. This approach is very close to standard 13C labeling using glucose; the only modification is the additional presence of erythrose. Further, we quantify the 13C incorporation in aromatic side-chains and all other positions of the 20 amino acids for the first time and compare it to that achieved with glucose-based labeling. Erythrose labeling leads to a slight enhancement of 13C levels for Phe and Tyr δ, and roughly to a doubling for all proton-bound carbons in the six-ring moiety of Trp. Further, the method efficiently labels Phe (and Tyr) ζ and Trp η2 (2-13C erythrose) and thus makes these positions available for studies of dynamics for the first time.
Especially Phe ζ is of great potential interest in order to separate the effects of motions around the chi-2 and chi-1 dihedral angles. Additionally, His β becomes significantly 13C-labeled, and Ile β, Lys β and Arg β become isolated 13C labels. Finally, we show that the erythrose-based approach for site-selective 13C labeling can be easily combined with the glucose approach, allowing for more custom labeling. Here we present an easy and robust approach using selectively labeled erythrose; 1 g/l, or 2 g/l in the case of Trp, erythrose is usually used. Up to now, erythrose is only competitive in costs for desired Phe and Tyr ε* labeling (1 g/l 1-13C erythrose versus 2 g/l 2-13C glucose) with similar 13C incorporation levels. Labeling of all other positions is more expensive with erythrose, but can be justified by significantly higher 13C incorporation or by effectively labeling positions not labeled by 1-13C or 2-13C glucose. All isotopes were purchased from Cortecnet. An optimised coding sequence for human FK506 binding protein 12 was synthesised and sub-cloned into the plasmid pNIC28-Bsa4. Recombinant FKBP12 containing an N-terminal 6x His-tag was expressed in M9 minimal medium with 1 g/l …. In the case of erythrose labeling, site-selective 13C enriched erythrose was additionally present from the beginning at a concentration of 2 g/l, unless otherwise indicated. Protein expression was induced by addition of 1 mM IPTG at an OD600 of ~0.8. Protein expression was carried out for 18 h at 25 °C. The protein was purified on a His-trap column. Afterwards the His-tag was cleaved by Tobacco Etch Virus (TEV) protease. The protein was dialysed and collected as the flow-through of another His-trap column. Finally, the buffer was exchanged to NMR buffer and the protein was concentrated to ~12 mg/ml. NMR experiments were performed in …2O at 25 °C and a static magnetic field strength of 14.1 T.
For each sample, a 1H\u201315N plane of an HNCO, non-constant time 1H\u201313C HSQCs for the aliphatic and aromatic regions, and a 1D spectrum on 13C were recorded for quantification of 13C incorporation. Intensities of different samples (with possible slightly different concentration) were referenced to the averaged intensities of a 1H\u201315N HSQC. Assignments were checked using standard 3D experiments. Aromatic 13C relaxation studies were performed using L-optimized TROSY detected relaxation experiments D13C-enriched reference sample, volumes from both peaks split by the 13C\u201313C 1J coupling were added. All positions of interest described in this article resulting from erythrose labeling (and glucose labeling for comparison) were isolated and showed no signs of any 13C\u201313C 1J coupling. Intensities were normalized to the fully 13C enriched sample and expressed in %. By analysing multiple signals of the same kind, the relative error in the intensities of 13C covalently bound to 1H could be estimated to 1%. Errors for 13C not bound to 1H were estimated to 3%.The analysis was restricted to well resolved signals that only arise from the same kind of atom (residue type and position). For the fully 13C labeling of aromatic side chains in proteins was added together with unlabeled glucose to the minimal medium, ensuring that the growth rate of E. coli is essentially the same as for standard minimal media conditions. Furthermore, this approach allows for combined 13C labeling by erythrose and glucose. Preliminary tests showed that adding the erythrose at the very beginning does not lead to any scrambling in the aromatic side chains compared to the result obtained when adding it shortly before induction. Since the level of 13C incorporation is slightly higher when erythrose is added at the start this procedure was followed in all experiments. 
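The intensity referencing and normalization described above can be sketched as follows. This is a minimal illustration with hypothetical peak volumes; the function name and inputs are assumptions, since the article does not publish its processing scripts:

```python
import numpy as np

def c13_incorporation(site_volumes, hsqc_mean, ref_site_volumes, ref_hsqc_mean):
    """Estimate site-specific 13C incorporation in percent.

    site_volumes     : peak volumes of a given carbon site in the labeled sample
    hsqc_mean        : mean 1H-15N HSQC intensity of that sample (concentration reference)
    ref_site_volumes : corresponding volumes in the fully 13C-enriched sample
    ref_hsqc_mean    : mean 1H-15N HSQC intensity of the reference sample
    """
    # Correct for slightly different sample concentrations via the HSQC reference,
    # then normalize to the fully 13C-enriched sample and express in percent.
    scaled = np.asarray(site_volumes, dtype=float) / hsqc_mean
    ref_scaled = np.asarray(ref_site_volumes, dtype=float) / ref_hsqc_mean
    return 100.0 * scaled.mean() / ref_scaled.mean()
```

For the fully enriched reference sample, the volumes of both components of a 13C-13C 1J-split doublet would be summed before being passed in, as described above.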
The level of 13C incorporation was monitored for all aromatic side-chains, with exception of Tyr \u03b3, His \u03b3, and Trp \u03b42 and \u03b52, as well as for all other carbon sites in the 20 amino acids. All the missing positions do not have any attached proton. The resulting data provides information on background labeling, scrambling, and unexpected selective incorporations, as described below.Erythrose is a precursor that enters the metabolic pathways closer to the amino-acid product than does glucose, which is of great advantage for achieving site-selective 13C labeling occurs at the expected positions , 2\u00a0g/l erythrose were used for the following study. However, if one is only interested in Phe and Tyr, 1\u00a0g/l should be enough.The above mentioned erythrose labeling strategy leads to following general observations. In aromatic side-chains isolated ose Fig.\u00a0. Phe and13C incorporation levels in Phe, Tyr and Trp using differently labeled erythrose or glucose are summarized in Table\u00a013C labeled positions do not show any signs of 13C\u201313C couplings in the spectra in agreement with the low 13C incorporation for neighbored positions (Table\u00a013C) labeling leads to a higher incorporation yield in position \u03b4. Additionally position \u03b6 becomes accessible (with 2-13C erythrose), which is potentially very useful to differentiate fluctuations around chi-2 from fluctuations around chi-1, or ring flips from general conformational exchange. For the \u03b5 position the 13C incorporation level is very similar for the two carbon sources . As for Trp, position \u03b41 is not labeled at all, which is expected. In case of Trp \u03b53, \u03b63 and \u03b62, erythrose yields at least twice as high 13C incorporation. Additionally \u03b72 becomes efficiently labeled by 2-13C erythrose. 
Since His \u03b42 is not labeled , erythrose (1-13C and 3-13C) labeling allows for studies on Tyr \u03b5 without potential disturbance of His \u03b42, which shares the same spectral region is higher than for the \u03b5 (20% of \u03b51 and \u03b52) and z (40% of only one \u03b6). It is unclear if the additional non expected 13C , R2 and {1H-}13C NOE for identical positions between erythrose- and glucose- (1-13C and 2-13C) labeled samples, we observe an excellent agreement , the relaxation data are slightly different, which can be explained by the higher uncertainty of the glucose-based probe (due to the lower S/N).Both erythrose and glucose labeling lead to site-selective 13C do not play a role and potentially any method resulting in isolated 13C is equally well suited for relaxation studies. 13C relaxation dispersion experiments both for CPMG . However, a few positions are worth mentioning, which become efficiently labeled with isolated 13C. First, in histidine the \u03b1 and \u03b2 positions are significantly labeled approach as well, but not free from 13C\u201313C couplings. Furthermore, Lys \u03b4 and Arg \u03b3 are labeled at 22 and 16%, respectively, in an isolated fashion (SI Table 2), by 4-13C erythrose. These might be of interest as additional positions for dynamics studies in long and charged side-chains.Labeling with erythrose is more selective then glucose-based labeling, since it is a precursor closer to the aromatic side-chain end products. Therefore it is not surprising that the level of 13C-labeled erythrose in addition to unlabeled glucose, it is straightforward to combine site-selective labeling from both sources in order to get more positions per sample labeled or to increase 13C labeling of some sites. 
This strategy was verified by two approaches.Since the general labeling protocol presented here is based on site-selectively 13C1-glucose, which labels Phe and Tyr \u03b4, His \u03b51 and \u03b42 and Trp \u03b41 and \u03b53 \u03b6 and Trp \u03b72 labeling. The combined approach , they are of the same signal strength as the Phe \u03b6 signals. The only real drawback is observed for Trp \u03b53, whose labeling is rather poor in the glucose approach but even worse combined with erythrose. However, the combined approach is ideal to study the \u03b4 and \u03b6 positions of Phe and Tyr in a single sample, because the spectral regions are well separated.First, we combined 1- \u03b53 Fig.\u00a0a, black, \u03b72 Fig.\u00a0a, blue, ach Fig.\u00a0a, red gi13C1-glucose, which labels Phe and Tyr \u03b5, His \u03b42 and Trp \u03b41, \u03b63 and \u03b62 or 30% (Phe and Tyr \u03b5). As expected the 13C level in Trp \u03b62 decreases. This approach leads to results similar to that observed when using 2-13C1-glucose only, but with a moderate increase in 13C levels for Phe and Tyr \u03b5 and a large increase for the Trp \u03b63 (3-13C1-erythrose) or Trp \u03b62 (1-13C1-erythrose).Second, we combined 2- \u03b62 Fig.\u00a0b, black, \u03b63 Fig.\u00a0b, blue. ose Fig.\u00a0b, red la13C-13C neighbors and the combined erythrose\u2013glucose approaches described above, one can estimate to what extent a certain amino acid is built from glucose and erythrose precursors side-chains based on site-selectively 13C-enriched erythrose together with unlabeled glucose enables similar growth of cells as that resulting from growth on glucose only, and similar or improved 13C incorporation with a higher selectivity. However, labeling yields are far from 100%, which leaves room for further improvement. One way to increase the labeling yield would be to use cells with improved erythrose uptake. This will likely shift the ratio of amino acid biosynthesis more to the erythrose-based side. 
However, this would most likely come at the price of reduced selectivity. A more straightforward approach would be to use doubly 13C-enriched erythrose, which unfortunately does not appear to be commercially available at present. As long as the two 13C sites are separated in the erythrose they will lead to isolated 13C sites in the aromatic side-chains with the same level of incorporation as that obtained with the singly 13C-labeled erythrose. 1,3-13C2-erythrose would double the 13C incorporation of Phe and Tyr \u03b5 and label Trp \u03b63 and \u03b62 at the same time. 2,4-13C2-erythrose would label Phe and Tyr \u03b4 and \u03b6, and Trp \u03b53 and \u03b72 at the same time. 1,4-13C2-erythrose would label Phe and Tyr \u03b4 and \u03b5, but in this case the 13C sites are not expected to be isolated. Since the 13C incorporation in Phe and Tyr \u03b4 for 4-13C erythrose is higher for proton-bound carbons in the six ring moiety of Trp. Further Phe (and Tyr) \u03b6 and Trp \u03b72 become available for measuring dynamics for the first time. Labeling of Phe \u03b6 make it possible to separate the effects of motions around chi-2 and chi1 dihedral angles. His \u03b2 becomes significantly 13C labeled via erythrose, and isolated 13C appears in the Ile \u03b2, Lys \u03b2 and \u03b4, and Arg \u03b3 sites. Finally, we have shown that the present approach for site-selective 13C labeling can be easily combined with the glucose-based approach, to yield labeling patterns optimized for specific purposes.We have shown that erythrose as a source for site-selective Below is the link to the electronic supplementary material.Supplementary material 1 (DOCX 4198 KB)"} +{"text": "Ascites, the fluid accumulation in the peritoneal cavity, is most commonly seen in patients with end-stage liver disease (ESLD). Evaluating ascites or providing symptomatic relief for patients is accomplished by performing a paracentesis. 
Ascites leak from a paracentesis site can be a complication of the procedure and is associated with increased morbidity. Currently, the best options for these patients include medical management or surgical abdominal wall layer closure. Utilizing a blood patch provides an alternative approach to managing such patients. A two-center prospective case series was performed evaluating the efficacy of the blood patch in patients with significant persistent ascites leak following a paracentesis. About 30 mL of the patients\u2019 peripheral blood was used for the blood patch. Subjects were recruited over a period of one year and followed for 30 days after the procedure. A total of six patients were recruited for this study. Subjects underwent placement of autologous blood patch at the site of the ascites leak and 100% had resolution of the leak within 24 h. None of the subjects developed any complications of the procedure. This study shows that an autologous blood patch is an effective, low-risk treatment method for ascites leaks following a paracentesis. It is a simple bedside procedure that can reduce morbidity in patients with end-stage liver disease. Ascites is the accumulation of fluid within the peritoneal cavity. Though there are numerous etiologies of ascites, the most common cause in the United States, accounting for approximately 80% of cases, is cirrhosis . The evaAlternatively, autologous blood patches have widely been used in settings of persistent cerebral spinal fluid (CSF) leaks following a lumbar puncture, persistent air leaks in pneumothorax, as well as post-amniocentesis amniorrhea . No studAn Institutional Review Board (IRB)-approved, two-center prospective case series was performed. Patients with persistent drainage from non-closing paracentesis tracts were recruited for one year from the inpatient setting at Rhode Island Hospital and The Miriam Hospital in Providence, Rhode Island. 
Subjects were included in the study if they had ascites diagnosed on physical exam or radiographic imaging, presence of an ascites leak from a recent therapeutic or diagnostic paracentesis site with failure of improvement with conservative management, and persistence of leakage for three or more consecutive days. Patients were excluded if there was presence of an overlying skin infection, bacteremia, severe coagulopathy or thrombocytopenia , or ascitic leak secondary to other etiologies aside from non-closing paracentesis tract, such as umbilical hernia rupture, leaking trocar sites, or abdominal surgical site. An IRB-approved consent form was reviewed with included subjects prior to the procedure. Patients were followed for response and clinical status immediately post-procedure, as well as at 24 h, 7 days, and 30 days following the intervention. Subjects were monitored for risks including site infection, allergic reaction, ascitic fluid leakage from blood patch site, and peritonitis.The procedure was performed at the bedside under sterile technique with sterile gloves, masks, and antiseptic solution, as described by Thomsen et al. regarding the performance of paracentesis . Approxi2. The average number of days of ascites leakage was 6 \u00b1 1 with the amount of fluid leak ranging from 50 to 1900 mL per day.A total of six patients who met the inclusion and exclusion criteria were recruited for this study. The subjects were all male with a mean age of 58.7 \u00b1 10.7 years . Most haAll six subjects underwent the placement of an autologous blood patch. Due to issues with peripheral venous access, the amount of venous blood used for the patch in two patients was 17\u201318 mL. The remainder of subjects received a 30 mL blood patch. Only one patient experienced ascites leakage immediately following the placement of the blood patch . All patAscites leak following either therapeutic or diagnostic paracentesis may resolve spontaneously with conservative management. 
However, an autologous blood patch should be considered in patients with persistent significant leakage. The current standard of treatment is medical management. In ESLD patients with significant ascites leak, including from paracentesis sites and ruptured umbilical hernias, surgical closure of the abdominal wall layers becomes necessary. This study demonstrated the excellent efficacy of a blood patch for a persistent ascites leak: treatment with a blood patch resulted in 100% resolution of the ascites leak by 24 h in all subjects. One patient had persistent leakage immediately following the procedure; however, this resolved by 24 h. The use of an autologous blood patch for post-procedural leaks has been described in the setting of CSF leaks following lumbar punctures and spinal anesthesia, as well as persistent air leaks in pneumothoraces. There are several underlying mechanisms that may explain the effect of a blood patch. Injection of blood adjacent to the leak causes the displacement of volume and compression of the subcutaneous tissues [9,10]. The recommended blood volume in an epidural blood patch is controversial and has ranged from 2 to 20 mL [11]. This is a preliminary study to suggest the feasibility and advantages of utilizing a blood patch for persistent ascites leak. Although ongoing medical management may eventually have resulted in resolution of the ascitic leaks, each patient was referred for this procedure after several days of persistent leak despite conservative treatment. In addition, medical management is a significantly more expensive approach to this problem, since a protracted leak can lead to prolonged hospital stays. Furthermore, persistent fluid collection or a damp dressing against the skin places the patient at risk for skin breakdown at the site. This procedure reduces patient risks and discomfort. Limitations of this study include a small sample size consisting entirely of males. 

This is likely due to the rarity of ascites leaks seen in the inpatient setting. A larger sample size could be possible in outpatient settings where ESLD patients requiring repeated paracenteses are seen more frequently.This study demonstrated that the blood patch for persistent leaks is cost-effective with a high efficacy and safety profile. Additional studies are needed to further refine the intervention, such as the assessment of appropriate blood volume. Though controlled trials could be performed, they would not be blinded and would serve only to document how much longer the fluid leak persists in those treated medically. Studies may also be considered to compare the blood patch with a normal saline injection as a control group, which may help further elucidate the underlying mechanism and efficacy of a blood patch, or alternative therapeutic options. A prospective trial studying epidural blood patch with epidural saline infusion did show effectiveness with saline, though there was reduced efficacy in the saline group when compared to the blood patch ."} +{"text": "We present SubMachine, a collection of web\u2010based tools for the interactive visualization, analysis, and quantitative comparison of global\u2010scale data sets of the Earth's interior. SubMachine focuses on making regional and global\u2010scale seismic tomography models easily accessible to the wider solid Earth community, in order to facilitate collaborative exploration. We have written software tools to visualize and explore over 30 tomography models\u2014individually, side\u2010by\u2010side, or through statistical and averaging tools. SubMachine also serves various nontomographic data sets that are pertinent to the interpretation of mantle structure and complement the tomographies. 
These include plate reconstruction models, normal mode observations, global crustal structure, shear wave splitting, as well as geoid, marine gravity, vertical gravity gradients, and global topography in adjustable degrees of spherical harmonic resolution. By providing repository infrastructure, SubMachine encourages and supports community contributions via submission of data sets or feedback on the implemented toolkits. Web\u2010based tools for the interactive visualization, analysis, and quantitative comparison of global\u2010scale, volumetric (3\u2010D) data sets of the subsurfaceFocus on global seismic tomography models from body waves, surface waves, and normal modes (>30 models currently implemented)Additional tools for interacting with related data sets: plate tectonic reconstructions, topography, geoid, marine gravity, normal mode observations, etc Applied on a planetary scale, it uses seismic waves, generated by tens to thousands of moderate to large earthquakes, to sample and estimate the 3\u2010D spatial distribution of heterogeneities in the crust and mantle. Such heterogeneities cause seismic waves to propagate at slightly faster or slower velocities than average ambient mantle or crust, the structure of which is reasonably well known of the Incorporated Research Institutions for Seismology (IRIS), or by contacting their authors. For the SubMachine portal, we have assembled more than 30 global body wave, surface\u2010wave, and normal mode models and have processed them into a common format. In this first release of the SubMachine web portal, software tools to visualize and explore the models\u2014individually, side\u2010by\u2010side, or through statistics and averaging tools\u2014are provided. 
The appearance of SubMachine's home page is shown in Figure The full, volumetric parameter data sets of seismic tomography models can usually be obtained freely from published online supplements, from the Earth Model Collaboration (EMC) website , global\u2010scale Earth data sets. The implementation of the IRIS EMC in 2011 was a major step toward the collection, homogenization, and dissemination of seismological processing outputs, and this community\u2010supported repository of mainly tomography models has introduced a uniform model and metadata format has been influential as a pioneering effort in serving the community with a collection of global tomographic models in a homogenized format, on a well\u2010maintained website, and with basic visualization scripts.In the preportal era, the tomographic model comparison by Becker and Boschi (http://http://www.geomapapp.org/), the OneGeology Portal , and the GPlates Portal , a logical layer , and a presentation layer . The user interface, written in HTML, PHP, and JavaScript collects user inputs and sends them to the logical layer, which creates variables based on the user inputs and passes them to the visualization and statistical analysis tools. The codes of the logical layer, written mainly in Python and PHP, interact with the data layer to extract slices or other subsets of the volumetric and surface data sets, and to generate and store the plots and other outputs.SubMachine's current data holdings take up \u223c20 GB of storage on a server at the University of Oxford. Tomography models are data sets in three spatial dimensions, as are tomography vote maps, which are one\u2010bit thresholded stacks of several tomography models. Surface data sets are in two spatial dimensions, e.g., plate reconstructions, geoid, gravity, or topography. Data sets can have an additional time property, e.g., plate reconstructions evolving over geologic time. 
By defining a mapping function, time\u2010dependent data sets can be linked to tomography, for example, by mapping geologic time to mantle depth when considering the sinking rates of subducted slabs. Thus, plate reconstruction models or hotspot locations can be combined with tomography models or vote maps to produce spatiotemporal comparisons between surface dynamics and the Earth's interior structure.SubMachine puts an emphasis on facilitating user\u2010defined model comparisons. Tomography models can be homogenized for display against the same reference Earth models. Multiple 3\u2010D and 2\u2010D data sets can be plotted with the same map projections and coloring schemes. Vote maps using adjustable voting criteria are another comparison tool, as are tools to compute and plot model statistics.http://submachine.earth.ox.ac.uk). These tabs are tomography \u201cDepth slices,\u201d \u201cCross sections,\u201d \u201cVelocity histograms,\u201d and \u201cVelocity\u2010Depth profiles\u201d (section Each of the subsections that follow discusses one webpage (\u201ctab\u201d) of the SubMachine portal for North America. Table The 36 models were accessed in many different original parameterizations. Horizontally, these can be regular or irregular localized grids, or spherical harmonic basis functions; in the third dimension, regular or irregular depth layers, possibly interpolated by spline functions. We have linearly interpolated each model on a regular horizontal grid of 0.5\u00b0 \u00d7 0.5\u00b0 using Generic Mapping Tools , our interpolation extracted uniform depth increments of 50 km. Moreover, for each discontinuity in the background model, two depths closely bracketing that discontinuity were extracted and stored in the data set, e.g., depth slices at 650 and 670 km depth in case of a 660 km discontinuity. 
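On-the-fly linear interpolation from such a precomputed regular grid can be sketched as below, using SciPy's RegularGridInterpolator. The grid spacing matches the 0.5-degree / 50 km sampling described above, but the toy model values and variable names are illustrative, not SubMachine's actual code:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical precomputed grid matching the sampling described above:
# 0.5 x 0.5 degree nodes and 50 km depth increments.
lats = np.arange(-90.0, 90.5, 0.5)
lons = np.arange(-180.0, 180.5, 0.5)
depths = np.arange(0.0, 2900.0, 50.0)

# Toy model values (dv/v set equal to depth) so the interpolation is easy to
# check; a real deployment would load the precomputed model slices instead.
dvv = np.broadcast_to(depths, (lats.size, lons.size, depths.size))

interp = RegularGridInterpolator((lats, lons, depths), dvv)

# A user-requested point between grid nodes is interpolated linearly on the fly.
value = interp([[51.3, -0.7, 812.0]])  # lat, lon, depth (km)
```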
If a user requests data that do not coincide with points on this fine, precomputed grid, SubMachine interpolates linearly on the fly.The \u201cTomography\u201d tab includes subpages for \u201cDepth slices,\u201d \u201cCross sections,\u201d \u201cVelocity histograms,\u201d and \u201cVelocity\u2010Depth profiles,\u201d the functionalities and outputs of which are illustrated in Figures P wave and S wave tomography models. Plotting two or more models side\u2010by\u2010side facilitates model comparison and currently sets SubMachine apart from other online visualization tools. This comparison reveals for example overwhelming agreement on two large\u2010scale, low\u2010velocity structures beneath the Pacific and Africa , and on fast\u2010velocity anomalies beneath Eastern Asia and the Americas.Figure The user can default to viewing all tomography models in their originally published forms, i.e., relative to the spherical reference Earth models used in their generation. Alternatively, models can be homogenized and displayed relative to a single reference model, one of \u201cPREM,\u201d \u201cIASP91,\u201d or \u201cAK135\u201d , color palette , and color mapping .SubMachine currently contains two tools for the statistical analysis of tomographic models: \u201cVelocity histograms\u201d , and \u201cVelocity\u2010Depth profiles,\u201d computed over many such slices. Figure P wave and S wave models. It demonstrates that most current tomography models show low\u2010velocity material (presumably hot upwelling) in a continuous connection from the core\u2010mantle boundary to the surface. They also agree on many details of its geometry, which is tilted rather than a straight vertical plume conduit.Figure The engine of the \u201cTomography\u2010Cross\u2010sections\u201d page is written in Python. 
Python VTK libraries Quammen, are empl2.2k\u2010means cluster analysis, with a focus on slow\u2010velocity regions with a pixel if its seismic velocity is found to be faster/slower than ambient mantle. Five implemented threshold metrics permit to choose a lower or higher bar for what \u201cconfidently\u201d means. In the example of a high\u2010velocity vote map, the \u201czero\u201d metric includes all areas that are seismically fast in that depth slice (dv/v\u2009>\u20090). The stricter \u201cmean\u201d metric includes only regions of dv/v\u2009>\u2009v0, where v0 is the average value of all occurrences of dv/v\u2009>\u20090. Similarly, \u201cstd,\u201d \u201crms,\u201d or \u201cmedian\u201d include only regions of dv/v\u2009>\u2009v1, where v1 is the standard deviation, root mean square or median of a model's dv/v histogram at that depth, respectively. Figure A vote map is based on depth slices of Regions with higher vote counts indicate stronger agreement across models about the presence of anomalous mantle . Figures The rationale for computing vote maps is to increase the confidence in tomographically imaged structure by \u201cpolling\u201d different models that are at least partially decorrelated due to using different methods or data. Since there is a large overlap in earthquake locations and seismic stations across different global\u2010scale inversions, the resulting models will necessarily be correlated to some extent, and hence artifacts in vote maps need not average out. Regions of high vote count do not automatically mean that an anomaly is actually present in the Earth, and low vote counts do not automatically mean that an anomaly is absent.In more favourable situations, it is possible to average tomography models that were computed from more decorrelated data sets. The prime example is body wave versus surface\u2010wave models in the upper 300 km of the mantle. 
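A minimal sketch of the vote-map construction for high-velocity anomalies, following the five threshold metrics described above. This is an illustration, not SubMachine's actual implementation; the one-sided "mean" versus whole-histogram "std"/"rms"/"median" conventions follow the text:

```python
import numpy as np

def fast_vote_map(slices, metric="mean"):
    """Stack one-bit thresholded high-velocity anomalies across models.

    slices : list of 2-D dv/v arrays on a common grid (one per model)
    metric : 'zero', 'mean', 'std', 'rms' or 'median', as described above
    """
    votes = np.zeros(np.asarray(slices[0]).shape, dtype=int)
    for dvv in slices:
        dvv = np.asarray(dvv, dtype=float)
        if metric == "zero":
            thresh = 0.0
        elif metric == "mean":       # mean of the positive anomalies only
            thresh = dvv[dvv > 0].mean()
        elif metric == "std":        # statistics of the full dv/v histogram
            thresh = dvv.std()
        elif metric == "rms":
            thresh = np.sqrt(np.mean(dvv ** 2))
        elif metric == "median":
            thresh = np.median(dvv)
        votes += dvv > thresh        # each model casts a 0/1 vote per pixel
    return votes
```

For slow-velocity vote maps the same logic applies with the signs flipped.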
Although a \u201ctrue\u201d artifact is likely to be present only in a subset of models and thus not rise above a moderately low count, the judicial choice of constituent models for a vote map remains the user's responsibility\u2014as uncorrelated as possible, e.g., by including models made from different types of data. Areas of moderate vote count invite further scrutiny regarding the kinds of artifacts typically produced by different imaging methods. This requires studying the original publications of a vote map's constituent models, including resolution tests and other measures of model uncertainty, where provided.33.1To facilitate the linking of mantle structure with plate motion histories, SubMachine provides the functionality to overlay reconstructed plate boundaries and/or coastlines on seismic tomography models and vote maps. These reconstructions present different, relative and absolute plate motion histories, and their corresponding publications are listed in Table This initial release of SubMachine implements two kinds of mapping functions between depth in a tomographic model and time in a plate reconstruction. For any given depth slice, the user can manually specify a time for the superimposed plate reconstruction. Alternatively, SubMachine can calculate the reconstruction time automatically according to a user\u2010specified sinking rate (for a subducting slab). This can either be a single rate, if slab sinking is assumed to be uniform throughout the mantle, or two separate rates for the upper and lower mantle. Recent literature has proposed whole\u2010mantle slab\u2010sinking rates between \u223c10 and 20 mm/yr of the underlying heterogeneity sampled by a particular normal mode below each point.00S21\u201330, 01S11\u201314, 02S15\u201317, 02S25, and 03S26) are supported, based on Koelemeijer et al. . 
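The depth-to-time mapping via assumed slab-sinking rates can be sketched as follows. The default rate is a placeholder within the ~10-20 mm/yr range quoted above, not SubMachine's default:

```python
def depth_to_age(depth_km, rate_upper=13.0, rate_lower=None, boundary_km=660.0):
    """Map slab depth (km) to reconstruction age (Myr).

    Rates are in mm/yr, which equals km/Myr, so age = depth / rate.
    With rate_lower=None a single whole-mantle rate is assumed; otherwise
    slabs sink at rate_upper above `boundary_km` and rate_lower below it.
    """
    if rate_lower is None or depth_km <= boundary_km:
        return depth_km / rate_upper
    # time spent crossing the upper mantle plus time in the lower mantle
    return boundary_km / rate_upper + (depth_km - boundary_km) / rate_lower
```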
Currently, 19 normal modes that can be saved and shared.Disadvantages of web portals include access limitations when the website is under maintenance or when the server goes down. The speed of computations on a server, although acceptable, is somewhat decreased compared to local machines.SubMachine and other tomography portals , and the values 0 or 2 in areas where they agree.SubMachine supports both quantitative and qualitative comparisons of tomography models in various ways. It offers instant side\u2010by\u2010side comparison of any slice through any number of tomography models, in a uniform, customizable format. The raw dv/v values underlying the plots can be downloaded and processed by the user, in order to visually or computationally highlight features of interest. Tomography models can be queried and compared statistically via the histograms and velocity\u2010depth profiles. Vote maps can be constructed by combining two or more tomography models. In particular, vote maps for By contrast, model comparisons in the literature typically feature raw velocity data from only a few, readily accessible models, or resort to the qualitative comparison of graphics reproduced from original publications, which often use different reference models, section locations, and color schemes.4.3S wave models, only the visualization of the isotropic (Voigt) average is currently supported, but we plan to extend this to the visualization of anisotropy. Depending on community interest and contributions, SubMachine could also host and serve global\u2010scale magnetotelluric models, or outputs of mantle convection simulations. 
Work is underway on adding new functionalities, including the superimposition of seismic event locations retrieved from various catalogues; model sequences along an L\u2010curve, which permits artifacts to be identified more readily; or entire resolution matrices, if available. Models can be made available upon request where we have their authors' permission to share them."}
We aimed to evaluate the relationship between anti-FXa methods calibrated with low molecular weight heparin (LMWH) and with drug-specific calibrators, and to determine whether a commercial LMWH anti-FXa assay can be used to exclude the presence of clinically relevant concentrations of rivaroxaban and apixaban. An LMWH-calibrated reagent was used for anti-FXa activity measurement. Innovance heparin calibrated with rivaroxaban and apixaban calibrators was used for quantitative determination of the FXa inhibitors. The analysis showed good agreement between LMWH-calibrated and rivaroxaban-calibrated anti-FXa activities (\u03ba = 0.76) and very good agreement with apixaban-calibrated anti-FXa activities (\u03ba = 0.82). LMWH anti-FXa activity cut-off values of 0.05 IU/mL and 0.1 IU/mL are suitable for excluding the presence of clinically relevant concentrations (< 30 ng/mL) of rivaroxaban and apixaban, respectively. Concentrations above 300 ng/mL exceeded the upper measurement range of the LMWH anti-FXa assay and cannot be determined by this method. The LMWH anti-FXa assay can therefore be used in emergency clinical conditions for ruling out the presence of clinically relevant concentrations of rivaroxaban and apixaban. However, the LMWH anti-FXa assay is not appropriate for their quantitative determination as an interchangeable method. Direct oral anticoagulants (DOACs) have been increasingly used for the prevention and treatment of thromboembolic diseases in recent years. Compared to vitamin K antagonists (VKAs), clinical application of DOACs does not require routine coagulation monitoring. 
However, according to present expert opinion, there are special clinical situations in which laboratory measurement of DOACs in plasma should be performed, including bleeding or thromboembolic events (acute stroke), emergency surgical or invasive procedures, extremes of body weight, renal and/or liver failure resulting in reduced drug elimination, and suspected non-compliance or overdose. The most widely available screening coagulation tests are the prothrombin time (PT) and activated partial thromboplastin time (APTT). Knowing the impact of DOACs on the results of screening coagulation tests is a precondition for the correct interpretation of these assays. However, PT and APTT are not appropriate for direct FXa inhibitors, neither for quantifying drug concentration and reliably assessing their anticoagulant effect, nor for excluding the presence of clinically relevant drug concentrations in the circulation, owing to the large differences in sensitivity among individual commercial PT and APTT reagents. Samples from patients taking rivaroxaban and apixaban were collected from July to December 2018 at the Department of Neurology and Department of Cardiovascular Diseases, Sestre Milosrdnice University Hospital Center. A total of 61 samples from patients taking rivaroxaban (31 peak and 30 trough) and a total of 53 samples (30 peak and 23 trough) from patients taking apixaban were used in the study. Blood samples were taken from the same patients on the same day to obtain both trough (immediately prior to the next drug dose) and peak (two hours after drug administration) concentrations of rivaroxaban and apixaban in plasma. All patients were treated with standard, equal drug doses for the non-valvular atrial fibrillation (NVAF) clinical indication. Blood was drawn into tubes containing 3.2% trisodium citrate (volume 3.5 mL). All samples were centrifuged at room temperature for 10 minutes at 1800 x g to obtain platelet-poor plasma, aliquoted into labelled tubes and stored at -20\u00b0C until analysis. 
Samples for the study were chosen to cover as much of the measurement range (up to 500 ng/mL) as possible for both rivaroxaban and apixaban concentrations. All coagulation assays were performed on a Behring Coagulation System XP (BCSXP) analyser. Low molecular weight heparin anti-FXa activity was determined in all samples by a chromogenic method using the original manufacturer's reagent kit and LMWH calibrator. Results were expressed in anti-FXa heparin-equivalent international units (IU/mL). The concentrations of rivaroxaban and apixaban were measured using a specific chromogenic anti-FXa assay, calibrated with specific calibrators for rivaroxaban and apixaban. Concentrations of rivaroxaban and apixaban were expressed in ng/mL. Analysis was done to determine the cut-off values of LMWH-calibrated anti-FXa activity corresponding to rivaroxaban and apixaban values < 30 ng/mL and < 50 ng/mL. Those cut-off values were used as suggested by Levy et al. for the treatment of patients with excessive bleeding and for perioperative management. The rivaroxaban concentrations obtained by the chromogenic anti-FXa method with drug-specific calibrators ranged from 62 to 433 ng/mL for peak and from 4 to 83 ng/mL for trough concentrations. The apixaban peak and trough concentrations ranged from 73 to 415 and from 13 to 98 ng/mL, respectively. Results of rivaroxaban and apixaban peak and trough concentrations, as well as LMWH anti-FXa activities, are presented in the tables. Kappa statistics showed good agreement between LMWH-calibrated and rivaroxaban-calibrated FXa activities (\u03ba = 0.76). In this study, we evaluated the relationship between two chromogenic anti-FXa methods, one calibrated with LMWH and the other with drug-specific calibrators for rivaroxaban and apixaban. 
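The cut-off determination described above can be sketched as follows. This is a minimal illustration of the idea, not the study's statistical method: it finds the largest activity value below which no sample had a drug concentration at or above the clinically relevant threshold, so a result under that cut-off excludes relevant drug levels. All paired values are invented for the example.

```python
def exclusion_cutoff(activities_iu, concentrations_ng, threshold_ng=30.0):
    """
    Largest anti-FXa activity (IU/mL) below which no sample had a drug
    concentration >= threshold_ng; an activity under this cut-off excludes
    clinically relevant drug levels in this data set.
    """
    relevant = [a for a, c in zip(activities_iu, concentrations_ng) if c >= threshold_ng]
    if not relevant:
        return None  # no sample reached the threshold; cut-off undefined
    return min(relevant)

# Illustrative paired measurements (assumed, not the study's data)
acts = [0.02, 0.04, 0.12, 0.35, 0.80, 1.40]   # LMWH anti-FXa activity, IU/mL
concs = [5, 18, 35, 95, 210, 380]             # drug concentration, ng/mL
print(exclusion_cutoff(acts, concs))  # 0.12 for this toy data set
```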
The main two questions that we wanted to answer were whether LMWH-calibrated anti-FXa activity assay can be used: 1) for excluding clinically relevant concentrations of direct anti-FXa drugs in circulation and 2) for quantifying rivaroxaban or apixaban concentrations in plasma as an alternative method to the chromogenic assays calibrated with specific drug.et al. who reported that anti-FXa response is not the same at the concentrations of 30 and 50 ng/mL of rivaroxaban and apixaban of the International Society on Thrombosis and Hemostasis (ISTH) recommended administration of the reversal agent in the perioperative setting, if plasma concentration of direct FXa inhibitor is above 30 ng/mL, in order to ensure adequate haemostasis , whereas recommended measurement unit for direct FXa inhibitors is ng/mL. Further, there is an evidence of substantial variability between different commercial kits and calibrators for LMWH-calibrated anti-FXa assays used in anti-FXa assays intended for different anti-FXa drugs, a group of authors have recently proposed a new concept that could contribute in solving this problem. These authors suggested a concept for a single anti-FXa based laboratory assay for all drugs that directly or indirectly inhibit FXa. The initiators of the idea of so called \u201eDa-Xa inhibition assay\u201c suggest a new test that would report inhibitory activity rather than drug concentration or IU/mL (in vitro. Those results are significantly different from the values in the plasma of patients treated with DOACs (The strength of this study relies on the fact that the results are obtained in plasma of patients treated with both direct anti-FXa inhibitors, rivaroxaban and apixaban, unlike several previous studies that reported results of drug concentrations in plasma samples where the certain concentration of DOACs was added The limitation of our study is that it was restricted to NVAF as the only clinical indication included. 
However, this clinical condition still represents the most common indication for the introduction of DOACs in patient management, making this a representative patient population for the purpose of this research. Furthermore, a possible limitation is that we compared only one LMWH anti-FXa assay with one drug-calibrated anti-FXa assay for rivaroxaban and apixaban available on the market. However, our intention was to compare the two methods in use at our laboratory in order to apply the results to the management of patients treated at our institution. In conclusion, the findings of this study will improve the understanding of both the possibilities and the limitations of LMWH-calibrated chromogenic anti-FXa assays in patients treated with direct anti-FXa inhibitors. Our results strongly suggest that the LMWH-calibrated anti-FXa assay has potential as an alternative first-line method for excluding the presence of significant levels of rivaroxaban and apixaban if a laboratory has no specific chromogenic anti-FXa assay calibrated with the particular drug available. However, in the case of a positive result suggesting the presence of an oral anti-FXa inhibitor in plasma, a specific chromogenic assay for the particular anti-FXa drug should be performed for quantitative determination. The use of the LMWH-calibrated anti-FXa assay to quantify rivaroxaban and apixaban concentrations cannot be recommended in routine clinical practice as the only method for quantification of anti-FXa medications. It is of crucial importance that both laboratory experts and clinicians who treat patients understand the opportunities and limitations of heparin-calibrated anti-FXa assays in patients treated with anti-FXa inhibitors."} +{"text": "Burkholderia pseudomallei is a gram-negative, facultative intracellular bacterium, which causes a disease known as melioidosis. Professional phagocytes represent a crucial first line of innate defense against invading pathogens. 
Uptake of pathogens by these cells involves the formation of a phagosome that matures by fusing with early and late endocytic vesicles, resulting in killing of ingested microbes. Host Rab GTPases are central regulators of vesicular trafficking following pathogen phagocytosis. However, it is unclear how Rab GTPases interact with B. pseudomallei to regulate the transport and maturation of bacteria-containing phagosomes. Here, we showed that host Rab32 plays an important role in mediating antimicrobial activity by promoting phagosome maturation at an early phase of infection with B. pseudomallei. We also demonstrated that the expression level of Rab32 is increased through the downregulation of miR-30b/30c synthesis in B. pseudomallei-infected macrophages. Subsequently, we showed that B. pseudomallei resides temporarily in Rab32-positive compartments with late endocytic features. Rab32 enhances phagosome acidification and promotes the fusion of B. pseudomallei-containing phagosomes with lysosomes to activate cathepsin D, resulting in restricted intracellular growth of B. pseudomallei. Additionally, Rab32 mediates phagosome maturation depending on its guanosine triphosphate/guanosine diphosphate (GTP/GDP) binding state. Finally, we report the previously unrecognized role of miR-30b/30c in regulating B. pseudomallei-containing phagosome maturation by targeting Rab32 in macrophages. Altogether, we provide novel insight into a host immune-regulated cellular pathway against B. pseudomallei infection that is partially dependent on the Rab32 trafficking pathway, which regulates phagosome maturation and enhances the killing of this bacterium in macrophages. Burkholderia pseudomallei is a gram-negative intracellular bacterium and the etiological agent of melioidosis. Little is known about the host innate immune system, which is engaged in a continuous battle against this pathogen and may contribute to the outcomes of melioidosis. 
Recently, Rab32, a Rab GTPase was shown to be a critical regulator of a host defense pathway against intracellular bacterial pathogens. However, the exact mechanism of how Rab32 contributes to the restriction of intracellular pathogens is not completely understood. In this study, we determined that the infection of macrophages with B. pseudomallei resulted in the upregulation of Rab32 expression through the inhibition of miR-30b/30c expression. Subsequently, Rab32 is recruited to the B. pseudomallei-containing phagosomes and promotes the fusion of the phagosomes with lysosomes, which results in the increased exposure of B. pseudomallei to lysosomal acid hydrolases CTSD, thus limiting the intracellular growth of B. pseudomallei at an early phase of infection in macrophages. Our findings establish for the first time that Rab32 plays an important role in suppressing the intracellular replication of B. pseudomallei by modulating phagosome maturation in macrophages, providing a new insight into the host defense mechanisms against B. pseudomallei infection. Host innate immune cells, particularly professional phagocytes, possess a wide range of antimicrobial defense mechanisms to eliminate the invading microbes . PhagocyBurkholderia pseudomallei is a facultative intracellular pathogen that causes the fatal infectious disease melioidosis, which has broad-spectrum clinical manifestations including pneumonia, localized abscesses, and septicemia [B. pseudomallei are cutaneous inoculation, ingestion, and inhalation [B. pseudomallei can invade and survive in both phagocytic and non-phagocytic cells [B. pseudomallei have been elucidated [B. pseudomallei can escape from the phagosome into the cytosol of phagocytic cells where it replicates and acquires actin-mediated motility, avoiding killing by the autophagy-dependent process [B. pseudomallei adapt to the intraphagosomal environment and manipulate the phagocytic process remains unknown. 
Therefore, to identify host cell molecules and pathways utilized by B. pseudomallei for intracellular survival, we initially investigated the localization and expression of 19 Rab GTPases, which are critical regulators of membrane trafficking pathways. Using overexpression of EGFP-tagged Rab GTPases, we observed considerable localization of Rab32 with B. pseudomallei-containing phagosomes, and increased Rab32 expression.pticemia . Melioidhalation , 11. B. ic cells , 13. Somucidated \u201318. More process \u201322. HoweLegionella-containing vacuoles and appear to promote the intracellular growth of L. pneumophila [S. Typhi and L. monocytogenes [Rab32 is a multifunctional protein, depending upon its cellular localization and the cell type. It is well established that Rab32 is involved in the biogenesis of lysosome-related organelles (LROs) such as melanosomes, T cells, and platelet-dense granules , 24. In umophila . Some reytogenes , 27, andB. pseudomallei infection is largely unknown. In this study, we aimed to explore the role of Rab32 in host-dependent immune mechanisms against B. pseudomallei infection. We found that B. pseudomallei upregulates the expression of Rab32 in infected macrophages by downregulating the expression of miR-30b/30c. Moreover, Rab32 is a functional GTPase that is required for limiting intracellular replication of B. pseudomallei by promoting the fusion of phagosomes with lysosomes.In addition to Rab GTPases can regulate phagosome maturation, increasing evidence indicates that microRNAs (miRNAs) are not only crucial regulators involved in modulating host innate immune responses to pathogens \u201330, but B. pseudomallei from replicating within the host cells, we used RAW 264.7 macrophages infected with B. pseudomallei and measured the expression of Rab32 by using quantitative real-time-PCR (qRT-PCR) and western blot analyses. As shown in B. pseudomallei infection as compared with an uninfected control in RAW264.7 cells. 
Consistent with the observed Rab32 which is upregulated in time course experiments; similar results were observed when MOI dependency was tested at 2 h post-infection -Rab32 was strongly recruited to the B. pseudomallei-containing phagosomes 1 h to 6 h post-infection (p.i.) i.e., after bacterial internalization , while 121 miRNAs were significantly downregulated (P < 0.05). B. pseudomallei infection at 4 h. MiRNAs negatively regulate the expression of target genes mainly by interaction in their 3' untranslated region (UTR). Thus, we screened for miRNAs whose expression downregulated after B. pseudomallei infection by using target prediction tools: TargetScan and miRDB, as candidate miRNAs for the increased Rab32 expression specific. We found members of the miR-30 family were predicted to target the 3' UTR of Rab32 mRNA and other Gram-negative bacteria, like Salmonella typhimurium and Escherichia coli as controls. We found no significant changes in the expression levels of miR-30b/30c after infection with S. typhimurium, B. thailandensis, and E. coli and located at different genomic positions . To conft manner . We thene miRNAs . The obsagosomes . Moreove E. coli . Taken tMiRNAs are small non-coding RNAs that negatively regulate post-transcriptional expression of target genes, which guide the binding of the miRNA-induced silencing complex (miRISC) to regions of partial complementarity located mainly within 3' untranslated region (3'UTR) of target mRNAs, resulting in mRNA degradation and/or translational repression , 36. To B. pseudomallei infection upregulates Rab32 expression and recruits Rab32 to the bacterium-containing vacuole and late (Rab7) endosomes with B. pseudomallei phagosomes were assessed from 0.5 to 4 h after infection and declined afterward colocalizations with agosomes . The peragosomes . Moreoveagosomes . We obseagosomes . Taken tB. pseudomallei infection, we investigated the effects of knockdown of Rab32 expression on B. pseudomallei phagosomes. 
We used a small interfering RNA (siRNA)-mediated knockdown of Rab32 expression in RAW264.7 macrophages. To test the silencing efficiency, the expression levels of Rab32 were analyzed by qRT-PCR and Western blot assays. We found that the mRNA and protein levels of endogenous Rab32 were significantly decreased in RAW264.7 cells transfected with Rab32 siRNA . Similarly, the percentage of LysoTracker-positive B. pseudomallei phagosomes were also obviously lower in Rab32 siRNA transfected macrophages than that in control siRNA transfected macrophages , we further observed the transport of B. pseudomallei phagosomes in Rab32-depleted RAW264.7 cells. At 2 h p.i., TEM results showed that about 80% of the B. pseudomallei were intact and surrounded by the single-membrane phagosomes in control siRNA-transfected macrophages and GTPase activating proteins (GAPs) that influence their subcellular localization and functions [B. pseudomallei in RAW264.7 cells and EGFP-Rab32-Q83L significantly increase the recruitment of Rab32 to B. pseudomallei-containing phagosomes respectively, but not EGFP-Rab32-T37N (p = 0.001) or EGFP-Rab32-Q83L showed significant enhancement in the association between B. pseudomallei phagosomes and the acidotropic probe Lysotracker as compared to the EGFP-Rab32-T37N groups, respectively and examined whether miR-30b/30c also regulate the phagosome maturation in B. pseudomallei infection. Firstly, we further established the specificity of miR-30b/30c via overexpression of miR-30b/30c mimics or inhibitors in BMDMs. Consistent with previous observations, qRT-PCR and western blot analysis demonstrated that transfection of BMDMs with miR-30b/30c mimics decreased Rab32 mRNA and protein expression and CTSD in BMDMs, when compared to the miR control. Conversely, the colocalization of Lysotracker and CTSD was markedly increased in B. pseudomallei-infected BMDMs transfected with miR-30b/30c inhibitors is crucial for vesicle escape before the bacteria can be degraded [B. 
pseudomallei escape from the Rab32-positive compartments as an alternate fate of this pathogen but not Rab5 and EEA1 (early endosome markers) on its phagosomes. In addition, we found that B. pseudomallei not only specifically recruits Rab32 on bacterial phagosomes but also retains them in a compartment with late endocytic features, positive for LAMP1, LAMP2, and Lysotracker. Given the Rab GTPases has been demonstrated to regulate the fusion of phagosomes with lysosomes. Therefore, retention of Rab32 on B. pseudomallei-containing phagosomes might promote the constitutive fusion of bacterial phagosomes with lysosomes. We observed that the knockdown of Rab32 caused a significant decrease in the association of LAMP1 and Lysotracker with B. pseudomallei-containing phagosomes. For further evaluation of phagosome maturation, we determined the degree of phagosomal acidification and the recruitment of cathepsin D to the phagosome, because both events were critical importance for the antimicrobial activity of macrophages [B. pseudomallei-containing phagosomes to late endosomes to degradative lysosomes is limited.We further investigated the functional significance of Rab32 upregulation and recruitment to the turation . Indeed,rophages , 62. OurB. pseudomallei replication in macrophages, as Rab32 knockdown or overexpression of EGFP-Rab32-T37N (inactive GDP-bound mutant) resulted in increased B. pseudomallei growth. We speculated that this was due to a defect in lysosome fusion, which ultimately disrupted the biogenesis of B. pseudomallei-containing phagolysosomes with complete degradative capacity. Previous studies have shown that intracellular survival of bacteria requires the halt of phagosome-lysosome fusion. Nonetheless, how phagosome-lysosome fusion is regulated in B. pseudomallei infection is still poorly understood. In the present study, our results demonstrate that Rab32 may regulate the delivery of B. 
pseudomallei-containing phagosomes to lysosomes, facilitating phagosome maturation and subsequent bacterial clearance. The acidic and reducing environment of lysosomes is optimal for CTSD activity. Similarly, we also demonstrated that the overexpression of EGFP-Rab32-WT or EGFP-Rab32-Q83L can enhance CTSD activation in macrophages. Altogether, these observations are consistent with the role of Rab32 in increasing the biogenesis of phagolysosomes. We also demonstrated that Rab32 activity is required for inhibiting B. pseudomallei infection. In this study, we found that the association of Lysotracker and CTSD with phagosomes was increased by the inhibition of miR-30b/30c expression, whereas both were reduced by overexpression of miR-30b/30c. Additionally, inhibition of miR-30b/30c expression resulted in a marked increase in the levels of mature lysosomal CTSD. Importantly, this was associated with an effective intracellular growth limitation of B. pseudomallei. Previous studies demonstrated that miR-30 family members play a crucial role in the regulation of autophagy; our results add a role for miR-30b/30c in limiting B. pseudomallei by targeting Rab32 in host innate immune cells. Numerous studies have explored the role of miRNA regulation in the immune response against bacteria. Several deregulated miRNAs in infected host cells, such as miR-146a/b, miR-155, miR-24, miR-4270, miR-27b, miR-17, miR-4458, miR-20a, and miR-144-3p, have been shown to regulate the cell inflammatory response, macrophage polarization, cell death/survival, and autophagy. Our data indicate that B. pseudomallei upregulates the expression of Rab32 in infected macrophages by downregulating the expression of miR-30b/30c. Subsequently, B. pseudomallei resides, at least in the early phase of infection, in a Rab32-positive compartment, and more importantly, Rab32 promotes the fusion of B. pseudomallei-containing phagosomes with lysosomes that likely results in increased exposure of B. 
pseudomallei to lysosomal acid hydrolases, CTSD, and enhances the killing of B. pseudomallei by macrophages. We also demonstrate the previously unrecognized role of miR-30b/30c in modulating phagosome maturation in the host innate immune cells.In conclusion, to the best of our knowledge, this is the novel report of miRNA-mediated Rab32 involved in modulating phagosome maturation, which at least partially exerts its antimicrobial activity by promoting phagosome maturation against All animal experiments were performed in accordance with the Regulations for the Administration of Affairs Concerning Experimental Animals approved by the State Council of People\u2019s Republic of China. All efforts were made to minimize animals' suffering. All studies were approved by the Laboratory Animal Welfare and Ethics Committee of the Third Military Medical University (Permit Number: SYXK-20170002).B. pseudomallei strain used in all experiments is BPC006, a virulent clinical isolate from a melioidosis patient in China[E. coli K12 (29425), S. typhimurium (14028) and Burkholderia thailandensis E264 (700388) were purchased from American Type Culture Collection . Bacteria were grown in Luria-Bertani (LB) broth for 18 h at 37\u00b0C. After washing twice with phosphate buffered saline , the number of bacteria was estimated by measuring the absorbance of the bacterial suspension at 600 nm. In general, an absorbance of 0.33 to 0.35 was equivalent to approximately 108 CFU/ml of viable bacteria. The number of viable bacteria used in infection studies was determined by retrospective plating of serial 10-fold dilutions of the inoculum to LB agar. Live B. pseudomallei was handled under standard laboratory conditions (biosafety containment level 3). For experiments using heat-inactivated B. pseudomallei, bacteria were suspended in PBS, incubated at 70\u00b0C for 20 min and stored at -70\u00b0C until use.Murine macrophages RAW264.7 cell line (Cat. TIB-71) and human embryonic kidney HEK293 (Cat. 
CRL-1573) cell line were obtained from American Type Culture Collection, Manassas, Virginia. RAW264.7 cells were grown in high glucose DMEM medium containing 10% fetal bovine serum without addition of antibiotics. HEK293 cells were routinely cultured in RPMI 1640 medium supplemented with 10% FBS and 100 U/ml penicillin/streptomycin . Primary bone marrow\u2013derived macrophages (BMDMs) were isolated from C57BL/6 mice and cultured in DMEM for 3\u20135 d in the presence of M-CSF . All the above cell lines were cultured at 37\u00b0C in 5% CO2. For all experiments, the in China. And E. B. pseudomallei and rabbit polyclonal anti-B. pseudomallei antibody were obtained from immunized mice and rabbits. All secondary antibody used for immunofluorescence studies conjugated with Alexa Fluor 405, 488 and 647 were purchased from Molecular Probes, All HRP-conjugated secondary antibody were purchased from Jackson ImmunoResearch Laboratories.The EGFP-Rab32 plasmid construct was kindly provided by Dr. Ying Wan . The EGFP-Rab32-T37N and EGFP-Rab32-Q83L mutant were generated by PCR mutagenesis from the EGFP-Rab32 plasmid. The primary antibody used in this work as follows: Mouse anti-Rab32 (sc-390178) and anti-cathepsin D (sc-377299) were purchased from Santa Cruz Biotechnology. Rabbit anti-Rab5 (46449), anti-Rab7 (9367), anti-EEA1 (3288) and anti-\u03b2-actin (4970) antibody were obtained from Cell Signaling Technology. Rat anti-LAMP-1 (25245) and anti-LAMP-2 (13524) were obtained from Abcam. Mouse polyclonal anti-Samples were collected and the cell pellet was lysed in RIPA lysis buffer and protease and phosphatase inhibitor cocktails (Roche) for 10 min at RT and then incubated at 95\u00b0C for 5 min. Protein concentration was determined by BCA Protein Assay according to the instructions of the supplier (Thermo Fisher Scientific). Equal amounts of protein in 1x Laemmli buffer were denatured at 95\u00b0C for 5 min and subjected to standard SDS-PAGE and western blotting. 
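Loading "equal amounts of protein" after a BCA assay implies a small normalization step: dividing the target mass per lane by each lysate's measured concentration. A minimal sketch with invented BCA readings (the function name and values are ours, not the authors'):

```python
def loading_volumes_ul(target_ug, concentrations_ug_per_ul):
    """Volume of each lysate to load so every lane receives target_ug of protein."""
    return {name: round(target_ug / conc, 2)
            for name, conc in concentrations_ug_per_ul.items()}

# Illustrative BCA readings (ug/uL) for three lysates; load 20 ug per lane
bca = {"control": 2.0, "infected_2h": 1.6, "infected_4h": 2.5}
print(loading_volumes_ul(20, bca))
# {'control': 10.0, 'infected_2h': 12.5, 'infected_4h': 8.0}
```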
A commercial protein marker was used for identification of protein size. Membranes were developed using ECL plus on ECL Hyper film , scanned, and evaluated using ImageJ. \u03b2-actin was used as loading control.Total RNA was extracted using TRIzol (Invitrogen Life Technologies) according to the manufacturer\u2019s instruction. RNA quality was assessed by using the Agilent 2100 bioanalyzer (Agilent Technologies), and only samples with RNA integrity number >8 were used. MicroRNA microarray Assay was done using miRbase version 21.0 by LC Sciences . Array experiments were conducted according to the manufacturer\u2019s instructions. Briefly, the miRNAs were labeled with Agilent miRNA labeling reagent (Agilent Technologies). Then, dephosphorylated RNA was linked with pCp-y3 and the labeled RNA was purified and hybridized to the miRNA microarray. Images were scanned with the Agilent array scanner (Agilent Technologies) using a grid file and analyzed with Agilent feature extraction software version 10.10.GeneSpring software V12 (Agilent Technologies) was used for summarization, normalization, and quality control of miRNA microarray data. The miRNA array data were calculated by first subtracting the background value and then normalizing the signals by locally weighted regression. The express levels of miRNAs were designated as statistically significant when the 2-tailed P value was \u22640.05. And signals <500 were interpreted as false-positive result. The statistically significant messenger RNAs were selected based on the fold change and adjusted P value \u22640.05.Spe I, Apa I and Hind III restriction enzyme digestion sites, respectively. All of the sequences are shown in Supplementary information, Luciferase reporter construct was made by cloning mouse Rab32 sequence containing the potential miR-30b/c binding site into pMIR-Report construct . 
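The microarray filtering just described (two-tailed P value <= 0.05, signals < 500 treated as false positives, selection by fold change) can be sketched as a simple filter. The fold-change threshold of 2 is an assumption for illustration, as the text does not state one, and all rows are invented:

```python
def filter_mirnas(rows, p_cutoff=0.05, signal_floor=500, min_fold=2.0):
    """
    Keep miRNAs passing the criteria in the text: P <= p_cutoff,
    signal >= signal_floor (lower signals treated as false positives),
    and at least a min_fold change up or down (assumed threshold).
    Each row: (name, signal, fold_change, p_value).
    """
    kept = []
    for name, signal, fold, p in rows:
        significant = p <= p_cutoff and signal >= signal_floor
        changed = fold >= min_fold or fold <= 1.0 / min_fold
        if significant and changed:
            kept.append(name)
    return kept

rows = [
    ("miR-30b", 1800, 0.40, 0.004),  # downregulated and significant -> kept
    ("miR-30c", 2100, 0.45, 0.010),  # kept
    ("miR-155", 3200, 1.20, 0.300),  # not significant -> dropped
    ("miR-999", 120, 0.30, 0.001),   # signal below floor -> dropped
]
print(filter_mirnas(rows))  # ['miR-30b', 'miR-30c']
```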
The DNA oligonucleotides containing wild-type (WT) or mutant (Mut) 3\u2019UTR of Rab32 were synthesized with flanking qRT-PCR assays for miR-30b and miR-30c were performed by using TaqMan miRNA assays (Ambion) in a Bio-Rad IQ5 . The reactions were performed using the following parameters: 95\u00b0C for 2 min followed by 40 cycles of 95\u00b0C for 15 s and 60\u00b0C for 30 s. U6 small nuclear RNA was used as an endogenous control for data normalization. Relative expression was calculated using the comparative threshold cycle method. Quantitative RT-PCR analyses for the mRNA of Rab32 was performed by using PrimeScript RT-PCR kits (Takara). The mRNA levels of \u03b2-actin were used as an internal control. The primers were shown in Supplementary information, 6 per well in imaging dishes or standard 6-well culture plates for RNA or protein extraction in antibiotic-free DMEM and were incubated overnight. Cells were transfected with Lipofectamine 3000, Opti-MEM , and 50 nM Silencer Select Rab32 siRNA for 24 h. The effects of Rab32 siRNA were compared with those of a nontargeting control siRNA . For plasmid DNA transfections, RAW264.7 macrophages were seeded at 5 \u00d7 105 per well 1 d before the transfection according to the manufacturer\u2019s protocol. Cells were transfected 16\u201320 h before further experiments. All experiments were performed in triplicate.For miRNA transfections, miR-30b and miR-30c mimic, miR-30b and miR-30c inhibitor are obtained from RiboBio . The sequences are as follows: miR-30b mimic, 5\u2032-UGUAAACAUCCUACACUCAGCU-3\u2032 and miR-30c mimic, 5\u2032-UGUAAACAUCCUACACUCUCAGC-3\u2032; miR-30b inhibitor, 5\u2032-AGCUGAGUGUAGGAUGUUUACA-3\u2032 and miR-30c inhibitor, 5\u2032- GCUGAGAGUGUAGGAUGUUUACA-3\u2032. mimic Negative Control, 5\u2032- UUUGUACUACACAAAAGUACUG-3\u2032 and inhibitor Negative Control, 5\u2032- CAGUACUUUUGUGUAGUACAAA-3\u2032. 
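The comparative threshold cycle method mentioned above (with U6 as the endogenous control) is the standard 2^-ddCt calculation. A minimal sketch with invented Ct values, purely to show the arithmetic:

```python
def relative_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """
    Comparative threshold cycle (2^-ddCt) method: target Ct normalized to a
    reference (e.g. U6), then expressed relative to a calibrator sample.
    """
    d_ct_sample = ct_target - ct_ref            # normalize sample to reference gene
    d_ct_calibrator = ct_target_cal - ct_ref_cal  # normalize calibrator likewise
    return 2 ** -(d_ct_sample - d_ct_calibrator)

# Illustrative Ct values (assumed): miR-30b vs U6, infected vs uninfected calibrator
print(relative_expression(26.0, 18.0, 24.0, 18.0))  # 0.25, i.e. 4-fold downregulation
```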
RAW264.7 cells or BMDMs were seeded in 24-well plates and co-transfected with miR-30b or mi-30c mimic (30 nM), inhibitor (30 nM) and NC control oligo (30 nM) using Lipofectamine 3000 according to the manufacturer\u2019s instructions. After 24 h, cells were harvested and the expression levels of Rab32 mRNA or protein were detected by qRT-PCR and Western blotting as described above. For siRNA transfections, RAW264.7 macrophages were seeded at 1\u00d710B. pseudomallei phagosomes. RAW264.7 or BMDMs were incubated with 50 nM Lysotracker-DND99 for 1 h prior to infection. The cells viewed using a laser-scanning confocal microscope .For immunofluorescence studies, Samples were washed with PBS prior to fixation with 4% paraformaldehyde for 10 min. Cells were washed three times with PBS and permeablized with 0.05% Saponin (Sigma), 1% bovine serum albumin in PBS for 10 min at room temperature. Subsequently, samples were incubated in 1% BSA/PBS for 5 min prior to incubation with primary and secondary antibody in 1% BSA/PBS for 1 h. Three washing steps with PBS for 5 min followed each antibody incubation. Finally, the nuclear stain DAPI was applied for 10 min at room temperature. Glass coverslips were mounted on glass slides (Thermo Scientific) using Fluorescent mounting medium (Dako Cytomation). For analysis of association of acidic compartments with RAW264.7 cells were treated as indicated and were fixed in 2.5% glutaraldehyde at 4\u00b0C overnight and postfixed with 2% osmium tetroxide for 1.5 h at room temperature. After fixation, cells were embedded and stained with uranyl acetate/lead citrate. The sections were examined under a transmission electron microscope at 60 kV.B. pseudomallei at an MOI of 10:1. One hour after infection, cells were washed twice with phosphate-buffered saline (PBS), and 2 ml of fresh culture medium containing 250 \u03bcg of kanamycin per ml was added, and the preparation was incubated to kill the extracellular bacteria. 
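Setting up an infection at an MOI of 10:1 combines the OD600-based viable-count estimate from the strain-culture section (OD600 of 0.33 to 0.35 ~ 1e8 CFU/mL) with a simple inoculum-volume calculation. The sketch below is illustrative; the linear OD-to-CFU interpolation and the cell number are assumptions:

```python
def estimate_stock_cfu_per_ml(od600, od_ref=0.34, cfu_ref=1e8):
    """Rough linear estimate anchored at OD600 ~0.34 ~ 1e8 CFU/mL, as in the methods."""
    return od600 / od_ref * cfu_ref

def inoculum_volume_ul(n_cells, moi, stock_cfu_per_ml):
    """Volume of bacterial suspension delivering n_cells * moi CFU."""
    cfu_needed = n_cells * moi
    return cfu_needed / stock_cfu_per_ml * 1000.0  # convert mL to uL

# Illustrative: infect 5e5 macrophages at MOI 10 with a stock at OD600 = 0.34
stock = estimate_stock_cfu_per_ml(0.34)
print(round(inoculum_volume_ul(5e5, 10, stock), 1))  # 50.0 uL
```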
After the indicated time points, cells were washed three times with PBS and lysed with 1 ml of 0.1% Triton X-100 (Sigma) after infection. Diluted cell lysates were plated on Luria broth plates. Colonies were counted after 36 h. Experiments were performed at least three times in triplicates.Bacterial invasion of RAW264.7 cells or BMDMs was investigated by using the method described by Elsinghorst, except for the following modifications . Cells wB. pseudomallei for 0.5 h, washed, and chased for 0.5 h as described and processed for imaging. Confocal images were made from consecutive fields, until 100 transfected cells were imaged. The transfected cell containing B. pseudomallei were counted and divided by the total number of the imaged cells.Macrophages silenced for Rab32 or expressing control siRNA, or transfected with pEGFP, pEGFP-Rab32, pEGFP-Rab32-T37N and pEGFP-Rab32-Q83L were incubated with B. pseudomallei was measured by automated analysis of the mean relative fluorescent marker intensity in a 2-pixel wide ring around bacteria or by counting the percentage of B. pseudomallei associated with a marker. At least 250 or 100 bacteria per biological replicate were analysed B. pseudomallei the automated analysis or manual count respectively. The results are expressed as the mean \u00b1 SD of at least three separate experiments performed in triplicate. The differences between the groups were determined with the SPSS 13.0 software. Student\u2019s t-test was used to analyze the data. The differences were considered significant at P<0.05. Statistically significant differences are indicated by asterisks .All images were analyzed by ImageJ software . Images of the samples were acquired with blinding of the experimental conditions. The association of different markers with S1 FigB. pseudomallei (MOI = 10:1) and imaged at the indicated time points: 1 to 6 h and stained with anti-Rab32 antibody (green), anti-B. pseudomallei antibody (red). 
Images show maximum-intensity projections of confocal Z-stacks. Scale bar is 5 \u03bcm. (B and C) RAW264.7 cells were infected with live or heat-killed (HK) B. pseudomallei, at an MOI of 10 for 4 h. The expression levels of Rab32 were analyzed by qRT-PCR and Western blot. (D and E) Representative images of RAW264.7 cells infected with live or heat-killed B. pseudomallei for 2 h, and stained with anti-Rab32 antibody (green), anti-B. pseudomallei antibody (red) or DAPI (blue). Quantification showing the percentage of the cells containing B. pseudomallei. The average \u00b1 SD is shown for three independent experiments. Scale bar is 10 \u03bcm. ns, no significant difference.(A) RAW264.7 cells were infected with heat-killed (TIF)S2 Fig(A and B) Confirmation of microarray results by qRT-PCR. qRT-PCR analysis of the expression levels of miR-30b, miR-30c, miR-30d, and miR-30e in RAW264.7 cells infected with heat-killed B. pseudomallei (MOI = 10) for 0, 1, 2, 4, 6, and 8 h, or at MOI = 0, 1, 10, 20, 50, and 100 for 4 h. Experiments performed in triplicate showed consistent results.(TIF)S3 Fig(A and B) After transfection with control miRNA, mimic or inhibitor for 24 h, the expression of miR-30b and miR-30c was measured by using TaqMan miRNA assays. Data are representative of at least three independent experiments, * P< 0.05, ** P< 0.01.(TIF)S4 Fig(A-C) RAW264.7 cells were infected with B. pseudomallei, at an MOI of 10 for the indicated time points. Cells were stained with anti-EEA1, anti-Rab5, and anti-Rab7 antibodies (green), or anti-B. pseudomallei antibody (red) and colocalization was determined by confocal microscopy. Scale bar is 5 \u03bcm. (D and E) RAW264.7 cells expressing EGFP-Rab32 were infected with B. pseudomallei for the indicated time points; afterwards, cells were subjected to immunofluorescence for LAMP1 or LAMP2 (red) and stained with an anti-B.
pseudomallei antibody (blue). Scale bar is 5 \u03bcm. (F) RAW264.7 cells expressing EGFP-Rab32 were incubated with 50 nM Lysotracker (red) for 1 h before infection with B. pseudomallei for the indicated time points. Cells were stained with anti-B. pseudomallei antibody (blue) and colocalization was determined by confocal microscopy. Scale bar is 5 \u03bcm. All results are representative of three independent observations.(TIF)S5 Fig(A) RAW264.7 cells were transfected with Rab32 siRNA or control siRNA for 24 h, then infected with B. pseudomallei at an MOI of 10:1 for 1 h. Quantification showing the percentage of the cells containing B. pseudomallei. (B) Quantification of the total percentage of cells containing B. pseudomallei, comparing RAW264.7 cells transfected with pEGFP, pEGFP-Rab32 WT, pEGFP-Rab32 T37N or pEGFP-Rab32 Q83L. The numbers of internalized bacteria were quantified in confocal microscopic images. Approximately 200\u2013300 cells were sequentially sampled for each experiment. The data shown represent the mean value \u00b1 SD based on three independent experiments. ns, no significant difference.(TIF)S6 Fig(A) RAW264.7 cells were transfected with pEGFP, pEGFP-Rab32-WT, pEGFP-Rab32-T37N or pEGFP-Rab32-Q83L, and 24 h later, cells were infected with B. pseudomallei (MOI = 10:1). The infected RAW264.7 cells were stained with anti-B. pseudomallei antibodies (red) and DAPI (blue). Scale bar is 5 \u03bcm. (B) Quantification showing the percentage of association of EGFP-Rab32 with B. pseudomallei-containing phagosomes. Data show mean \u00b1 SD of the percentage of bacteria recovered compared with control cells from two independent experiments. ns, no significant difference.(TIF)S7 Fig(A and B) BMDMs were transiently transfected with miR30b mimic, miR30c mimic, miR30b inhibitor, miR30c inhibitor or control for 24 h.
The mRNA and protein levels of Rab32 were determined by qRT-PCR and Western blot. Data are representative of three independent experiments (** P< 0.01).(TIF)S1 Table(DOCX)S2 Table(DOCX)"} +{"text": "The findings showed that FTIR spectroscopy combined with multivariate analyses has a considerable capability to detect and quantify adulterants in lemon essential oil.Essential oils are high-value natural extracts that are involved in industries such as food, cosmetics, and pharmaceutics. The lemon essential oil (LEO) has high economic importance in the food and beverage industry because of its health-beneficial characteristics and desired flavor properties. LEO, similar to other natural extracts, is prone to being adulterated through economic motivations. Adulteration causes unfair competition between vendors, disruptions in national economies, and crucial risks for consumers worldwide. There is a need for cost-effective, rapid, reliable, robust, and eco-friendly analytical techniques to detect adulterants in essential oils. The current research developed chemometric models for the quantification of three adulterants in cold-pressed LEOs by using hierarchical cluster analysis (HCA), principal component regression (PCR), and partial least squares regression (PLSR) based on FTIR spectra. The cold-pressed LEO was successfully distinguished from adulterants by robust HCA. PLSR and PCR showed high accuracy with high R Essential oils are natural lipidic substances extracted from fruits, vegetables, and spices, and they are used in many sectors throughout the whole world due to their unique pure and characteristic functional properties . Flavor,
Lemon oFrom an economic point of view, lemon essential oil as a high-value natural product has high economic importance worldwide through exports and imports between countries. The price of authentic lemon essential oil is relatively high; thus, authentic lemon essential oil is prone to being adulterated through economic motivations. Adulteration causes unfair competition between vendors, disruptions in national economies, and crucial risks for consumers throughout the world . DetermiThe aim of this research was to detect three different adulterants in cold-pressed lemon essential oil by using FTIR spectroscopy in combination with chemometrics of HCA , PLSR , and PCR . To the best of our knowledge, this study is the first attempt to determine sweet orange oil, BnOH , and IPM (isopropyl myristate) adulteration in a lemon essential oil utilizing the FTIR technique combined with multivariate statistics. The safety assessment of benzyl alcohol and isopropyl myristate was reported in previous publications; isopropyl myristate and benzyl alcohol were reported to be safe as cosmetic ingredients ,19. Howe\u22121. The FTIR spectrometer had an ATR accessory with a diamond crystal. The commercial spectral library of Bruker was used for identity confirmation of used chemicals.FTIR spectra were obtained by using a Bruker Tensor 27 FTIR spectrometer (Bruker-Germany) in the spectral range of 400\u20134000 cmn = 3) and cold-pressed orange essential oils (n = 3) were purchased from reliable (well-known) producer companies in Turkey. Benzyl alcohol (BnOH) and isopropyl myristate (IPM), with purity higher than 99%, were obtained from Zag Industrial chemicals (Turkey). Ethyl alcohol was used for cleaning the diamond ATR crystal. Essential oils were stored at 4 \u00b0C prior to the FTIR analyses.The original cold-pressed lemon essential oils were spiked with BnOH and IPM at the concentrations of 1%, 5%, 10%, 20%, 40%, and 50%.
In total, fifty-four adulterated samples were prepared for FTIR analyses. Samples were stored in amber glass vials (1.5 mL) prior to the spectral measurements. Additionally, an authentic cold-pressed lemon essential oil was purchased from the producer and separately spiked with OEO1, BnOH and IPM at the concentration of 1%, 4%, 8%, 16% and 32% (v/v) to test calibration models.Three different cold-pressed lemon essential oils were coded as LEO\u22121 and 16 scans, respectively. OPUS program Version 7.2 (Bruker Gmbh) was used for instrument control and data acquisition. Each sample was placed on a diamond ATR crystal with the help of a Pasteur pipette. The ATR crystal was cleaned with ethanol (80% v/v) prior to each spectral acquisition. The background air spectrum was scanned before each acquisition.All samples were kept at room temperature (25 \u00b0C) for 30 min prior to the FTIR analyses. An ATR accessory (single bounce) was used in all spectral acquisition. Spectral measurement parameters of resolution and accumulation were selected as 4 cm\u22121, 1528\u20131485 cm\u22121, 1772\u20131701 cm\u22121, and 2878\u20132812 cm\u22121 were selected to discriminate original lemon essential oils from other samples in hierarchical cluster analysis.Original cold-pressed lemon essential oils were discriminated from adulterated samples, orange essential oils, and chemicals on the basis of their FTIR spectra by using chemometrics of hierarchical cluster analysis. HCA analysis was conducted by using the chemometrics software OPUS Version 7.2 . First derivative spectra of all samples were used for HCA through Ward\u2019s algorithm and Euclidean distance. 
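As a rough illustration of the preprocessing feeding the HCA step, the sketch below (plain Python; a simple finite difference stands in for the derivative filter applied in OPUS, and the short absorbance vectors are toy values rather than real FTIR spectra) computes the Euclidean distance between first-derivative spectra, the quantity that Ward's algorithm agglomerates on:

```python
import math

def first_derivative(spectrum, step=1.0):
    """Finite-difference derivative of an absorbance vector.
    (Real workflows typically use a Savitzky-Golay filter.)"""
    return [(spectrum[i + 1] - spectrum[i]) / step
            for i in range(len(spectrum) - 1)]

def euclidean(a, b):
    """Euclidean distance between two (derivative) spectra."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Toy absorbance vectors standing in for two samples' FTIR spectra.
leo = [0.10, 0.30, 0.80, 0.30, 0.10]   # hypothetical lemon-oil spectrum
oeo = [0.10, 0.25, 0.70, 0.35, 0.10]   # hypothetical orange-oil spectrum

d = euclidean(first_derivative(leo), first_derivative(oeo))
```

A full HCA would assemble these pairwise distances into a matrix and merge clusters by Ward's minimum-variance criterion; the distance definition above is the only sample-level ingredient.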
Spectral ranges of 1387\u2013507 cmThe quantities of adulterants in the composition of lemon essential oil were predicted by the employment of the Grams IQ software for the adulteration levels of 1%, 5%, 10%, 20%, 40%, and 50%.\u22121, 560\u2013777 cm\u22121, and 1716\u20131755 cm\u22121 were selected for OEO, BnOH, and IPM, respectively.The first derivative and second derivative spectra were included in the PLSR and PCR multivariate analysis. Cross-validation curves were built at the concentration range between 0% and 100% for each adulterant. Cross-validation curves were built on the basis of selected spectral ranges of FTIR spectra. The spectral range should include information describing the concentration variation of the analyte or other matrix constituents . In the 1, LEO2, and LO3) are presented in \u22121. The vibrational bands around ~2900 cm\u22121, ~1700 cm\u22121, and ~1100 cm\u22121 may include spectral features arising from C-H, C=O, and C-O stretching vibrations of terpenoid components, respectively [\u22121 corresponds to the \u2013CH3 asymmetric and symmetric stretching vibrations [\u22121, 1680 cm\u22121, 1643 cm\u22121, 1437 cm\u22121, 1376 cm\u22121, 1154 cm\u22121, 886 cm\u22121, and 797 cm\u22121 correspond to the C-H stretching vibrations of alkanes, C=O stretching vibrations, C=C stretching vibrations of alkanes, C-H bending vibrations of alkanes, O-H bending vibrations of phenols, C-O stretching vibrations of tertiary alcohols, C-H stretching vibrations of aromatics and C=C bending vibrations of alkanes, respectively [\u22121.Overlapped FTIR spectra of cold-pressed lemon essential oils has a strong capability to obtain clear classification patterns; thus, Ward\u2019s algorithm was used for resolving many challenging adulteration and authenticity problems in previous studies [n = 3) and OEO (n = 3) samples were significantly discriminated from other samples on the dendrogram.
LEO and OEO samples were marked by using a yellow rectangle and a red rectangle, respectively. Although OEO and LEO samples have similar FTIR spectra, a clear classification of these samples was obtained without any false agglomeration. Additionally, chemicals of IPM and BnOH were classified distinctly from essential oils and adulterated samples on the right side of the HCA dendrogram. As can be seen, the highest BnOH and IPM spiked samples (40% and 50%) were classified closer to the IPM and BnOH samples. Overall, it can be concluded from the dendrogram that authentic cold-pressed lemon essential oil samples could be accurately distinguished from cold-pressed orange essential oil, adulterated samples, BnOH, and IPM.Hierarchical cluster analysis (HCA) was performed for discrimination of authentic cold-pressed lemon essential oils from cold-pressed OEOs, adulterated samples , BnOH, and IPM. HCA provides an opportunity for visualization of the hidden relationship between investigated samples by using 2-D plots (dendrograms), which presents a cluster pattern of investigated elements . HCA was studies ,28,29,30Chemometrics of partial least squares regression (PLSR) and principal component regression (PCR) were used for quantification of cold-pressed OEO, BnOH, and IPM in cold-pressed lemon essential oil. Multivariate calibration methods are used to extract interesting information from high throughput analytical data . PLSR an\u22121, 560\u2013777 cm\u22121, and 1716\u20131755 cm\u22121 were selected for the quantification of the adulterants OEO, BnOH, and IPM, respectively. Concentration levels were 0%, 1%, 5%, 10%, 20%, 40% and 50% (v/v) for each adulterant. 
The spectral range should include information describing the concentration variation of the analyte or other matrix constituents [2, LOGPress, SECV , and bias are presented in 2 and SECV values; the model that has the highest R2 values and lowest SECV values has the highest ability to describe the relationship between actual adulterant concentration and predicted adulterant concentration. The SEC is defined as the standard error of calibration and SEC is formulated as the square root of the residual variance divided by the number of degrees of freedom. The SECV value is defined as the standard error of cross-validation (prediction) [PLSR and PCR, robust multivariate techniques, were successfully employed for quantification of various adulterants in complex food-related matrices in terms of evaluation of food authenticity and traceability . PLSR andiction) . In the 2 values were obtained in all cross-validation models. The determination coefficient (R2) changed at the range of 0.9902\u20131 and 0.9906\u20130.9999 in the calibration models and cross-validation models of the raw, first derivative, and second derivative FTIR spectra of samples, respectively. The determination coefficient (R2) normally changes between \u201c0\u201d and \u201c1\u201d. The closeness of the (R2) value to the \u201c1\u201d supports the reliability of the model since (R2) is a statistical measure of how close the data are to the fitted regression line. Additionally, calibration and cross-validation model equations are presented in 2, SECV, and bias values. Results from the current research showed that developed PLSR and PCR models could be effectively used for the prediction of cold-pressed OEO, IPM, and BnOH adulteration in cold-pressed LEOs. Additionally, an authentic cold-pressed lemon essential oil was purchased from the producer and separately spiked with OEO1, BnOH and IPM at concentrations of 1%, 4%, 8%, 16% and 32% (v/v). 
Developed FTIR-PLSR and PCR models (raw spectra) were used for the quantification of adulterants. OEO1 concentrations were determined as 1.15%, 4.18%, 8.22%, 16.15% and 32.25% (v/v) by the PLSR model. BnOH concentrations were determined as 1.08%, 4.10%, 8.08%, 16.12% and 32.05% (v/v) by the PLSR model. IPM concentrations were determined as 1.18%, 4.31%, 8.14%, 16.22% and 32.16% (v/v) by the PLSR model. In addition, adulterant concentrations were quantified as 0% for authentic cold-pressed lemon essential oil by the PLSR model. Quite similar results were obtained by using the PCR models. These results showed the efficiency of FTIR spectroscopy in combination with chemometrics models of PLSR and PCR.Bias could be defined as the systematic error of the calibration or cross-validation and calculated as the average difference between the reference and predicted values . As it c\u22121 and similar spectral properties were observed in the FTIR spectrum of lemon essential oil in previous research [In the current research, PLSR and PCR techniques were successfully employed for quantification OEO, BnOH, and IPM by using a robust vibrational technique, FTIR spectroscopy. To the best of our knowledge, this study is the first attempt for quantification of adulterants in LEO by using FTIR spectroscopy combined chemometrics of PLSR and PCR. Previous studies presented valuable results for the characterization of citrus essential oils such as lemon and orange oil . In the research . Additioresearch . In anotresearch . A previresearch . Techniqresearch . Previouresearch ,42,43,44research . No prevv/v) by using PLSR and PCR. Additionally, HCA was effectively employed for discrimination of LEOs from adulterated samples by the 2-D plots (dendrograms) in which separate clusters and sub-clusters were clearly visualized on the basis of FTIR spectra. 
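The reported agreement between spiked and predicted levels can be checked with the standard definitions. The sketch below (plain Python; `rmse` is used as a simple stand-in for the SEC/SECV-style error, which additionally corrects for degrees of freedom) computes R², error, and bias for the OEO1 spiking levels and the PLSR predictions quoted above:

```python
import math

def regression_metrics(reference, predicted):
    """R^2, root-mean-square error, and bias (mean of predicted
    minus reference), the quantities used to judge PLSR/PCR
    calibration and cross-validation models."""
    n = len(reference)
    mean_ref = sum(reference) / n
    ss_res = sum((p - r) ** 2 for r, p in zip(reference, predicted))
    ss_tot = sum((r - mean_ref) ** 2 for r in reference)
    r2 = 1 - ss_res / ss_tot
    rmse = math.sqrt(ss_res / n)
    bias = sum(p - r for r, p in zip(reference, predicted)) / n
    return r2, rmse, bias

# Spiked OEO1 levels (% v/v) and the PLSR-predicted values reported above.
reference = [1, 4, 8, 16, 32]
predicted = [1.15, 4.18, 8.22, 16.15, 32.25]
r2, rmse, bias = regression_metrics(reference, predicted)
```

For these values the bias works out to 0.19% (v/v) with R² above 0.999, consistent with the qualitative claims in the text.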
PLSR and PCR showed high accuracy with high R2 values (0.99\u20131) and low SECV values (0.58 and 5.21) for cross-validation results of the raw, first derivative, and second derivative FTIR spectra. Essential oils as natural extracts are prone to being adulterated through economic motivations. There is a need for cost-effective, rapid, reliable, robust, and easy-to-operate methodologies to maintain the quality of essential oils and the numerous products in which they are involved. FTIR spectroscopy, in combination with multivariate analyses of PLSR, PCR, and HCA, showed high potential for detection of investigated adulterants and discrimination of natural cold-pressed lemon essential oil. Additionally, integration of the developed methodology into hand-held FTIR spectrometers may help detect fraud in the essential oil industry and the whole supply chain. The findings from the current research may shed light on various adulteration incidents in which essential oils, foods, natural extracts, and high-value products are deteriorated.The current research presented an application of FTIR spectroscopy combined with chemometrics for quantification of adulterants in LEOs. Results showed that adulterants were successfully quantified at the concentration range of 0\u201350% ("} +{"text": "In this study, we investigated a bottleneck effect on an RRV population that may drastically affect the viral population structure. RRV populations were serially passaged under two levels of a bottleneck effect, which exemplified human-to-human transmission. As a result, the genetic diversity and specific growth rate of RRV populations increased under the stronger bottleneck effect, which implied that a bottleneck created a new space in a population for minor mutants originally existing in a hidden layer, which includes minor mutations that cannot be distinguished from a sequencing error.
The results of this study suggest that the genetic drift caused by a bottleneck in human-to-human transmission explains the random appearance of new genetic lineages causing viral outbreaks, which can be expected according to molecular epidemiology using next-generation sequencing in which the viral genetic diversity within a viral population is investigated. RNA viruses form a dynamic distribution of mutant swarms (termed \u201cquasispecies\u201d) due to the accumulation of mutations in the viral genome. The genetic diversity of a viral population is affected by several factors, including a bottleneck effect. Human-to-human transmission exemplifies a bottleneck effect, in that only part of a viral population can reach the next susceptible hosts. In the present study, two lineages of the rhesus rotavirus (RRV) strain of rotavirus A were serially passaged five times at a multiplicity of infection (MOI) of 0.1 or 0.001, and three phenotypes were used to evaluate the impact of a bottleneck effect on the RRV population. The specific growth rate values of lineages passaged under the stronger bottleneck (MOI of 0.001) were higher after five passages. The nucleotide diversity also increased, which indicated that the mutant swarms of the lineages under the stronger bottleneck effect were expanded through the serial passages. The random distribution of synonymous and nonsynonymous substitutions on rotavirus genome segments indicated that almost all mutations were selectively neutral. Simple simulations revealed that the presence of minor mutants could influence the specific growth rate of a population in a mutant frequency-dependent manner. These results indicate a stronger bottleneck effect can create more sequence spaces for minor sequences.
RNA viruses form a dynamically distributed mutant swarm, termed a \u201cquasispecies\u201d , 2, becaInfinite incrementation of genetic diversity within an RNA virus population is restricted by external factors in the environment, such as natural selection (advantageous genes are fixed in the population or deleterious mutations are removed from the population) and bottleneck effect . Such a dS) and nonsynonymous (dN) substitution rates were also calculated to examine the exertion of any selective pressures on RRV genome segments. The evolutionary rate of 11 genome segments was estimated using the BEAST2 software package using rhesus rotavirus (RRV) as a model virus. Rotavirus, a double-stranded RNA (dsRNA) virus with 11 genome segments, exists as a quasispecies , is tran12\u2013 package . Finally6 to 107.5 PFU/ml, and only the 0.1MOI-1_2, 0.1MOI-1_3, and 0.1MOI-1_5 lineages showed significant increases in infectious titer from that of the original population (P\u2009<\u20090.01).
The values of specific growth rates of 0.001MOI_5 populations estimated by the modified Gompertz model were also significantly higher than those of initial populations (P < 0.05).RRV populations were serially passaged five times at different multiplicities of infection . Infectious titers ranged from 10ctively) . (We denctively) . Fewer vctively) and d. Actively) . The 1stAverage coverage per site of 240 out of 242 samples was more than 250, which is a reference value recommended previously , but theP\u2009=\u20090.058), by using analysis of variance (ANOVA). This result indicated that RRV populations passaged repeatedly under a stronger bottleneck expanded the mutant swarm.Using the frequency of SNPs, the nucleotide diversity of each genome segment was calculated as an indicator of the genetic diversity of an RRV population. Nucleotide diversity is defined as the proportion of nucleotide difference observed when two copies of the genome are sampled, and the transitions of nucleotide diversity through serial passages were expressed by the change of bubble plot size . Among tx axis or the abundance of minor sequences increases. Since rank abundance curves were formed in a divergent shape from a rank of 0 to 100, all RRV populations included the mutant swarm for a number of minor sequences, which indicates that the RRV population exists as a quasispecies. The behavior of curves of 0.001MOI_5 populations became similar to each other, and both the rank and the abundance of minor sequences of 0.001MOI_5 populations increased more than those of other populations. These results indicated that minor mutant swarms of 0.001MOI lineages expanded during the serial passages.Nucleotide diversity estimation in x deviation of mean location of 0.001MOI_5 populations was larger than that of other populations . 
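The definition quoted above (the chance that two randomly sampled genome copies differ at a position) can be sketched in its expected-heterozygosity form. This is a minimal illustration in plain Python with toy allele frequencies; the study's exact estimator from the SNP-calling pipeline may apply additional corrections:

```python
def site_diversity(allele_freqs):
    """Expected heterozygosity at one site: the probability that two
    randomly sampled genome copies differ at this position."""
    assert abs(sum(allele_freqs) - 1.0) < 1e-9
    return 1.0 - sum(p * p for p in allele_freqs)

def nucleotide_diversity(snp_table, segment_length):
    """Average pairwise difference per site over a genome segment.
    snp_table maps position -> allele-frequency list; monomorphic
    sites contribute zero and are omitted."""
    return sum(site_diversity(f) for f in snp_table.values()) / segment_length

# Toy example: one site at 50/50 and one at 90/10 in a 100-nt segment.
snps = {42: [0.5, 0.5], 87: [0.9, 0.1]}
pi = nucleotide_diversity(snps, 100)
```

Under this form, a segment whose minor-variant frequencies rise during passaging shows a larger pi, which is the pattern reported for the 0.001MOI lineages.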
SNPs in the entire genomic region and their frequency (converted by base 2 logarithm) were shown as a heatmap of a Circos plot, and the change of color in the heatmap expressed the transition of SNP frequency during serial passages . Unlike dN values were comparable to the dS values. The substitutions of all genome segments occurred randomly and thoroughly, although the spike head (\u223c65- to 224-amino-acid sequence) and antigen domain (\u223c250- to 480-amino-acid sequence) regions of the VP4 genome segment showed high variability.Nonsynonymous substitution causes amino acid replacement, while synonymous substitution just changes the nucleotide sequence. Observed nonsynonymous and synonymous substitutions were plotted as positive and negative values, respectively, in the order of the 5\u2032 to 3\u2032 ends of each genome segment . The numdN) and synonymous (dS) substitution is used as an indicator of the existence of natural selection at a certain region of the genome . In dS and dN of each genome segment were compared, and plots above the dN = dS line indicate the existence of positive selection, while negative selection can be found when plots were below the dN = dS line. Plots on the dN = dS line indicate that the genome segment is affected by genetic drift. The dN and dS of the VP1 genome segment of the 0.001MOI lineages remained the same, although plots of the 0.1MOI lineages became slightly dispersed from the dN = dS line. In NSPs, genome segments in 0.1MOI lineage plots also tended to be dispersed from the dN = dS line, and the dN of NSPs tended to be zero. The dN and dS of VP2, VP3, VP4, VP7, NSP1, NSP2, NSP4 (at cycle 5), and NSP5/6 genome segments of the 0.001MOI lineages showed no difference, but the values of both substitution rates increased during serial passages. 
These results indicated that genetic drift was the main driver for the population structure change in 0.001MOI lineages.The difference between rates of nonsynonymous (\u22124 substitutions/site/cycle). On the other hand, VP1 , VP2 , VP3 , and VP4 genome segments showed \u223c10\u22125 substitutions/site/cycle. Middle-sized genome segments were different between the MOI settings. The evolutionary rate of all genome segments of 0.1MOI lineages were higher than those of 0.001MOI lineages.Evolutionary rate was calculated for every genome segment using BEAST2 software . The NSPThe quasispecies structure of RRV was estimated based on haplotype analysis to confirm the effect of low-frequency subpopulations on the specific growth rate as a population by haplotype frequency, which was regarded as the proportion of the subpopulation including minor mutants. The number of subpopulations and its frequency in initial and cycle 5 populations are displayed in dN and dS of 0.001MOI lineages were close to each other increased during the latter part of serial passages and 2. Tch other . The evoch other , and thech other .dN values of V184A being higher than dS values (A common amino acid replacement VP4:V184A) was present in both initial populations at lower frequency and then became dominant among all lineages during serial passages . This ca4A was prS values and 6 alS values . HoweverA bottleneck effect randomly eliminates genome sequences from a population and promThe concept of sequence space providesAn expansion of mutant swarms appears to lead to an increase in the specific growth rate without minor sequences being dominant in the population. Serial passages are known to change viral phenotypes. For example, the rabies virus adapted to a new environment without the replacement of the master sequence during serial passages . 
The infQuasispecies reconstruction implied that the stronger bottleneck expanded the size of subpopulations (lineages including minor mutants) while smaller populations still remained in 0.1MOI lineages , which sRotavirus outbreaks are still being reported even after the implementation of vaccine programs. Some cases of rotavirus in California were reported despite a vaccination program and hygienic interventions, such as hand washing and disinfection . Zeller l-glutamine, 1% penicillin-streptomycin (GIBCO by Life Technology), and 1.125 g/liter sodium bicarbonate in a T75 flask. Average cell numbers were approximately 5.0\u2009\u00d7\u2009106 cells/T75 flask in this study. We possessed two lineages of RRV (genotype G3P[3]) derived from the same ancestral population, and two descendant populations were used for serial passages, called initial populations.MA104 cell lines were grown in Eagle\u2019s minimal essential medium (MEM) containing 10% fetal bovine serum (FBS), 2\u2009mM 6 to 107 PFU/ml) was diluted with serum-free MEM to adjust the MOI to 0.1 (5.0\u2009\u00d7\u2009105 PFU/ml) or 0.001 (5.0\u2009\u00d7\u2009103 PFU/ml), and 4\u2009\u03bcl of 1\u2009\u03bcg/\u03bcl trypsin from porcine pancreas was added. This mixed suspension, in a 1.5-ml tube, was put in an incubator at 37\u00b0C with 5% CO2 for 30\u2009min. After incubation, the medium in a T75 flask was removed and washed twice with Dulbecco\u2019s phosphate-buffered saline (\u2212) , and then 1\u2009ml of RRV suspension was inoculated onto the confluent MA104 cells. The flask was incubated at 37\u00b0C with 5% CO2 for 60\u2009min. After incubation, 32\u2009ml of serum-free Eagle\u2019s MEM was added to the MA104 cells, and the cells were reincubated at 37\u00b0C with 5% CO2 for 2 or 3\u2009days. The freeze-melt cycle was done three times after incubation. 
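The dilution arithmetic behind the MOI adjustment described above can be sketched as follows (plain Python; the function names are illustrative, while the cell count of ~5.0e6 per T75 flask, the 1-ml inoculum, and the target MOIs of 0.1 and 0.001 come from the text):

```python
def inoculum_titer(target_moi, cell_count, inoculum_volume_ml):
    """Titer (PFU/ml) the stock must be diluted to so that the
    inoculum volume delivers target_moi * cell_count PFU."""
    return target_moi * cell_count / inoculum_volume_ml

def dilution_factor(stock_titer_pfu_ml, target_titer_pfu_ml):
    """Fold-dilution of the stock needed to reach the target titer."""
    return stock_titer_pfu_ml / target_titer_pfu_ml

# ~5.0e6 MA104 cells per T75 flask and a 1-ml inoculum, as in the text.
titer_moi_01 = inoculum_titer(0.1, 5.0e6, 1.0)      # ~5.0e5 PFU/ml
titer_moi_0001 = inoculum_titer(0.001, 5.0e6, 1.0)  # ~5.0e3 PFU/ml
fold = dilution_factor(1.0e7, titer_moi_0001)       # for a 1.0e7 PFU/ml stock
```

These targets match the 5.0e5 and 5.0e3 PFU/ml figures given for the two MOI settings.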
The suspension that had been moved into a 50-ml tube from the flask was centrifuged at 12,600\u2009\u00d7\u2009g for 10\u2009min at 4\u00b0C and filtered with a 0.2-\u03bcm filter to remove the cell fractions. The collected RRV suspension was inoculated into the new MA104 cells again. The RRV population was passaged five times , and this series of experiments was conducted by using two different initial populations (the 1st and 2nd lineages). We denote the RRV populations obtained from the 1st and 2nd serial passages at an MOI of 0.1 as 0.1MOI-1 and 0.1MOI-2 and those at an MOI of 0.001 as 0.001MOI-1 and 0.001MOI-2. The number following each population code name is the passage number, e.g., 0.1MOI-1_5 is the 1st population lineage passaged five times at an MOI of 0.1.The RRV suspension and then overlaid with 2.5% agar, including 2% FBS, 2% penicillin-streptomycin, 4\u2009mM l-glutamine, 2.25 g/liter NaHCO3, and 4\u2009\u03bcg/ml trypsin from porcine pancreas. The 6-well plates were incubated for 2\u2009days and then dyed with 0.015% neutral red for 3\u2009h. After 1 or 2\u2009days, the plaque numbers were counted.The infectious titer (PFU/ml) of each passage was measured in triplicate by plaque assay. According to our previous report , seriallEx Taq and a TaqMan probe in an Applied Biosystems 7500 real-time PCR system. The sequences of forward and reverse primers targeted to the NSP3 region were suggested by Pang et al. (Gt) of virus particles attached to cell surfaces were compared to those inoculated into cells (G0), and then binding efficiency was calculated as log(Gt/G0).A cell binding assay was conducted in triplicate according to the steps outlined in previous reports , 27. MA1CCCC-3\u2032) . The seqn\u2009=\u20093).RRV growth curves of each population were confirmed to obtain the specific growth rate.
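The growth-rate estimation can be illustrated with the Zwietering-type modified Gompertz form, which is consistent with the parameters named in the surrounding Methods (asymptote A, specific growth rate mu, Napier's constant e, lag period lambda). In this sketch, lambda = 6.0 h and A = 3.5 are the values the text fixes for its simulations, while mu = 0.4/h is an arbitrary illustration:

```python
import math

def gompertz_log_growth(t, A, mu, lam):
    """Modified Gompertz curve for log(Nt/N0):
    A   - asymptote, log(Ninf/N0)
    mu  - specific growth rate (the maximum slope, per hour)
    lam - lag period (hours)"""
    return A * math.exp(-math.exp(mu * math.e / A * (lam - t) + 1.0))

# lambda = 6.0 h and A = 3.5 follow the text; mu is an arbitrary example.
A, mu, lam = 3.5, 0.4, 6.0
curve = [gompertz_log_growth(t, A, mu, lam) for t in range(0, 49, 6)]
```

Fitting amounts to choosing A, mu, and lam that minimize the squared distance between this curve and the observed log(Nt/N0) values, as the Methods describe; by construction, mu equals the slope at the curve's inflection point.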
By following the recommendations from our previous report , MA104 ctN)], and the slope at the logarithmic phase (from 6 to 24\u2009h in this study) shows the virus growth rate; log(Nt/N0) is the logarithmic ratio of the virus infectious titer (PFU/ml) at time t to that before inoculation, A is the asymptotic value [log(N\u221e/N0)], \u03bc is the specific growth rate (per hour), e is Napier\u2019s constant, and \u03bb is the lag period (hours). These parameters were determined by the least-squares method, which minimized the distance between observed and simulated log(Nt/N0) at each sampling point.The specific growth rate was estimated in two ways: by calculating the slope of one-step growth curves and by applying the modified Gompertz model. The one-step growth curve has been used to check virus growth by connecting observed log(PFU/ml) of virus at each sampling point . The transition of total population sizes under each scenario was used to estimate the specific growth rate of a total population based on the modified Gompertz model under the assumption that the lag period (\u03bb = 6.0 h) and the asymptote (A\u2009=\u20093.5 [log(N\u221e/N0)]) were identical for all strains.We artificially reconstructed the quasispecies by TenSQR software , which e. A Student\u2019s t test was performed for infectious titer, cell binding ability, and specific growth rate between the original and the cycle 5 populations. ANOVA was conducted to confirm whether a stronger bottleneck affected nucleotide diversity. All statistical tests were performed using R software 3.5.0 (https://www.r-project.org/). Rotavirus sequence data from MiSeq were deposited in the DDBJ database under accession numbers DRA006847 and DRA008653."} +{"text": "Electrochemical testing was assessed by corrosion potential (Ecorr) according to the ASTM C-876-15 standard and a linear polarization resistance (LPR) technique following ASTM G59-14. The compressive strength of the fully substituted GC decreased 51.5% compared to the control sample.
Novel green concrete (GC) admixtures containing 50% and 100% recycled coarse aggregate (RCA) were manufactured according to the ACI 211.1 standard. The GC samples were reinforced with AISI 1018 carbon steel and AISI 304 stainless steel. Concrete samples were exposed to 3.5 wt.% Na2SO4 solution. Improved corrosion behavior was found for the specimens reinforced with AISI 304 SS; the corrosion current density (icorr) values of the fully substituted GC were found to be 0.01894 \u00b5A/cm2 after Day 364, a value associated with negligible corrosion. The 50% RCA specimen shows good corrosion behavior as well as a reduced environmental impact. Despite its lower mechanical properties, less dense concrete matrix and higher permeability, RCA green concrete presents improved corrosion behavior, making it a promising alternative to the more polluting conventional aggregates.
Traditionally, the world\u2019s most widely used building material is hydraulic concrete that, when combined with AISI 1018 carbon steel (CS) rebars, forms a system known as reinforced concrete. Reinforced concrete structures are known for their long service life and low maintenance requirements. However, due to the corrosion of the steel reinforcement, billions of dollars are spent by each country on the repair and maintenance of bridges, tunnels, roads and docks, among others [3,4,5]. Presently, the use of ordinary Portland cement (OPC) is responsible for 10% of global CO2 emissions, a value that could increase up to 15% in the near future. Furthermore, the recycling of concrete is considered a key process in current sustainable development trends, because concrete is widely used as a construction material and its manufacturing consumes a large amount of nonrenewable natural resources: aggregates (80%), OPC (10%), SCM (3%) and water (7%).
The natural aggregates (NA) used in the manufacturing of concrete are inert granular materials such as sand, gravel, or crushed stone. Gravel and natural sand are generally obtained from a well, river, lake, or seabed . CurrentFor the aforementioned reasons, recycled coarse aggregate (RCA) as a replacement for natural coarse aggregate (NCA), in addition to replacing OPC by 20% with SCBA, represents a substantial reduction in the environmental impact of concrete manufacturing . This to2 emissions substantially [The aim of this work was to study the effect of the substitution of NCA by the environmentally friendly RCA on the GC embedding AISI 1018 carbon steel (CS) and AISI 304 SS rebars. This GC was also partially substituted with SCBA to further decrease the environmental impact of the traditional OPC concrete. Furthermore, the mechanical strength of the new GC was investigated to describe its future real-world applications. Five different concrete mixtures were prepared according to the ACI 211.1 standard , two reiantially .3) [Three different concrete mixtures were made: a conventional concrete control mixture (MC) made with 100% OPC following the standard for Portland blended cement , natural3) , absorpt3) , maximum3) . Figure The design of concrete mixtures for MC and GC created according to the standard ACI 211.1 . This st 214R-11 ). Table For the evaluation of the physical properties of fresh-state concrete mixtures, tests of slump , freshlyTo determine the mechanical strength 2 present in the RCA experience a pozzolanic reaction that increases the rate of concrete strength development over time [2 is also present in the SCBA according to previous results [The compressive strength decreased as the content of recycled coarse aggregate (RCA) present in GC increased. The GC mix with 50% RCA and 20% SCBA was substituted for the cement CPC 30R (M50) and showed a compressive strength of 11.54 MPa at 28 days. 
This represents a decrease of 42% with respect to the MC, and a decrease of 51.5% for GC with 100% RCA and 20% SCBA replacing cement CPC 30R, reporting a compressive strength of only 9.66 MPa at an age of 28 days. The decrease in compressive strength in GC mixes is related to the incorporation of RCA. This behavior agrees with that reported in various investigations. Ali et al. found in their investigation of glass fibers incorporated in concrete with RCA that when RCA completely replaces NCA, it reduces the compressive strength, split tensile strength and flexure strength by about 12%, 11% and 8%, respectively . Kurda ever time . The SiO results , thus be results . HoweverThe MC and the two mixtures of GC (M50 and M100) were made with a water/cement ratio of 0.65. The specimens were prisms with dimensions of 15 \u00d7 15 \u00d7 15 cm. In all the specimens, AISI 304 SS and AISI 1018 CS rebars were embedded with a length of 15 cm and a diameter of 9.5 mm; the AISI 304 SS and AISI 1018 CS rebars were cleaned to remove any impurities . In addi2SO4 solution for 364 days, simulating a sulfate aggressive medium such as contaminated soils, marine and industrial environments [2SO4 solution.The specimens were manufactured in accordance with the standard ASTM C 192 and the ronments ,79. The 2SO4 solution is shown in MC, M50 and M100 indicate the concrete mixture ;W indicates exposed DI-water (control medium);2SO4 solution (aggressive medium);S indicate exposed to 3.5 wt.% Na18 for rebars of AISI 1018 CS;304 for rebars of AISI 304 SS.The nomenclature used for the electrochemical monitoring of AISI 304 SS and AISI 1018 CS embedded in the MC and the two GC (M50 and M100) exposed in a control medium (DI-water) and 3.5 wt.% Na2SO4 solution, for a period of 364 days. The corrosion behavior was characterized by corrosion potential (Ecorr) and corrosion current density (icorr) measurements. 
The electrochemical cell setup used was AISI 304 SS or AISI 1018 CS rebars with a diameter of 9.5 mm for working electrodes (WE). AISI 314 SS rebars were used as counter electrodes technique. The sweep potential range was \u00b120 mV with respect to the Ecorr and the sweep rate was 10 mV/min according to standard ASTM-G59 [Ecorr and icorr were monitored every four weeks and all experimental measurements were performed in triplicate.MC and GC specimens were exposed to two different media, the control medium (DI-water) and 3.5 wt.% Na CE; see and stanASTM-G59 . ElectroASTM-G59 ,82. All icorr and the corrosion rate (vcorr) were estimated from the LPR technique using the Stern and Geary relation (see Equation (1)) [B is the proportionality constant equal to 26 and 52 mV/dec for active and passive corrosion state rebars, respectively, and Rp is the polarization resistance [The ion (1)) :(1)icorrsistance ,85.Ecorr was used to assess the degree of deterioration of reinforced concrete specimens according to ASTM C-876-15 [Ecorr values with the probability of corrosion for embedded steel specimens made with MC and GC and interpretation of the corrosion state were performed using the criteria presented in Half-cell potential monitoring , M50-W-304 (GC with 50% RCA and 80% CPC-20% SCAB) and M100-W-304 (GC with 100% RCA and 80% CPC-20% SCAB) specimens were reinforced with AISI 304 SS steel rebars. The Day 196 ,89, presCSE on Day 7 to \u221295 mVCSE on Day 28, presenting a small activation on Day 56 with an Ecorr value of \u2212143 mVCSE. From this point to the present, a stage of stability in the Ecorr values is observed from Days 84 to 364, in the range of \u221290 and \u2212120 mVCSE, interpreted according to the ASTM C-876-15 as a 10% corrosion risk. The M100-W-304 specimen, presented a similar behavior to the two MC-W-304 and M50-W-304 specimens in the curing stage, showing an Ecorr value of \u2212183 mVCSE on Day 7 and \u221297 mVCSE on Day 28. 
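Equation (1) above, with the stated proportionality constants (B = 26 mV/dec for active and 52 mV/dec for passive rebar), can be sketched as follows. The conversion of icorr to a corrosion rate uses the standard Faraday's-law factor for carbon steel (about 0.0116 mm/year per \u00b5A/cm2, per ASTM G102); that factor is an added assumption, not spelled out in the text.

```python
def stern_geary_icorr(rp_ohm_cm2, passive=False):
    """Equation (1): i_corr = B / Rp. B = 26 mV/dec for actively
    corroding rebar and 52 mV/dec for passive rebar; Rp in ohm*cm^2.
    Returns i_corr in uA/cm^2."""
    b_volts = 0.052 if passive else 0.026
    return b_volts / rp_ohm_cm2 * 1e6  # A/cm^2 -> uA/cm^2

def vcorr_mm_per_year(icorr_ua_cm2):
    """Corrosion rate via Faraday's law for carbon steel; the
    ~0.0116 mm/year per uA/cm^2 factor (ASTM G102) is an added
    assumption, not stated in the text."""
    return 0.0116 * icorr_ua_cm2

# Hypothetical LPR reading (Rp) on an actively corroding rebar
i = stern_geary_icorr(2.6e6)
print(i)                      # ~0.01 uA/cm^2
print(vcorr_mm_per_year(i))   # ~1.2e-4 mm/year
```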
From this point on, the Ecorr values become \u221282 and \u2212124 mVCSE until the end of the testing, indicating according to the ASTM C-876-15 as a 10% corrosion risk. The behavior in the Ecorr values is less than \u2212200 and congruent with the nonaggressive medium of exposure, which is also interpreted as the passivity of the AISI 304 SS steel used as reinforcement in GC and MC.The M50-W-304 specimen behaves similarly to the control MC-W-304, with corrosion potentials in the curing stage ranging from \u2212218 mVEcorr monitoring of the specimens when exposed for 364 days to 3.5 wt.% Na2SO4 solution (aggressive medium). The evaluated specimens were MC-S-18 , M50-S-18 (GC with 50% RCA and 80% CPC-20% SCAB) and M100-S-18 (GC with 100% RCA and 80% CPC-20% SCAB). The MC-S-18 specimen in the curing stage presented an Ecorr value of \u2212217 mVCSE on Day 7 and \u2212180 mVCSE for Day 28. These Ecorr values indicate, according to the ASTM C-876-15, a 10% corrosion risk. Later, the specimen presents Ecorr values in the range from \u2212173 to \u2212159 mVCSE after Day 112, from this point to the present, an activation occurs with Ecorr values from \u2212203 to \u2212256 mVCSE from Day 140 to 224, which would indicate intermediate corrosion risk according to ASTM C-876-15. For Days 252 and 280, Ecorr values are lower than \u2212200 mVCSE, which would be associated with a passivity stage or a 10% corrosion risk; however, after Day 280, there is a trend towards more negative values of \u2212200 mVCSE, reaching \u2212239 mVCSE on the last day of monitoring. The M50-S-18 specimen presents more negative values of Ecorr in the curing stage than those presented by the control MC-S-18 specimen, with an Ecorr value of \u2212261 mVCSE on Day 7 and \u2212218 mVCSE for Day 28, showing from Days 56 to 140 Ecorr values that ranged between \u2212189 and \u2212243 mVCSE. 
Then, the specimen shows a decreasing trend towards more negative values until the end of the testing, reaching \u2212284 mVCSE. From Day 140 until the end of monitoring (Day 364), the Ecorr values for the M50-S-18 specimen exposed to the 3.5 wt.% Na2SO4 solution (aggressive medium) indicate an intermediate corrosion risk according to ASTM C-876-15.
The specimen that presented the worst performance when exposed to the 3.5 wt.% Na2SO4 solution (aggressive medium) was M100-S-18, showing a tendency towards lower Ecorr values, with an Ecorr value of \u2212193 mVCSE on Day 7 of the curing stage and \u2212233 mVCSE for Day 28, and continuing with the negative trend throughout the entire exposure period, reaching a potential of \u2212348 mVCSE on Day 336 and ending on Day 364 with a corrosion potential of \u2212369 mVCSE. This indicates a >90% corrosion risk according to the ASTM C-876-15 standard. This behavior of more negative corrosion potentials (Ecorr) coincides with that reported in other investigations evaluating AISI 1018 steel in sustainable concrete made with SCBA and exposed to sulfates.
In the 3.5 wt.% Na2SO4 solution (aggressive medium), the MC-S-304 specimen presented an Ecorr value of \u2212157 mVCSE on Day 7 of the curing stage and \u2212202 mVCSE for Day 28; from this point, the specimen presents a trend towards higher Ecorr values, related to the passivity of AISI 304 SS steel, reaching its least negative Ecorr of \u221292 mVCSE on Day 224 of exposure. Then, the specimen showed Ecorr values in the range from \u2212108 to \u2212138 mVCSE until the end of the monitoring period. All the Ecorr values of the MC-S-304 specimen during the entire period of exposure to the aggressive medium were less negative than \u2212200 mVCSE, thus indicating a 10% corrosion risk according to ASTM C-876-15. The M50-S-304 specimen presented a behavior similar to MC-S-304, with corrosion potentials in the curing stage showing a decreasing trend.
The M50-S-304 specimen displays an Ecorr value of \u2212178 mVCSE on Day 7 and \u2212213 mVCSE for Day 28, then increases and become more passive to \u2212138 mVCSE by Day 168 and remains stable in the range of \u2212135 and \u2212149 mVCSE until the final measurement, maintaining Ecorr values below \u2212200 mVCSE throughout the exposure period, thus indicating, according to ASTM C-876-15, a 10% corrosion risk. Finally, the M100-S-304 specimen presents a similar behavior to the two previous specimens in the curing stage, with corrosion potentials ranging from less to more negative, with an Ecorr value of \u2212151 mVCSE on Day 7 and \u2212247 mVCSE on Day 28. Unlike the MC-S-304 and M50-S-304 specimens, the M100-S-304 specimen presents Ecorr values less than \u2212200 mVCSE until Day 112, which would indicate intermediate corrosion risk according to the ASTM C-876-15. Thereafter, the specimen shows a trend towards higher Ecorr values, reaching an Ecorr value of \u2212110 mVCSE for Day 224 and remaining stable in the range between \u2212136 and \u2212113 mVCSE until the end of the testing. Like the previous specimens, the M100-S-304 specimen presented Ecorr values less than \u2212200 mVCSE during almost the entire exposure time to 3.5 wt.% Na2SO4 solution (aggressive medium), which indicates a 10% corrosion risk according to ASTM C-876-15. 
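The repeated ASTM C-876-15 interpretations in these paragraphs reduce to two half-cell potential thresholds versus the copper/copper-sulfate electrode (CSE). A minimal sketch of that mapping, assuming the standard's usual -200 and -350 mV breakpoints:

```python
def astm_c876_risk(ecorr_mv_cse):
    """Probability-of-corrosion band from half-cell potential
    (mV vs Cu/CuSO4 reference electrode, CSE) per ASTM C-876."""
    if ecorr_mv_cse > -200.0:
        return "low risk (<10% probability of corrosion)"
    if ecorr_mv_cse >= -350.0:
        return "intermediate (uncertain) risk"
    return "high risk (>90% probability of corrosion)"

# Potentials reported in the text
print(astm_c876_risk(-120.0))   # low-risk band
print(astm_c876_risk(-256.0))   # intermediate band
print(astm_c876_risk(-369.0))   # high-risk band
```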
The previous results agree with those reported in the literature, where the excellent corrosion resistance of stainless steel grades AISI 304, AISI 316, etc., has been demonstrated when used as reinforcement in conventional concrete, sustainable concrete, green concrete, and when exposed to aggressive environments such as marine, sulfated and industrial environments [The specimens with AISI 304 SS steel were MC-S-304 , M50-S-304 (GC with 50% RCA and 80% CPC-20% SCAB) and M100-S-304 (GC with 100% RCA and 80% CPC-20% SCAB), exposed for 364 days to 3.5 wt.% Naronments ,92.corr results of the AISI 304 SS and AISI 1018 CS reinforcement in MC and both GC mixtures (M50 and M100) exposed to control medium (DI-water) and 3.5 wt.% Na2SO4 solution were interpreted according to the criterion of The iicorr results of the conventional concrete and GC specimens reinforced with AISI 1018 CS and AISI 304 SS steel exposed in water as a control medium. The MC-W-18 specimen presents an icorr value of 0.67 \u00b5A/cm2 for Day 7 of the curing stage, decreasing on Day 28 to a value of 0.21 \u00b5A/cm2. For Day 56, a passive icorr value of 0.095 \u00b5A/cm2 was observed, and subsequently, values remained less than 0.091 \u00b5A/cm2 until the end of monitoring in the range of 0.09 to 0.05 \u00b5A/cm2. The icorr values obtained from the MC-W-18 specimen indicate passivation of the reinforcing steel and, according to icorr values from the curing stage, presenting on Day 7 an icorr value of 0.58 \u00b5A/cm2 and 0.29 \u00b5A/cm2 for Day 28. From Day 56 to the end of monitoring, icorr values were below 0.1 \u00b5A/cm2 in the range of 0.07 to 0.04 \u00b5A/cm2, indicating a negligible level of corrosion. The M100-W-18 specimen had a similar behavior to the two previous specimens with an icorr on Day 7 of 0.64 to 0.26 \u00b5A/cm2 for Day 28 and presenting an icorr value of 0.067 \u00b5A/cm2 until Day 140. 
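The icorr interpretation used from here on follows the durability ranges implied by the text (negligible below 0.1 \u00b5A/cm2, low up to 0.5, moderate up to 1.0, high above). A small classifier, sketched with those assumed breakpoints, makes the repeated comparisons explicit:

```python
def icorr_level(icorr_ua_cm2):
    """Corrosion level from corrosion current density (uA/cm^2),
    using the ranges applied in the text."""
    if icorr_ua_cm2 < 0.1:
        return "negligible (passive)"
    if icorr_ua_cm2 < 0.5:
        return "low"
    if icorr_ua_cm2 < 1.0:
        return "moderate"
    return "high"

# Values reported in the text
print(icorr_level(0.01894))  # negligible (passive)
print(icorr_level(0.214))    # low
print(icorr_level(0.519))    # moderate
```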
From Day 168 until the end of monitoring, icorr values were in the range of 0.144 to 0.214 \u00b5A/cm2, indicating a low level of corrosion according to CSE, indicating corrosion uncertainty according to ASTM C-876-15. With the LPR test, the icorr could be determined, confirming the activation of the system with the presence of a low level of corrosion from Day 196 for the M100-W-18 specimen in a nonaggressive environment. The corrosion present in the M100-W-18 specimen exposed to a nonaggressive medium is related to the less dense and more permeable matrix of green concrete (M100), as indicated by the low compressive strength at 28 days with icorr of the other two specimens, MC-W-18 and M50-W-18, indicated a negligible level of corrosion (passivity).th tests . The behicorr value of 0.0043 \u00b5A/cm2 on Day 7 with a trend towards more passive values, presenting an icorr value of 0.0031 \u00b5A/cm2 on Day 28. A trend to lower icorr values is observed until Day 224 with an icorr value of 0.0018 \u00b5A/cm2. Then, the specimen exhibits a small increase of icorr to 0.0028 \u00b5A/cm2 for Day 252 and from icorr values of 0.0021 \u00b5A/cm2 on Day 280 to 0.0023 \u00b5A/cm2 for the last monitoring on Day 364. All icorr values of the MC-W-304 specimen indicate a negligible or null corrosion level according to that indicated in icorr values, followed by the M50-W-304 specimen, which presented icorr values of 0.0085 \u00b5A/cm2 on Day 7 to 0.0041 \u00b5A/cm2 for Day 28, then continues with a decrease in icorr until Day 168 with a value of 0.0023 \u00b5A/cm2. Subsequently, the icorr increases from 0.0026 to 0.0032 \u00b5A/cm2 from Days 196 to 364, respectively. Finally, the M100-W-304 specimen (100% RCA and 20% SCBA) presented the highest icorr values, presenting an icorr value of 0.0045 \u00b5A/cm2 on Day 28, decreasing to 0.0024 \u00b5A/cm2 on Day 168. 
Following, icorr increases from 0.0027 \u00b5A/cm2 on Day 196 to a value of 0.0040 \u00b5A/cm2 for the last day of monitoring, Day 364. A clear difference is observed in the icorr values presented by the three studied specimens, the lowest icorr values are shown for the MC-W-304 specimen, followed by the M50-W-304 specimen, and finally the M100-W-304 specimen, the icorr range of the three specimens is more than 10 times less than 0.1 \u00b5A/cm2, which indicates that all the specimens present a negligible level of corrosion throughout the period of exposure to the control medium according to The MC-W-304 specimen in the curing stage showed an vcorr and icorr results of the specimens with AISI 304 SS and AISI 1018 CS steel bars embedded in MC and GC exposed to 3.5 wt.% Na2SO4 solution (aggressive medium) for a period of 364 days. The vcorr and icorr of the control specimen, MC-S-18, decreased from an icorr value of 0.2435 \u00b5A/cm2 on Day 7 to an icorr value of 0.1144 \u00b5A/cm2 for Day 28. This behavior is attributed to being in the curing stage where the icorr values tend to decrease due to the formation of the passive layer and the increase in the protection of the concrete. The icorr values decrease until Day 140 of exposure with a value of 0.0729 \u00b5A/cm2, indicating a negligible level of corrosion or passivity according to icorr values greater than 0.1 \u00b5A/cm2 on Day 196 with an icorr value of 0.1656 \u00b5A/cm2 and reaching 0.2148 \u00b5A/cm2 at the end of monitoring. This indicates that, as of Day 196, the MC-S-18 specimen presented corrosion at a low level due to the exposure to sodium sulfate solution as an aggressive medium. In the case of the M50-S-18 specimen, the curing stage showed decreasing icorr values, reporting 0.3375 \u00b5A/cm2 on Day 7 to 0.1844 \u00b5A/cm2 for Day 28. This trend continued to decrease until Day 56, reaching an icorr value of 0.1506 \u00b5A/cm2. 
However, after Day 84, the icorr values begin to increase, becoming more active due to exposure to the aggressive environment and a decreased matrix density and increased permeability because it contains 50% of RCA. The values increase to 0.2779 \u00b5A/cm2 and remain stable in an icorr range of 0.2419 and 0.3386 \u00b5A/cm2 until the end of monitoring. From Day 84, the M50-S-18 specimen presents icorr values that indicate a low level of corrosion according to corri values in the curing stage, displays an icorr value of 0.4175 \u00b5A/cm2 on Day 7 and 0.2482 \u00b5A/cm2 for Day 28. For Day 86, the activation of the system with an increase in its icorr is shown, reaching a value of 0.3417 \u00b5A/cm2. On Day 140, an icorr value of 0.519 \u00b5A/cm2 indicates a moderate level of corrosion according to icorr increases for the M100-S-18 specimen continued irregularly from Day 168 to 308, ending on Day 364 with an icorr value of 0.7389 \u00b5A/cm2. The influence of the 100% RCA in the specimen is observed, influencing the mechanical properties and durability of GC due to a more permeable concrete matrix, lower density and a low resistance to compression compared to the control concrete . However, the use of mineral admixture resulted in a decrease in the charge passed through the concrete specimens [pecimens .2SO4 solution (aggressive medium), reporting icorr values in the curing stage of 0.0047 \u00b5A/cm2 on Day 7 to reach an icorr value of 0.0034 \u00b5A/cm2 on Day 28, observing a decrease associated with the increase in concrete protection due to the hydration process of said stage. The decrease in the corrosion rate occurs until Day 56, when the MC-S-304 specimen reports a minimum icorr of 0.0028 \u00b5A/cm2, from this point, the values stabilize in the range between 0.0039 and 0.0047 \u00b5A/cm2 between Days 112 and 196 of exposure the aggressive medium. 
Subsequently, the icorr increases gradually from 0.0054 \u00b5A/cm2 on Day 224 to the highest value in the entire exposure period at the end of monitoring, Day 364, with an icorr value of 0.0106 \u00b5A/cm2. As indicated previously, its performance was excellent in the presence of sodium sulfates, with icorr values well below 0.1 \u00b5A/cm2, which is the limit that would indicate the onset of corrosion according to The MC-S-304 specimen presents the best performance against corrosion when exposed for 364 days to 3.5 wt.% Naicorr values in the curing stage ranging from 0.0080 and 0.0031 \u00b5A/cm2 from Days 7 to 28, respectively. Day 56 shows an icorr value of 0.0032 \u00b5A/cm2, an increase in icorr from Day 56 to 196, with constant increases from Days 56 to 112 going from an icorr value of 0.0032 and 0.0052 \u00b5A/cm2, from there to stabilize and oscillate in the range of 0.0058 and 0.0061 \u00b5A/cm2. From Day 140 to 196, there is a constant increase until the end of the monitoring period, from an icorr value of 0.0077 \u00b5A/cm2 on Day 224 to 0.1321 \u00b5A/cm2 for the Day 364. Like the MC-S-304 specimen, the icorr values are much lower than 0.1 \u00b5A/cm2, which indicates that its corrosion level is negligible, or passivity occurs, according to the provisions of icorr values for AISI 304 SS during the curing period were 0.0071 and 0.0047 \u00b5A/cm2 on Days 7 and 28, respectively, during the curing stage. Next, the icorr increases from 0.0041 to 0.0098 \u00b5A/cm2 for Days 56 to 168, respectively. A second period of increase occurs from Days 196 to 280, from an icorr value of 0.00989 to 0.1143 \u00b5A/cm2. Finally, the third period with near-constant icorr of 0.01346 \u00b5A/cm2 on Day 308 to icorr of 0.01894 \u00b5A/cm2 on Day 364. 
The icorr values during all the periods of exposure showed values less than 0.1 \u00b5A/cm2, which indicates an excellent performance against sulfate corrosion for the M100-S-304 specimen with 100% of RCA and 20% of SCBA.In the case of the M50-S-304 specimen, it has a much higher anticorrosive efficiency than that presented by the specimen reinforced with AISI 1018 CS steel (M50-S-18). The M50-S-304 specimen presents icorr in both AISI 1018 CS and AISI 304 SS steels. This behavior is the opposite of the reported behavior in another research, where it was found that the influence on the performance against most usual corrosion processes displayed similar results under a natural chloride attack [The corrosion resistance was not influenced by the high permeability, low density and low mechanical resistance of the GC with which the M100-S-304 specimen was made. By data fitting, the durability properties generally decrease linearly with the increase of RCA replacement and the average water absorption rate . The cone attack . Therefoe attack ,109 thatAccording to the results from the study, the following conclusions were reached:GC samples showed a significant decrease in the slump in their fresh state, GC-M50 with a slump of 3 cm and GC-M100 with a slump of 2 cm, decreasing their workability compared to conventional concrete (MC) which presented a slump of 10 cm.The compressive strength shows a decreasing trend as the content of RCA present in GC increases. The GC-M50 mix with 50% RCA and 20% SCBA must be substituted for the CPC 30R. A compressive strength of 11.54 MPa was observed at 28 days, which represents a decrease of 42% with respect to the MC. A decrease of 51.5% for GC with 100% RCA and 20% SCBA replacing CPC 30R. 
A compressive strength of only 9.66 MPa was seen for Day 28.The results obtained in the present investigation indicate a direct influence between the percentage of aggregate used in the GC mixes and the level of corrosion that all the specimens present in both the control medium and the aggressive medium, the higher the content of RCA, the higher the corrosion rate in both CS 1018 and AISI 304 SS reinforcements.icorr values of the GC specimens reinforced with AISI 304 SS exposed to Na2SO4 were found to be 0.01894 \u00b5A/cm2 on Day 364, two orders of magnitude lower than the icorr values (0.7389 \u00b5A/cm2) obtained for CS 1018 in the same period. Therefore, it is shown that even with low mechanical properties, less dense concrete matrix and high permeability, the durability of GC is increased by presenting excellent resistance to corrosion when exposed to 3.5 wt.% Na2SO4 for more than 364 days, associated with the excellent corrosion performance of AISI 304 SS as reinforcement in concrete exposed to aggressive media.The"} +{"text": "Teladorsagia circumcincta.Benzimidazole resistance is associated with isotype-1 \u03b2-tubulin gene F200Y, E198A and F167Y SNPs. In this study, the recently described polymorphism E198L was reported and analysed in T. circumcincta. The resistance alleles frequencies were measured for F200Y and E198A. A 371-bp fragment of the isotype-1 \u03b2-tubulin gene was analysed, including the three codons of interest, and a new pyrosequencing assay was designed for testing E198L.The benzimidazole phenotypic resistance was measured by the faecal egg count reduction test (FECRT) and the egg hatch test (EHT) using a discriminating dose (DD) in 39 sheep flocks. Around 1000 larvae collected before and after treatment were used for DNA extraction. The resistant species identified in all flocks was T. circumcincta. The amplification of a 371-bp fragment confirmed the absence of F167Y and F200Y in 6 resistant flocks. 
Regarding codon 198, all samples after treatment carried a leucine (CTA). A pyrosequencing assay analysed the allele frequencies for the first two bases at codon 198 independently, G/C and A/T. The correlation between the C and T frequencies was almost 1, and the mean value of both was calculated to measure the leucine frequency; this value ranged between 10.4\u201380.7% before treatment and 82.3\u201392.8% after treatment. High and similar correlations were reported between the genotypic variables and phenotypic resistance, negatively associated with the FECRT and positively with the EHT. According to multivariate linear regression analysis, the T frequency was the most significant variable influencing the phenotypic resistance. In the EHT, 67.1% of the phenotypic variability is associated with the T frequency, but in the FECRT only 33.4%; therefore, the EHT using a DD seems to detect the genotypic resistance more accurately than the FECRT.
The percentage of resistant flocks was 35% by FECRT or 26% by EHT; however, the F200Y and E198A SNPs were absent in T. circumcincta. The E198L polymorphism can confer BZ resistance on its own.
Infections by gastrointestinal nematodes (GIN) in grazing ruminants are highly prevalent worldwide. The importance of GIN infection is due to the economic losses associated with reduced weight gain, milk yield and wool production. Nowadays, AR to BZs has been reported in many GIN species, in many countries and in multiple host animals, Teladorsagia circumcincta, Trichostrongylus spp. and Haemonchus contortus being the most prevalent and resistant species [5\u20137]. The in vivo faecal egg count reduction test (FECRT) has been extensively used for anthelmintic efficacy testing at farm level due to its simplicity and can be used for testing all anthelmintic families; however, it is time-consuming and expensive because it involves two separate farm visits.
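Because pyrosequencing reads the first two bases of codon 198 independently (G/C and A/T), the leucine (CTA) frequency is taken as the mean of the C and T frequencies. A minimal sketch, with hypothetical pyrogram percentages:

```python
def leucine_frequency(freq_c_pct, freq_t_pct):
    """E198L (Leu, CTA) allele frequency estimated as the mean of the
    C frequency at base 1 and the T frequency at base 2 of codon 198,
    which the pyrosequencing assay reads independently."""
    for f in (freq_c_pct, freq_t_pct):
        if not 0.0 <= f <= 100.0:
            raise ValueError("frequencies are percentages in [0, 100]")
    return (freq_c_pct + freq_t_pct) / 2.0

# Hypothetical pyrogram percentages for a pre-treatment pool
print(round(leucine_frequency(78.4, 80.2), 1))  # 79.3
```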
With the aim of avoiding these drawbacks, the in vitro egg hatch test (EHT) was developed for the detection of BZ resistance; this test is based on the fact that eggs from resistant isolates embryonate and hatch at higher drug concentrations than do those from susceptible isolates. Although the EHT was developed and standardized using serial dilutions of thiabendazole (TBZ) to calculate the dose required to prevent 50% of the viable eggs from hatching (ED50) [Since the number of anthelmintics is quite limited, and in the short to medium term it is unlikely that new drugs will appear on the market, a more sustainable use of anthelmintics is required. For that, AR has to be detected as early as possible to avoid its spread. The g (ED50) , a simplg (ED50) .H. contortus, T. colubriformis, T.circumcincta, Cooperia oncophora and Ostertagia ostertagi). Less frequently, other SNPs have been associated with BZ resistance, either in combination with others or on their own; for example, the same SNP as previously described was found at codon 167 (F167Y) and a point mutation of glutamic acid (GAA) to alanine (GCA) at codon 198 (E198A). Ram\u00fcnke et al. [T. circumcincta, Trichostrongylus spp. and H. contortus. Although the three SNPs seemed to be very widely distributed across the world, in our previous studies, we were unable to detect any of these in BZ resistant T. circumcincta larvae from Spain [T. circumcincta larvae, a substitution of glutamic acid (E) by leucine (L). The association between the frequency of this polymorphism and the phenotypic resistance measured by FECRT and EHT was studied.Rapid and accurate molecular tests are not available for most drug classes and helminth species, however, a DNA-based test for detection of BZ resistance in GIN is available. In the latter, resistance to BZ is associated with a single nucleotide polymorphism (SNP) at specific codons within the gene coding for isotype-1 \u03b2-tubulin . The sube et al. used a pom Spain , 13. 
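For the serial-dilution form of the EHT, the ED50 can be estimated from the hatch percentage at each thiabendazole concentration. The linear interpolation below is a simple illustrative approach (probit or logistic regression is the more usual analysis), applied to hypothetical data:

```python
def ed50(concentrations, hatch_pct):
    """ED50 of thiabendazole (ug/ml) by linear interpolation between the
    two concentrations bracketing 50% hatching; a sketch, not the
    standardized dose-response analysis."""
    pairs = sorted(zip(concentrations, hatch_pct))
    for (c0, h0), (c1, h1) in zip(pairs, pairs[1:]):
        if h0 >= 50.0 >= h1:
            return c0 + (h0 - 50.0) * (c1 - c0) / (h0 - h1)
    raise ValueError("50% hatch is not bracketed by the data")

# Hypothetical serial-dilution data (ug/ml TBZ, % eggs hatching)
conc = [0.05, 0.1, 0.2, 0.3]
hatch = [92.0, 70.0, 30.0, 8.0]
print(round(ed50(conc, hatch), 3))  # 0.15
```

An ED50 of 0.1 ug/ml or above would be read as resistance under the criterion cited in the text.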
On The study was conducted during the years 2015 and 2016 in the northeast of Spain with the collaboration of the livestock cooperative COBADU (Cooperativa Bajo Duero). The flocks had to be without any anthelmintic treatment 3 months before their selection and under grazing conditions. The first step was the identification of those farms infected by GIN by means of a faecal egg count (FEC). For this purpose, individual faecal samples from the rectum of 20 animals were randomly taken in each flock. Faecal samples were analysed individually by a modified McMaster technique and using a saturated solution of sodium chloride (density\u2009=\u20091.2\u00a0g/ml) for egg counting. The sensitivity of this technique was 15 eggs per gram (epg). In total, 39 flocks naturally infected by GIN were included in the study.\u00ae), a BZ drug , at the recommended dose (7.5\u00a0mg/kg bodyweight). Each animal was weighed before treatment in order to adjust the individual dose. Rectal faecal samples were taken from every sheep on the day of treatment (day 0) and 10\u201314 days post-treatment. All samples were processed and analysed individually as described above of 0.1\u00a0\u03bcg/ml of TBZ as previously described by Coles et al. . A stock50\u2009\u2265\u20090.1\u00a0\u03bcg/ml is indicative of resistance [The hatching ratio (Hdd) was used as the BZ resistance indicator in each EHT, it refers to the percentage of eggs hatching at DD, corrected by hatching in control wells. Given that when the EDsistance , a valuesistance .A pool of faecal samples was collected before and after treatment for a later egg concentration. Eggs were then incubated at 23\u00a0\u00b0C overnight for collection of first stage larvae (L1). A total of approximately 1000\u00a0L1 from each flock and sampled day was stored in 70% ethanol until DNA extraction was performed. 
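From the pre- and post-treatment egg counts, the faecal egg count reduction and the WAAVP-style resistance call (reduction below 95% with the lower 95% confidence limit below 90%) can be sketched as below; the confidence limit itself would come from the usual FECRT variance formula, omitted here, and the counts are hypothetical.

```python
import statistics

def fecrt(pre_epg, post_epg):
    """Percent faecal egg count reduction from arithmetic mean
    pre- and post-treatment counts (eggs per gram)."""
    return 100.0 * (1.0 - statistics.mean(post_epg) / statistics.mean(pre_epg))

def resistance_status(reduction, lower_cl):
    """WAAVP-style call: resistant when reduction < 95% AND the lower
    95% confidence limit < 90%; one criterion alone -> suspected."""
    if reduction < 95.0 and lower_cl < 90.0:
        return "resistant"
    if reduction < 95.0 or lower_cl < 90.0:
        return "suspected resistant"
    return "susceptible"

# Hypothetical counts (epg), multiples of the McMaster sensitivity (15 epg)
pre = [300, 450, 150, 600, 300]
post = [45, 90, 30, 120, 15]
print(round(fecrt(pre, post), 1))  # 83.3
```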
DNA was extracted using the Speed Tools Tissue DNA Extraction Kit as per the manufacturer\u2019s instructions. Species identification was performed on pooled L1 extracts using the ovine nematode panel on an AusDiagnostics Multiplexed Tandem PCR (MT-PCR) machine at the Moredun Research Institute, exactly as outlined in Roeber et al. DNA extracted from the ~\u20091000\u00a0L1 samples was used for the determination of allele frequencies. The determination of allele frequencies at SNPs 198 and 200 of the gene encoding isotype-1 \u03b2-tubulin in Teladorsagia spp. was carried out following the protocol and primers previously described by Esteban-Ballesteros et al. A 371-bp fragment of the T. circumcincta isotype-1 \u03b2-tubulin gene was amplified, including the codons 167, 198 and 200. The sequence analysis was performed in pooled L1 obtained from faecal samples collected before and after treatment in 6 different flocks and in 40 individual L1 obtained from faecal samples collected before treatment in 2 resistant flocks. For this, a pair of primers was designed based on the mRNA sequence, GenBank accession number Z69258. PCR cycling conditions were 95\u00a0\u00b0C for 10\u00a0min followed by 40 cycles of 95\u00a0\u00b0C for 45\u00a0s, 60\u00a0\u00b0C for 30\u00a0s and 72\u00a0\u00b0C for 45\u00a0s, followed by 10\u00a0min at 72\u00a0\u00b0C and 4\u00a0\u00b0C to finish; Amplitools HotSplit Master Mix 2\u2009\u00d7\u2009(Biotools) was used for the amplification. Specific PCR products were run on a 1.5% agarose gel stained with Gel Red and bands were purified with the Speed Tools PCR Clean-up Kit (Biotools) for later sequencing at the Laboratory of Instrumental Techniques. In the same way, this PCR was performed with DNA from pooled L1 collected from 16 flocks after treatment. 
All these PCR products were mixed together as a single sample. This sample was cloned into a p-GEMT Easy vector and transformed into Escherichia coli JM109 competent cells. Eleven clones were sequenced afterwards using universal primers SP6 and T7. Alignment and analysis of all partial sequences were carried out with the DNASTAR software program (version 14.1). Initially, a new pair of PCR primers was designed to amplify a 138-bp fragment of the T. circumcincta isotype-1 \u03b2-tubulin gene including codon 198. The forward primer was labelled with biotin at the 5\u2032-end. PCR cycling conditions were 95\u00a0\u00b0C for 10\u00a0min followed by 40 cycles of 95\u00a0\u00b0C for 30\u00a0s, 60\u00a0\u00b0C for 30\u00a0s and 72\u00a0\u00b0C for 30\u00a0s, followed by 10\u00a0min at 72\u00a0\u00b0C and 4\u00a0\u00b0C to finish; Amplitools HotSplit Master Mix 2\u2009\u00d7\u2009was used for the amplification in a 50\u00a0\u03bcl reaction volume. PCR products were run on a 1.5% agarose gel. Prior to pyrosequencing, a 7\u00a0\u03bcl aliquot of each PCR product was tested by agarose gel electrophoresis. The specificity of the primers was confirmed initially by running the same PCR but with DNA extracted from T. colubriformis and H. contortus adults. Pyrosequencing of the resulting PCR fragments was done using the sequencing primer (5\u2032-ATT ATC GAT RCA GAA YGT T-3\u2032). The pyrosequencing assay was carried out using a PSQ\u212296 MA system according to the manufacturer\u2019s recommendations. For this, 40\u00a0\u03bcl of PCR product was added to 37\u00a0\u03bcl 2\u00d7\u2009Binding buffer (Biotage) plus 3\u00a0\u03bcl streptavidin sepharose beads in a 96-well plate and then agitated for 5\u00a0min at room temperature to allow binding of biotin-labelled DNA to the beads. The beads were processed using the sample preparation tool and reagents (Biotage) then dispensed into the assay plate with 40\u00a0\u03bcl of 0.4\u00a0\u03bcM sequencing primer per well.
The data were then analysed using the Statistical Package for the Social Sciences (SPSS). The Kolmogorov-Smirnov test was carried out to determine if data were normally distributed. The statistical relationship between the different variables was calculated using the non-parametric Spearman\u2019s rank correlation test. Multivariate linear regression analyses were performed to assess any associations between the dependent variables (FECRT and EHT) and the independent variables. A forward stepwise selection procedure was used to select the variables that were significantly associated. The majority of the flocks were classified as susceptible, with a FECRT higher than 95%. In one of the remaining flocks (3%), the FECRT was higher than 95% although the lower confidence limit was less than 90%, suggesting suspected resistance, and the rest (12/34; 35%) were declared resistant with a FECRT lower than 90%. It is noteworthy that in 2 flocks (flocks 23 and 35), the FEC was even higher after the treatment. According to the EHT, most flocks were susceptible to ABZ, and consequently 26% of them were declared resistant. Among the resistant flocks (10/39), the Hdd ranged from 54.4% to 91.6% (mean: 72%). The most prevalent species was T. circumcincta, followed by T. colubriformis, found in 35 flocks with values between 1.3\u2013100% (mean: 35.5%). Oesophagostomum spp. and C. ovina were present in 17 and 16 flocks, with values between 3.9\u201349.6% (mean: 21.9%) and 0.4\u201351.9% (mean: 16.7%), respectively. Haemonchus contortus was not found in any flock. After treatment, MT-PCR was performed in 15 flocks; in 13 flocks out of 15, all larvae were identified as T. circumcincta (100%). In the other two flocks, T. circumcincta represented 99.2 and 99.6% of all larvae, and the remaining percentages were 0.8% T. colubriformis and 0.4% Oesophagostomum spp., respectively. Figure\u00a0 shows the species identification results. One flock in which all larvae were T. circumcincta was classed as highly resistant, with a value of 7% for the FECRT and 93.4% for the Hdd.
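The flock classification used above (susceptible, suspected resistant, resistant) follows the usual FECRT criteria: a reduction below 95% together with a lower 95% confidence limit below 90% indicates resistance, and meeting only one of the two criteria indicates suspected resistance. A minimal sketch, with hypothetical helper names:

```python
# Illustrative classification of a flock from its FECRT and the lower
# 95% confidence limit of the reduction (thresholds as stated in the text).

def classify_flock(fecrt: float, lower_cl: float) -> str:
    reduced = fecrt < 95.0     # criterion 1: reduction below 95%
    low_cl = lower_cl < 90.0   # criterion 2: lower confidence limit below 90%
    if reduced and low_cl:
        return "resistant"
    if reduced or low_cl:
        return "suspected resistant"
    return "susceptible"

print(classify_flock(98.0, 93.0))  # both criteria fail -> susceptible
print(classify_flock(96.0, 85.0))  # only the confidence limit is low -> suspected
print(classify_flock(35.0, 10.0))  # both criteria met -> resistant
```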
The resistant allele at codon 198 was not found in any flock (data not shown).Due to the fact that the main resistant species was T. circumcincta \u03b2-tubulin gene was amplified, with a homology of almost 100% of isotype-1 \u03b2-tubulin gene, and including the three codons of interest. The amplification of this fragment confirmed the absence of SNPs F167Y and F200Y before and after treatment in DNA samples from pooled L1 collected from 6 resistant flocks.With the aim to confirm the previous results, a 371-bp fragment of the Regarding codon 198, in the samples collected before treatment, the amino acid present was not clear, with multiple peaks on chromatograms; the possibility of two nucleotides in the first two positions of the codon (G/C A/T A) could lead to four different amino acids Fig.\u00a0a. HoweveWith the aim to clarify the genotype before treatment, 40 individual L1 collected before treatment from two resistant flocks, were analysed after the amplification of the 371-bp fragment. As a result, 36 out of 40 L1 showed the homozygous genotype for glutamic acid (GAA/GAA) (GenBank: MT818234) , ranging from 39.0 to 59.8%; so, the frequencies were closer between G and T, or C and A. Therefore, the coding amino acid was not well defined in these four flocks before treatment although after treatment all of them showed a high frequency for leucine (84.8\u201392.8%) . Moreover, both measurements were highly correlated with the genotypic variables\u2014C frequency, T frequency and the mean of both frequencies. The higher the frequency of both nucleotides, the lower the efficacy of the anthelmintic (low FECRT and high Hdd) (P\u2009<\u20090.001). Despite the fact that neither the FECRT nor the EHT was associated with the percentage of T. 
circumcincta, the percentage of larvae carrying the leucine amino acid (measured as mean frequency of C and T) were highly correlated with both measurements; the higher the percentage of these larvae, the lower the FECRT , and the higher the Hdd . The values of each R2 mean that 33.4%, 67.1% and 67.0% of variance of each dependent variable (FECRT or EHT) is explained by the independent variables included in each model. The models showed that the phenotypic resistance was influenced by the T frequency at the second position of codon 198, negatively with the FECRT and positively with the Hdd, with a level of significance P\u2009<\u20090.01 in all models. Model 3 confirmed that the percentage of T. circumcincta carrying the leucine amino acid at codon 198 influenced slightly the Hdd measured by the EHT (P\u2009=\u20090.072) . This finding is in accordance with the study carried out by Rialchet al. [50 calculated by the EHT .The EHT has been used as an the ED50 \u201318; howethe ED50 . Consequchet al. , who alsin vivo and in vitro methods. The F200Y mutation at isotype-1 \u03b2-tubulin gene still appears to be the most important polymorphism associated with BZ resistance in nematodes infecting ruminants, however, it does not adequately explain all BZ resistant phenotypes, other mutations present in the same gene has also been reported, F167Y and E198A [T. circumcincta larvae collected before and after BZ treatment, but without success [T. circumcincta. However, in one of our studies, the F200Y polymorphism was reported in Trichostrongylus spp. larvae collected from the same flocks where the SNPs were absent in T. circumcincta [T. circumcincta larvae from Spanish sheep flocks, which were resistant to a BZ treatment. Only one flock showed a frequency of the resistant allele of 11.6% at codon 200 after treatment, contradicting the highly resistant phenotype . 
This frequency could be considered as an invalid value since in some studies it was assumed that frequencies equal to or lower than 10% are technical backgrounds [That said, the detection of specific mutations related to BZ resistance could result in a more specific and accurate measurement than success , 13. Ourkgrounds . Our hypT. circumcincta larvae, a 371-bp fragment including the three codons of interest was sequenced. We verified the susceptible genotype at codons 200 and 167; however, at codon 198, all samples analysed after treatment showed two new polymorphisms at the first two bases of the codon, C and T, leading to CTA, coding for leucine (L). Although this is the first time that CTA is reported at codon 198, the amino acid leucine had been already described in a few previous studies as TTA. The identification of a substitution of glutamic acid (GAA or GAG) with leucine (TTA) at position 198 in T. circumcincta on four out of seven farms from the UK was firstly described by Redman et al. [n\u2009=\u2009164), finding the polymorphism E198L (GAA\u2009>\u2009TTA) in T. circumcincta larvae collected on the majority of farms but with a very low frequency (6.41% in ewes and 7.80% in lambs); only a few of them showed high frequencies, the highest value being 68.76% [T. circumcincta adults collected after BZ treatment; only one adult nematode collected after treatment carried the polymorphism E198L and was homozygous for the BZ susceptible allele at position 200 (TTC) [T. circumcincta population with a frequency of 28.1% [With the aim to clarify the genotype of resistant n et al. . In thatg 68.76% . However00 (TTC) . E198L at codon 198 was present in 82% of all tested samples (n\u2009=\u2009133) and in 100% of the L3 and pooled adult samples collected after albendazole treatment in goat farms from Sudan [T. circumcincta and H. 
contortus, it has been confirmed that E198L can confer BZ resistance independently of F200Y or F167Y.There are two other GIN species infecting ruminants in which the polymorphism E198L has been described. In the study carried out by Avramenko et al. and mentom Sudan . In thesr\u2009=\u20090.929; P\u2009<\u20090.0001). Consequently, the mean frequency of C and T was calculated to determine the frequency of the polymorphism E198L. However, there are four samples collected before treatment that did not follow this pattern, the frequencies were more similar between G and T, or C and A, coding for valine (GTA) or glutamine (CAA), respectively. Unfortunately, we were not able to distinguish between these genotypes in pooled L1 using a pyrosequencing assay; however, this differentiation was possible using a deep amplicon assay designed by Avramenko et al. [P\u2009<\u20090.0001). This could open the possibility that the presence of valine (GTA) at codon 198 could be related to BZ resistance as well, although to a lesser extent, since after treatment all genotypes were CTA (L), by sequencing and pyrosequencing. However, as previously mentioned, the presence of other genotypes with low frequencies is difficult to distinguish using either technique.It is noteworthy that in five out of the six studies describing the presence of E198L in three different GIN species, the codon for leucine was TTA , although negatively associated with the FECRT and positively with the Hdd. However, according to the multivariate linear regression analysis, in the EHT 67.1% of the phenotypic variability is associated with the T frequency (model 2) but in the FECRT only 33.4% (model 1). It is important to mention that in the EHT all non-T. circumcincta eggs would have failed to hatch reducing the Hdd value and consequently its association with the T frequency. 
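The codon-198 ambiguity discussed above (G or C at the first base, A or T at the second, A at the third) can be enumerated directly. Below is a small sketch using the standard genetic code for the five codons mentioned in the text; the function and table names are ours:

```python
from itertools import product

# Minimal codon table covering only the codons discussed in the text
# (standard genetic code, one-letter amino acid codes); a full table
# would work the same way.
CODON_TABLE = {"GAA": "E", "GTA": "V", "CAA": "Q", "CTA": "L", "TTA": "L"}

def possible_codons(pos1: str, pos2: str, pos3: str) -> list[str]:
    """Enumerate codons from per-position candidate bases (e.g. chromatogram peaks)."""
    return ["".join(bases) for bases in product(pos1, pos2, pos3)]

# Ambiguity observed before treatment: G/C at base 1, A/T at base 2, A at base 3
codons = possible_codons("GC", "AT", "A")
print(sorted(codons))                       # the four candidate codons
print({c: CODON_TABLE[c] for c in codons})  # Glu (E), Val (V), Gln (Q), Leu (L)
```

After treatment the text reports essentially only CTA (leucine), which is why the mean of the C and T frequencies serves as the E198L indicator.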
The lower accuracy for detecting AR by the FECRT could be due to the FEC method used to calculate the FECRT, a McMaster technique with a sensitivity of 15 epg. Using a FEC method such as FLOTAC, with an analytical sensitivity of 1 epg, could have improved the accuracy of the FECRT. In the present study we have shown that a mutation of glutamic acid (GAA) to leucine (CTA) at codon 198 (E198L) of the isotype-1 \u03b2-tubulin gene can confer BZ resistance on its own in T. circumcincta. The leucine (L) frequency, measured as the mean of the C and T frequencies (first two bases), was measured in 39 sheep flocks, ranging between 10.4 and 80.7% before treatment and from 82.3 to 92.8% after treatment. Also, high and similar correlations were reported between the genotypic variables and phenotypic resistance, although negatively associated with the FECRT and positively with the EHT. In the EHT, 67.1% of the phenotypic variability is associated with the T frequency (second base) but in the FECRT only 33.4%; therefore, the EHT using a DD seems to detect the genotypic resistance more accurately than the FECRT. Additional file 1: Table S1. Individual faecal egg count (FEC) before and after treatment per flock.Additional file 2: Table S2. Faecal egg count (FEC) mean per flock before and after treatment and results of the faecal egg count reduction test (FECRT) (%) and egg hatch test (EHT) (%). Results before treatment: pyrosequencing (% of each base and mean C and T), proportion of T. circumcincta carrying the leucine (LEU) amino acid at codon 198 (%) and species identification (%).Additional file 3: Table S3. Results after treatment: pyrosequencing (% of each base and mean C and T), proportion of T. circumcincta carrying the leucine (LEU) amino acid at codon 198 (%) and species identification (%)."} +{"text": "Infertility is a common problem in testicular cancer.
Affected men often decide to undergo sperm banking before chemo/radiotherapy, as the cumulative effects of therapy can considerably reduce fertility. Testicular cancers impair fertilizing ability, even before diagnosis. This study aimed to examine individual characteristics and semen quality in patients with testicular cancer. This observational study analyzed 190 semen samples from patients with testicular cancer (16 to 47 yr old) referred to the sub-fertility laboratory at the St. Mary hospital for semen banking prior to cancer treatment. Several aspects of their semen analyses were examined. The cases were divided into four different categories: seminoma, teratoma, mixed germ cell tumors and others. The results showed that 23 cases were azoospermic, and in 13 of the patients who were not azoospermic, spermatozoa of \u201cnormal\u201d morphology were too few to count. Among patients that could produce spermatozoa, 59.4% had a sperm concentration below the WHO reference range. Abnormal spermatogenesis is seen in most patients with testicular cancer before treatment with radiation, chemotherapy, or surgery. The causes of poor semen quality in cancer patients are not well understood, but patients with impaired spermatogenesis should undergo a precise examination to reach the correct diagnosis and to preserve fertility before any treatment. Testicular cancer is often cited as the most common cancer of young men and boys between 15 and 35 years of age (1). It is a relatively rare tumor, comprising only 1% of all malignant neoplasms in men (2). Statistical reviews across Europe and the United States show that its incidence is increasing in Caucasian men (3-4). An unexplained rise in the occurrence of testicular cancer has been observed in the United States, with a 100% increase in the number of reported cases since 1936 (2). A similar trend has also been reported in several northern European countries.
There is extensive geographical variation and the incidence rate of testicular cancer can fluctuate between countries (5).Testicular tumors can be categorized into germ cell and non-germ cell tumors. Germ cell tumors arise from spermatogenic cells and comprise 95% of testicular neoplasms. Only 10% of cases of these tumors are malignant. Non-primary tumors such as lymphoma, leukemia, and metastases can also be presented as testicular masses (6).Testicular cancers (TC) impair fertilizing ability, even before diagnosis (7). They affect the hypothalamic-pituitary-gonadal axis and consequently disturb spermatogenesis (8). These deleterious effects are dependent on the stage and type of seminoma, resulting in poor semen quality or even azoospermia (9). In many TC patients, sperm quality is already abnormal and may even lack viable spermatozoa at the time of diagnosis (10). The treatment for this type of cancer, usually performed by surgery, chemotherapy, or radiotherapy, further affects semen quality (9) and hormonal function (11), thus highly impairing male fertility. In fact, after cancer therapy, patients may become temporarily or permanently infertile (12). For that reason, it is strongly recommended that men diagnosed with TC undergo sperm banking to increase the probability of fatherhood in the future.Semen analysis is a preliminary assessment of male infertility and can be done for several reasons such as unexplained infertility, screening sperm donors, examination of a male partner prior to reversal of female sterilization, post-vasectomy reversal, or assessment of patients banking semen before undergoing chemo/radiotherapy. In these patients, the preservation of male fertility is usually done through cryopreservation. 
This procedure stabilizes the cells at cryogenic temperatures, a useful application of cryobiology, the continuation of life at low temperatures (13). Previous studies have not established whether TC histology explains different alterations to semen quality. Some researchers report that a nonseminoma usually affects semen quality more negatively than a seminoma, but others have failed to confirm this difference (14). While much progress has been made in the treatment and diagnosis of TC, knowledge about these patients is still needed to better understand their current lifestyles and future decisions. It is therefore important to examine carefully men with impaired semen analysis. This study therefore set out to examine individual characteristics and semen quality in patients with testicular cancer, and to compare semen quality among them. In the current observational study, 190 semen samples from men with TC were collected after a minimum of 48 hr, but not longer than seven days, of sexual abstinence. Patients suffering from systemic disorders like diabetes, hypertension, etc. were excluded from the study. To diminish the variability of semen analysis results, the number of days of sexual abstinence was kept as constant as possible. Ideally, the specimen was passed in a private room close to the laboratory, or it was delivered to the laboratory within 1 hr of collection. Semen specimens were passed through masturbation and ejaculated directly into a 60 ml jar made of glass or plastic. The jar was warm, and kept at room temperature (25\u00b0C), to avoid reduction in sperm motility. All products were assessed for the absence of spermicidal properties prior to use. According to the WHO protocol, fresh specimens passed on the premises were placed in an incubator at 37\u00b0C until complete liquefaction had taken place.
A normal semen sample liquefies within 60 min at room temperature, although usually this occurs within 15 min.The semen sample was examined immediately after liquefaction or within 1 hr of ejaculation, first by simple inspection at room temperature.The viscosity, sometimes referred to as consistency, of the liquefied sample was recognized as being different from coagulation. The pH was measured at a uniform time within 1 hr of ejaculation.During the initial microscopic investigation of the sample, estimates were made of the concentration, motility, agglutination of spermatozoa and presence of cellular elements other than spermatozoa.The volume of semen and the dimensions of the coverslip were standardized so that the analyses were always carried out in a preparation of a fixed depth of about 20 \u03bcm. A fixed volume of 10 \u03bcl semen was delivered onto a clean glass slide with a positive displacement pipette and covered with a 22 At least five microscopic fields were systematically scanned until the motility of 200 sperm had been graded.The length of a normal sperm head is defined as 4-5 \u03bcm, and for the purposes of motility assessment, sperm moving progressively at more than 5 head lengths/second can be defined as grade a. The count of 200 spermatozoa was repeated on a separate 10 \u03bcl specimen from the same semen sample and the percentages in each motility grade from the two independent counts were compared.Each group of patients were compared with each of the rest using the \u201cIndependent sample test\u201d and \u201cAnalysis of variance\u201d was used to determine the significance of the differences between all the groups. The way in which the mean value of a variable was affected by the classification of the data could be determined by analysis of variance. The one-way analysis of variance is a generalization of the independent sample test (for the comparison of the means of two groups of data), and is appropriate for any number of groups. 
Rather than examining the difference between the means directly, analysis of variance looks at the variability of the data. The project was approved by the ethical committee of Manchester University (ref 03238). In the semen analyses, samples were obtained from 190 patients that were referred to the sub-fertility department of St. Mary hospital, UK, with diagnoses of testicular cancer. The different types of testicular carcinoma in the 190 patients consisted of 19 mixed germ cell tumour cases (10%), 88 seminoma cases (46.3%), 58 teratoma cases (30.5%), and 25 cases of other types of tumour (13.2%). The patients could be categorized by their pathological diagnosis into four groups: seminoma, teratoma, mixed germ cell tumor, and other types of tumor. The age range of the volunteers was 21-40 yr, and the mean age was 31.38 yr, while the age range of the patients was 16-47 yr, with a mean age of 29.75 yr and a median of 29 yr. Patients diagnosed with mixed germ cell tumor were in the bracket of 18-40 yr. Patients in the seminoma group were in the bracket of 20-47 yr, while patients in the teratoma group were in the bracket of 16-41 yr. The mean age in the teratoma group was 26.5 yr. The patients whose carcinoma was categorized as `other types of tumor' were in a group with an age range of 18-41 yr, with the mean age being 29.80 yr. The collected data for all the patients show that in 102 cases, information on the side of the tumor was missing, so the side of the testis occupied by the tumor could not be determined in these cases. Among the rest of the patients, in 42 cases the tumor was of the left testis, and in 46 of them, the tumor was of the right testis. Table I shows the sidedness of the tumors in different types of testicular cancer. The mean volume of the semen samples for the volunteers was 3.81 ml (range 2.5-8.5 ml). The mean volume for the whole group of patients was 2.6 ml.
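The between-group versus within-group variability idea behind the one-way analysis of variance described above can be made concrete. Below is a pure-Python sketch of the F statistic with hypothetical data; `scipy.stats.f_oneway` computes the same quantity:

```python
# Illustrative one-way ANOVA F statistic (pure Python).
# The data are hypothetical sperm concentrations (million/ml), not study values.

def one_way_anova_f(*groups):
    """F = between-group mean square / within-group mean square."""
    all_values = [x for g in groups for x in g]
    grand_mean = sum(all_values) / len(all_values)
    k, n = len(groups), len(all_values)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

seminoma = [10, 14, 12, 9]
teratoma = [20, 25, 22, 24]
controls = [95, 110, 120, 105]
print(round(one_way_anova_f(seminoma, teratoma, controls), 1))  # large F: means differ
```

With two groups, the F statistic reduces to the square of the independent-sample t statistic, which is the sense in which one-way ANOVA generalizes the t test.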
The lowest volume among all the patients was 0.3 ml and the highest volume was 10 ml. The lowest volume, 0.3 ml, belonged to one of the patients in the seminoma group, and the highest volume (10 ml) belonged to a patient in the group diagnosed with mixed germ cell tumor. Sixty patients with various types of testicular cancer had subnormal semen volume compared to the WHO (1999) reference range: 5 cases with mixed germ cell tumor (8%), 31 seminoma cases (52%), 15 teratoma (25%), and 9 other types of tumor (15%). The mean of the sperm population in volunteers was 111.5. The mean sperm concentration of all cases with testicular cancer was 24.7. The minimum sperm count in those patients who were producing spermatozoa was 0.09. In the group containing volunteers, the mean of \u201cmotility excellent\u201d was 35.2%, with a minimum of 25% and a maximum of 50%. All of the volunteers had 25% or more \u201cmotility excellent\u201d. Also, in the same group, the mean of \u201cmotility excellent\u201d and \u201cmotility sluggish\u201d together was 65.8%. The immotile sperm had a mean of 25.4%, with minimum and maximum percentages of 15% and 40%, respectively. The comparison of the results in cases with the reference range that is recommended by WHO showed that only 10 patients had normal motility. The mean of \u201cexcellent motility\u201d and of \u201cexcellent motility\u201d plus \u201csluggish motility\u201d together in this group was 49.9% and 62.4%, respectively. In the seminoma group, the mean of excellent motility was 33.41%. Of this group, seven patients (8%) had no spermatozoa with \u201cexcellent\u201d motility and 30.7% of all the patients had an \u201cexcellent\u201d motility of The mean of \u201cexcellent motility\u201d in the teratoma group was 34.67%, with two patients having no spermatozoa with \u201cexcellent\u201d motility.
In this group, 25.9% of the patients had an \u201cexcellent\u201d motility of In the group of patients with diagnoses of other types of testicular cancer, the mean of \u201cexcellent\u201d motility was 34.44%, and three patients in this group had no spermatozoa with \u201cexcellent\u201d motility.Table III shows the mean of motility sluggish, motility non-progressive, and motility immotile in various types of testicular cancer.In the group of volunteers, the normal morphology of sperm started from 50% and rose to 72%. Therefore, all of the cases had normal morphology of In the teratoma group, among the non-azoospermic patients, the mean of morphology was 6.64% and Assessment of male infertility is based mainly on the standard semen analysis, which includes sperm count, motility, and sperm morphology. This study was undertaken with semen analyses of 190 patients who had been referred to the sub-fertility laboratory at the St Mary hospital, for semen banking. Based on the standard procedure recommended by WHO (1999), several aspects of their semen analyses were examined. The preliminary diagnosis in all of the cases was testicular tumor. Initial statistical analysis revealed that they were a suitable group for analysis as their age mean (29.75), median (29), and mode (28) were closely similar.The cases were divided into four categories: seminoma, teratoma, mixed germ cell tumors and other types of tumor . The results of the semen analyses were studied and categorized. The variables in this study were age, volume (in ml), population of sperm (million/ml), motility of sperm , total sperm count (million/ejaculate), and morphology. These variables were used as their normal ranges have been specified by WHO.et al. 
(15), who found that the mean age of the seminoma patients differed from the mean age of patients with other types of testicular cancer and that the mean age of their seminoma patients was significantly higher than that of the groups of patients with embryonal carcinoma and mixed tumors. In contrast, Botchan et al. (16) found no mean age difference between patients with different types of testicular cancer, which is not in agreement with the finding of this present study. In choosing a control group for comparison with the patients' results, consideration had to be given to the very important role played by the variable of age, because most patients with testicular carcinoma are young. Analysis of the data was in agreement with the finding of Gandini et al. The t tests treated the volunteer group as the control group and compared all other groups against it. In this comparison, the teratoma group was the only group that differed significantly from the volunteers. In each group, the correlation of age and other variables was checked and only one significant correlation was found in the teratoma group. In this, the age was correlated with the total sperm count. In the teratoma group, as the patients got older, the total sperm counts increased. This finding suggests that the effect of the teratoma in patients of younger age could be more serious than in older patients.
Although the concentration of the sperm and the volume of the semen did not show any significant correlation with age in the teratoma group, the correlation of the age and the total sperm counts indicates that both the volume of the ejaculate and the concentration of the sperm were affected by the carcinoma and that this resulted in a significant correlation between the age and the total sperm counts (16). This significant difference suggests that the age range of the teratoma patients started earlier than that for seminoma patients and that the patients with teratoma were usually younger than the patients with seminoma. This finding was in agreement with some previous studies. In the seminoma group, no patient was under 20 yr old, and 17 patients (19%) were older than 37 yr of age. The second significant difference was between the teratoma group and the volunteers. Of the 190 patients, the mean semen volume was 2.6 ml, which according to the WHO reference range is an acceptable semen volume. This finding was in agreement with those of Panidis et al. and Dunnett et al., who found that there was no significant difference between the mean semen volume of their patients with testicular carcinoma and their control group of normal fertile men (18). Also, the findings of this present study are very similar to those of the studies of Bussen et al., who found that the mean semen volumes for their patients with testicular cancer were 2.5 ml and 2.8 ml, respectively (19). Although the findings of Gandini et al. (15) were in agreement with the finding of this study for the semen volume, they showed no significant difference between the semen volumes for different types of testicular tumor. The finding of this study could suggest that even if the differences were not significant, there were some changes among the various groups. The results of the semen analysis in this study showed that more than 61% of the patients with testicular tumor had an abnormal sperm concentration.
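Total sperm count is the product of concentration and ejaculate volume, which is why a sample with normal concentration can still have a subnormal total count when the volume is low. The sketch below flags a sample against the WHO (1999) reference values assumed here (volume >= 2.0 ml, concentration >= 20 million/ml, total count >= 40 million per ejaculate); the function names are ours:

```python
# Illustrative check of a semen sample against assumed WHO (1999) cutoffs.

def total_sperm_count(concentration_m_per_ml: float, volume_ml: float) -> float:
    """Total sperm count in millions per ejaculate."""
    return concentration_m_per_ml * volume_ml

def flag_against_who_1999(concentration: float, volume: float) -> dict:
    total = total_sperm_count(concentration, volume)
    return {
        "volume_ok": volume >= 2.0,          # ml
        "concentration_ok": concentration >= 20.0,  # million/ml
        "total_count_ok": total >= 40.0,     # million per ejaculate
    }

# Normal concentration but low ejaculate volume -> subnormal total count
print(flag_against_who_1999(concentration=22.0, volume=1.5))
```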
Also, 5% of the patients were azoospermic. The mean of sperm concentration in all of the patients was 23.4 et al., who concluded that there was a significant difference between the sperm count in patients with testicular carcinoma and healthy volunteers (21).Investigating each individual group of patients, based on their type of carcinoma, also showed that the mean of sperm concentration in those patients with abnormal sperm concentration, in each group, was significantly different from the value recommended by the WHO. This finding is consistent with the results of Petersen Although there were different effects among the various groups, the results showed that those patients diagnosed with \u201cother types of tumor\u201d such as embryonal carcinoma and yolk sac tumor had the most severe effects on their sperm concentration. Nevertheless, there was a clear finding that all types of testicular carcinoma studied could reduce the sperm concentration, as et al. (15), who found a significant difference between the total sperm counts in a seminoma group and a group of embryonal carcinoma.The results of the present study showed that All of the groups in the present study showed a significant difference between their total sperm count and that of the group of volunteers as a control. There was a significant difference between the mean total sperm counts of those patients with normal sperm counts.The total sperm count is related to the sperm concentration and the volume of ejaculated semen. The result of this study showed that although the highest percentage of the patients with abnormal sperm concentration belonged to the patients diagnosed with \u201cother types of tumor, while the seminoma group was ranked third, seminoma had the highest percentage for abnormal total sperm count (at 56%).et al. 
(15), who found a significant difference between a seminoma group and an embryonal carcinoma group, but it is consistent with their further finding that there was no significant difference between the seminoma group and a group with mixed germ cell tumor and was also in agreement with their observation of no significant difference between their groups of mixed germ cell tumor and embryonal carcinoma. The mean of \u201cexcellent\u201d sperm motility and \u201cexcellent plus sluggish\u201d sperm motility were examined in all of the patients. Results showed that there were no significant differences between the groups. This does not agree with the finding of Gandini et al. (16), who found that there was a significant difference between the sperm motility of patients with testicular carcinoma and a group of healthy volunteers. Also, the finding of the present study was in agreement with the results of the study of Panidis et al. (18), who found that 50% of the patients had abnormal sperm motility and their sperm motility was different from a group of fertile men as the control. The results showed that the mean \u201cexcellent\u201d and \u201cexcellent plus sluggish\u201d sperm motility, either for all of the patients together or in the individual groups, had a significant difference from the mean \u201cexcellent\u201d and \u201cexcellent plus sluggish\u201d sperm motility of the volunteers and from the values recommended by the WHO. This is in agreement with the results of Gandini et al. (15) and Botchan et al. (16), which suggested that seminoma patients had better semen quality than patients with other testicular carcinoma. Comparison between the four different groups in the present study showed that the seminoma group had the lowest mean of \u201cexcellent\u201d sperm motility and the patients with mixed germ cell tumor had the highest mean. All of these data suggest that patients with seminoma experienced the greatest effect of the carcinoma on their sperm motility.
The findings of the present study differ from those of other studies by Gandini Among 180 zoospermic patients, only three had nil \u201cexcellent\u201d sperm motility, and all three belonged to the seminoma group. This finding supports the suggestion that patients with seminoma had the worst value for \u201cexcellent\u201d sperm motility and that this may be caused by the effects of the carcinoma. Sperm morphology is still one of the most controversial semen parameters in terms of its role in evaluating potential male fertility. As there have been very few previous studies that have explained the influence of testicular carcinoma on the morphology of sperm, the discussion that follows has to be based largely on the findings of the present study. et al. (18). Of the 190 patients in the present study, who had various testicular carcinomas, only 15 had normal sperm morphology. Even these 15 patients with normal morphology had a mean morphology of 18.07%, which was significantly different from the lowest percentage of morphology in the group of volunteers. This finding is in agreement with the study of Botchan et al. (16), who found that there was a significant difference between the normal sperm morphology of patients with testicular cancer and that of healthy volunteers. The finding of Panidis et al. (18) was also in agreement with the finding of this study, as they found that 83.4% of their patients had abnormal sperm morphology and that there was a difference in means between their patients with testicular cancer and normal fertile men. The results of the present study show that the morphology of sperm is the most sensitive semen parameter affected by testicular carcinoma, and this is in agreement with the similar finding of Panidis et al. 
(15), who showed that of the three groups of patients with testicular carcinoma, those patients with embryonal carcinoma had the lowest value of normal sperm morphology, being lower than that of the seminoma patients and the patients with mixed germ cell tumors. In this latter group, three patients (12%) had too few sperm with normal morphology to count. The smallest effect of testicular carcinoma on sperm morphology was seen in the group of patients with mixed germ cell tumors. The most severe effects of testicular carcinoma on the morphology of sperm were seen, in the present study, in the group of patients diagnosed with \u201cother types of tumor\u201d, as all of the patients (100%) had abnormal sperm morphology, with a mean morphology of 5.23%. This is in agreement with the finding of Gandini Impaired spermatogenesis is seen in most patients with testicular cancer before treatment with radiation, chemotherapy, or surgery. The causes of poor semen quality in cancer patients are not well recognized, but patients with impaired spermatogenesis should undergo a precise examination to establish the correct diagnosis and to preserve fertility before any treatment. Many mechanisms contribute to the impairment of semen quality, including the direct effect of the tumor on the testes and indirect effects such as hormones and secretions from the tumor, which should be considered on a case-by-case basis in male infertility. Thus, most carcinomas seriously impair sperm morphology. However, it is important to know the duration of the problem and the stage and grade of the tumor, which might affect the results of the evaluation. Future studies should address these limitations of the present study. The authors have no conflict of interest in this study."} +{"text": "H. 
pylori MAPK triggering that derails the normal gastric epithelium toward GC, and to encourage research on MAPK signal transduction, which seems to sustain GC development. Gastric cancer (GC) is turning out to be one of the most important welfare issues for both Asian and European countries. Indeed, while the vast majority of the disease burden is located in China and in Pacific and East Asia, GC in European countries still accounts for about 100,000 deaths per year. With this review article, we aim to focus attention on one of the most complex cellular pathways involved in GC proliferation, invasion, migration, and metastasis: the MAP kinases. This large kinase family has been constantly studied since its discovery more than 30 years ago, due to the important role that it plays in the regulation of physiological and pathological processes. Interactions with other cellular proteins as well as miRNAs and lncRNAs may modulate its expression, influencing cellular biological features. Here, we summarize the most important and recent studies involving MAPK in GC. At the same time, we need to underline that, differently from cancers arising from other tissues, where MAPK pathways seem to be a gold target for anticancer therapies, GC seems to be unique in every aspect. Our aim is to review the current knowledge of MAPK pathway alterations leading to GC, including Every cell in our body is capable of responding to external stimuli such as growth factors, inflammation, cytokines, and microenvironmental changes, and of interacting with other cells, owing to a complex molecular mechanism based on the action of the mitogen-activated protein kinases (MAPKs). MAPKs are a large family of serine/threonine kinases that, upon the reception of a stimulus, trigger a cascade of phosphorylation leading to a precise and specific cellular response. 
Commonly, the canonical pathway starts with the MAPKKKs, among which the Raf isoforms are the best known and the most described in the scientific literature. Intestinal GC is associated with Helicobacter pylori infection and metastasizes mainly to the liver through the blood flow. Diffuse GC most commonly occurs in young patients, mainly females, presents a hereditary component, and metastasizes through peritoneal surfaces. Moreover, the diagnosis of diffuse GC is a pejorative prognostic factor, since it behaves more aggressively than intestinal GC. The extraordinary molecular heterogeneity of GC has been highlighted by studies on somatic copy number alterations, gene mutations, and epigenetic and transcriptional changes. Beside microRNAs, another kind of non-coding RNA is produced in the bulk of the transcriptional output: the long non-coding RNAs (lncRNAs), which are usually RNAs longer than 200 bases. 3474 entries corresponding to lncRNAs associated with GC can be retrieved. Very often, lncRNAs involved in GC act as sponges for miRNAs, and some lncRNAs associated with GC are also responsible for GC overgrowth and survival mediated by the MAPK/ERK pathway. The very first report about the role of a lncRNA in GC concerned antisense H19; like other GC-associated lncRNAs, H19 and SNHG6 can act as sponges for miRNAs. MAPKs participate in complex biological systems accounting for an entire world of cellular kinases that, being activated by many stimuli, in turn influence many important pathways. Their understanding is of paramount importance, as in numerous cancers the kinase families are commonly dysregulated and, as reported above, they might be interesting targets for future therapies. GC in particular, showing several molecular discrepancies among different ethnicities, may be a very interesting model of study to further understand MAPK roles in cancer. 
Such discrepancies will need to be investigated further, not only to select more specific therapeutic protocols, but also to understand their molecular bases and effects more thoroughly. We do believe that, even if many findings are slowly uncovering the real impact of the MAPKs in GC progression and metastasization, much more effort needs to be spent in order to detect and evaluate all MAPK regulators and effectors. We hope that this review article, as a little compendium of the most recently studied topics about gastric cancer and MAPKs, will be a stimulus for new research. Improving the worldwide scientific knowledge on the involvement of the dysregulated kinase pathways in GC will also encourage new studies on therapeutic drugs which might improve the survival of GC patients, exploiting such extremely important cellular kinases."} +{"text": "Drosophila is a powerful animal model for large-scale studies of drug effects based on the precise quantification of behavior. However, a user-friendly system for high-throughput simultaneous tracking and analysis of drug-treated individual adult flies is still lacking. It is critical to quickly set up a working environment, including both the hardware and software, at a reasonable cost. Thus, we have developed EasyFlyTracker, an open-source Python package that can track a single fruit fly in each arena and analyze Drosophila locomotor and sleep activity based on video recording, to facilitate revealing psychiatric drug effects. The current version does not support tracking multiple fruit flies in one arena. Compared with existing software, EasyFlyTracker has the advantages of low cost, easy setup and scaling, rich statistics of movement trajectories, and compatibility with different video recording systems. Also, it accepts multiple video formats such as the common MP4 and AVI formats. 
EasyFlyTracker provides a cross-platform and user-friendly interface combining command-line and graphic configurations, which allows users to intuitively understand the process of tracking and downstream analyses, and it automatically generates multiple files, especially plots. Users can install EasyFlyTracker, go through tutorials, and give feedback on http://easyflytracker.cibr.ac.cn. Moreover, we tested EasyFlyTracker in a study of Drosophila melanogaster on the hyperactivity-like behavior effects of two psychiatric drugs, methylphenidate and atomoxetine, which are commonly used to treat attention-deficit/hyperactivity disorder (ADHD) in humans. This software has the potential to accelerate basic research on drug effects with fruit flies. The mechanism of psychiatric drugs (stimulant and non-stimulant) is still unclear. Precision medication of psychiatric disorders faces challenges in pharmacogenetics and pharmacodynamics research due to difficulties in recruiting human subjects, because of the possibility of substance abuse, and relatively small sample sizes. Drosophila is a powerful genetic animal model for studies of complex phenotypes such as circadian rhythms, sleep, movement, and diseases, including disorders such as autism spectrum disorders (ASDs) and attention-deficit/hyperactivity disorder, among others. One widely used monitoring system records the frequency of fruit flies crossing infrared beams in a tube to study locomotor, sleep, and circadian rhythms; the high cost of the single-tube device limits its usage for high-throughput studies. Other well-known commercial tracking software, such as EthoVision XT from Noldus, is also expensive. 
Sleep analysis tools such as pySolo and ShinyR-DAM, as well as tracking software such as Ctrax, are also available. We used EasyFlyTracker to study Drosophila treated with two commonly used psychiatric drugs, MPH (a stimulant) and atomoxetine (ATX) (a non-stimulant), prescribed for ADHD symptoms in humans, and finally identified hyperactivity-like behavior. Thus, we developed EasyFlyTracker, which uses affordable and easy-to-build equipment to track and analyze the sleep/locomotor activities of individual adult fruit flies for the study of drug effects, especially psychiatric drugs. To avoid interference from social behaviors, each arena contains only a single fruit fly. EasyFlyTracker can track the activities of up to 72 individuals simultaneously with the current settings and can theoretically scale up to any number of individuals. After evaluating the tracking accuracy of EasyFlyTracker, we used it to track and quantify locomotor activities (see http://easyflytracker.cibr.ac.cn/#/document). Next, for the convenience of users, the hardware setup is introduced first. Our tracking system consists of two parts, software and hardware setup, of which software (named EasyFlyTracker) development is our focus. All the hardware can be purchased directly online and installed easily, and we provide product lists on our website. We built customized recording environments, which are easily rebuilt and cost-effective compared with commercial equipment. A total of 800 frames (or the available number of frames when it is smaller than 800) are randomly selected from the video, and for each pixel the value with the highest number of occurrences in the time dimension is kept. The background image is obtained after traversing all the pixel points. It should be noted that a random factor is used here, which may lead to inconsistency in the results of multiple runs on the same video. 
However, this deviation is extremely small and falls within the normal range. A pixel is determined to be a foreground pixel (fruit fly) if it satisfies the following conditions: its own pixel value is less than 120 and its difference from the background pixel is greater than 70. Connected regions were extracted with the connectedComponentsWithStats function of the OpenCV (version 4.5.2) package in Python 3.6. The coordinate values were calculated based on the barycenter method of the region. The minimum-area bounding rectangle of the segmented fruit fly region was calculated to determine the tail-to-head orientation, and we further combined the velocity direction to determine the exact location of head and tail. Due to the low resolution, we did not consider the difference between head and abdomen velocity directions as previously reported. Statistics of Drosophila movements are also defined to describe the locomotor activity of the fruit fly, and all these statistics are provided in different formats for users. Based on the trajectory matrix of each fly (center position and orientation in each frame), EasyFlyTracker quantifies behavioral patterns of locomotor and sleep activity. Average distances every 10 min per fly are used to define locomotor activity. The locomotor activity plot shows the average distances of the different Drosophila treatment groups during different time intervals (default every 10 min). The sleep status plot displays the statistics of sleeping flies (default every 30 min). The heatmap plots show the relative frequency of fly passage at each position, and both per-fly and grouped heatmaps are provided. Sleep intervals can be removed from the heatmap plots with the \u201cheatmap_remove_sleep\u201d parameter defined in the \u201cconfig.yaml\u201d file. The angle change plots show the statistics of average angle change per second per fruit fly, and the regional preference plot shows the regional bias of Drosophila movement. About the details of the visualization parameters, please refer to our website. The software provides different outputs. 
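The background-estimation and fly-segmentation steps described above can be sketched as follows. This is a minimal numpy-only illustration, not EasyFlyTracker's actual code: the package itself uses OpenCV's connectedComponentsWithStats for region extraction, and the function names here are ours.

```python
import numpy as np

def estimate_background(frames, n_samples=800, seed=0):
    """Per-pixel temporal mode over up to 800 randomly sampled frames,
    as described in the text."""
    frames = np.asarray(frames)
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(frames), size=min(n_samples, len(frames)), replace=False)
    sample = frames[idx]
    # Mode along the time axis: per-pixel bincount over 8-bit values.
    mode = np.apply_along_axis(
        lambda col: np.bincount(col, minlength=256).argmax(), 0, sample)
    return mode.astype(np.uint8)

def foreground_mask(frame, background, abs_thresh=120, diff_thresh=70):
    """Foreground (fly) pixels: darker than 120 and more than 70 below
    the background value (the two conditions quoted in the text)."""
    diff = background.astype(np.int16) - frame.astype(np.int16)
    return (frame < abs_thresh) & (diff > diff_thresh)

def barycenter(mask):
    """Centre of the segmented region (barycenter method), as (row, col)."""
    ys, xs = np.nonzero(mask)
    return float(ys.mean()), float(xs.mean())
```

On a synthetic clip with a dark fly moving over a bright arena, the mode-based background removes the fly entirely, so thresholding each frame against it isolates the animal and the barycenter gives its trajectory point.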
The first outputs are the plots of different behaviors, including the locomotor activity plot, sleep status plot, heatmap plot, angle change plot, and regional preference plot. To ensure usability across different platforms and users, we evaluated the tracking accuracy rate of location and orientation of EasyFlyTracker. Images of frames were randomly generated from three different videos taken at random. For the location evaluation, we used 100 random frames for each video. Each frame is a picture recording the location of each fruit fly at the corresponding time point. Then, three different people manually judged the tracking accuracy for each fruit fly. We checked the consistency between the tracked location and the location of the fly in each randomly generated image, and the numbers of mistracked flies were recorded. Tracking errors were defined as those without a recognizable location or where the cross was obviously not in the center of the fly. Finally, the average location accuracy over the three videos evaluated by three people was calculated as the tracking accuracy rate. For the orientation evaluation, we used 600 random frames for each video and checked one fruit fly per frame. In total, three people manually checked the same 1,800 fruit flies and recorded three types of evaluation result: correct, wrong, and indistinguishable. After removing the indistinguishable cases, the average accuracy of orientation was then calculated. Statistical analysis was performed using Python (version 3.8.3). The Kruskal\u2013Wallis H-test was used for comparisons of groups, and a value of p < 0.05 was considered to indicate statistical significance. The website was built mainly using VUE version 2.6 and Spring Boot version 2.4.0. 
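The Kruskal–Wallis H statistic used for the group comparisons can be computed directly; in practice scipy.stats.kruskal wraps this and adds the chi-square p-value. The sketch below is a numpy-only illustration with tie correction, not the authors' code.

```python
import numpy as np

def kruskal_h(*groups):
    """Kruskal-Wallis H statistic with tie correction. For a p-value,
    compare H to a chi-square distribution with (k - 1) degrees of
    freedom, which is what scipy.stats.kruskal reports."""
    data = np.concatenate([np.asarray(g, dtype=float) for g in groups])
    n = data.size
    # Average ranks: tied values share the mean of their rank positions.
    order = np.argsort(data, kind="mergesort")
    sorted_vals = data[order]
    ranks = np.empty(n)
    i = 0
    while i < n:
        j = i
        while j + 1 < n and sorted_vals[j + 1] == sorted_vals[i]:
            j += 1
        ranks[order[i:j + 1]] = (i + j) / 2.0 + 1.0
        i = j + 1
    # H = 12 / (n (n + 1)) * sum(R_i^2 / n_i) - 3 (n + 1)
    h, start = 0.0, 0
    for g in groups:
        m = len(g)
        h += ranks[start:start + m].sum() ** 2 / m
        start += m
    h = 12.0 / (n * (n + 1)) * h - 3.0 * (n + 1)
    # Tie correction divides H by 1 - sum(t^3 - t) / (n^3 - n).
    _, counts = np.unique(data, return_counts=True)
    correction = 1.0 - (counts ** 3 - counts).sum() / (n ** 3 - n)
    return h / correction if correction > 0 else h
```

For two well-separated groups such as [1, 2, 3] and [4, 5, 6], H evaluates to 27/7 ≈ 3.86, matching the rank-sum formula by hand.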
The average tracking accuracy rate of location and orientation are 99.89 and 87.75% separately, which were manually evaluated random frames of different videos by three people trahttp://easyflytracker.cibr.ac.cn modify config.yaml file according to the personal video path of user or example video provided by us, group information and time duration, and so on; (4) track the position of Drosophila at each frame by running the command line: easyFlyTracker config.yaml; and (5) run other command lines to analyze and statistically track information: easyFlyTracker_analysis config.yaml. More detailed tutorials are available from our website. Technical comments and suggestions can also directly add to GitHub.3We provide the special website Drosophila treated with wild-type (WT) w1118 control, MPH (a stimulant), and ATX (a non-stimulant) . MPH and ATX are two commonly used drugs to treat ADHD symptoms of inattention, hyperactivity, and impulsivity in humans (Drosophila breeding and modified capillary feeder (CAFE) assay (H-test: p = 1.93e-03) or ATX-exposed individuals (Drosophila locomotor activity after drug treatment.We applied EasyFlyTracker to 3-h videos recorded of n humans . DrosophE) assay for drugE) assay can be fE) assay . Based oE) assay . We obse.48e-06) , which i.48e-06) . Meanwhi.48e-06) , but no .48e-06) showed t.48e-06) . They cl.48e-06) and move.48e-06) to help .48e-06) similar .48e-06) . It makeDrosophila-like animals or even other animal models such as worm and mouse, as we provide detailed tutorials and open-source code on the website.4 If users wish to extend to other animal models, we still recommend testing the accuracy of tracking first. In addition, this study has some limitations. We did not conduct a real-time tracking function of the software because during our development process, it was considered more important to prove the offline accuracy rather than real-time tracking and analysis. 
Also, in order to maintain open development for better expansion by others, we provided all source code rather than developing it as a fixed-format program. Tracking of group behaviors was not considered in the current version, since we have not yet found a low-cost solution. Finally, our software is designed for adult fruit flies, so we did not test its applicability to larval fruit flies. In the future, we will optimize and upgrade the software, taking into account the above elements and incorporating user comments. In summary, we developed a Python package, called EasyFlyTracker, which is simple, stable, and reliable for analyzing the locomotor activity of fruit flies, and the equipment suitable for the software is easy to rebuild. We hope that this system can achieve large-scale screening of drug responses and even target genes in the future, thereby providing clues for psychiatric research, and it is expected to provide precision medicine research and new drug development models for drug treatment in Drosophila as well as other animals. As a bonus, EasyFlyTracker can be easily transferred to other animal models (see http://easyflytracker.cibr.ac.cn). The datasets generated for this study can be found in the article/. LiZ and SQ conceived the project and coordinated the collaboration. SQ designed the project, conducted drug and behavior experiments, designed the website, and drafted all the manuscripts. LiZ supervised the project plan. QZ wrote the program. HZ built the website. YG and YW fed the fruit flies. YM purchased and set up the recording equipment. SQ and QZ mainly tested the functions of the program. YG and ZW were involved in testing the accuracy and the installation. XS and LeZ made the industrial drawings of the activity chambers. QY provided the drugs. LK gave detailed suggestions on the software. SQ, QZ, LK, and LiZ revised the manuscript together. 
All authors contributed to the article and approved the submitted version of the manuscript. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher."} +{"text": "Correction to: Strahlenther Onkol 2021, 10.1007/s00066-021-01865-3. The original version of this article unfortunately contained a\u00a0mistake. There is a\u00a0mistake in the legend of Fig.\u00a03: the labelling has been reversed. Instead of: Fig.\u00a03 Dose\u2013volume histograms for the thyroid gland in unilaterally (a\u00a0red) and bilaterally (b\u00a0blue) irradiated patients. It should be: Fig.\u00a03 Dose\u2013volume histograms for the thyroid gland in unilaterally (a\u00a0BLUE) and bilaterally (b\u00a0RED) irradiated patients. The original article has been corrected."} +{"text": "Background and Objectives: To identify the predictors of clinical outcomes in women with pelvic organ prolapse (POP) who underwent transvaginal reconstruction surgery, especially with transobturator mesh fixation or sacrospinous mesh fixation. Materials and Methods: All women with POP who underwent transvaginal reconstruction surgery, especially with transobturator mesh fixation or sacrospinous mesh fixation, were reviewed. Results: Between January 2011 and May 2019, a total of 206 consecutive women were reviewed, including 68 women receiving POP reconstruction with transobturator mesh fixation and 138 women who underwent POP reconstruction with sacrospinous mesh fixation. 
The least experienced surgeon (hazard ratio = 804.6) and an advanced stage of cystocele (hazard ratio = 8.80) were the predictors of POP recurrence, especially in women with stage 4 cystocele. Young age (hazard ratio = 0.94) was a predictor of mesh extrusion, especially in women aged \u226467 years. Follow-up interval was also an independent predictor of mesh extrusion. A high maximum flow rate was the sole predictor of postoperative stress urinary incontinence, especially in women with Qmax \u226519.2 mL/s. Preoperative overactive bladder syndrome (hazard ratio = 3.22) was a predictor of postoperative overactive bladder syndrome. In addition, the overactive bladder syndrome rate improved after surgery in the sacrospinous group (p = 0.0001). Voiding dysfunction rates improved after surgery in both the sacrospinous and transobturator groups. Conclusions: Predictors of clinical outcome in women who underwent transvaginal POP mesh reconstruction are identified. The findings can serve as a guide for preoperative consultation for similar procedures. Pelvic organ prolapse (POP) includes prolapse of the anterior, apical, and posterior compartments. The anterior vaginal wall is the vaginal site most commonly affected by prolapse. Rather than being an isolated defect, anterior vaginal wall prolapse is highly associated with apical prolapse. To our knowledge, only one study has compared POP surgery between transobturator and sacrospinous mesh fixation, and predictors of clinical outcome were not analyzed in that study. Medical records of all consecutive women with Pelvic Organ Prolapse Quantification stage II or higher anterior/apical compartment prolapse, who were admitted to the department of Obstetrics and Gynecology of a tertiary referral center for POP reconstruction, were reviewed. 
Those patients who did not undergo transobturator or sacrospinous mesh fixation were excluded from this study. In general, uncontrolled diabetes is a contraindication of vaginal mesh surgery in our hospital. The research ethics review committee of this hospital approved this study. Transobturator mesh fixation was available between January 2011 and October 2016; however, sacrospinous mesh fixation was available between June 2015 and May 2019 in the hospital. That is, patients received only transobturator mesh fixation between January 2011 and June 2015, and only sacrospinous mesh fixation between October 2016 and May 2019. Between June 2015 and October 2016, the choice of sacrospinous or transobturator mesh fixation was made at each surgeon\u2019s discretion. After hydrodissection, a vertical midline incision was made on the anterior vaginal wall. The vaginal epithelium and the full-thickness muscularis layer were dissected from the bladder wall. The vesicovaginal space was opened bilaterally until the plane near the ischial spine. Frequently, the anterior wall prolapse was plicated with absorbable sutures to reduce the area of the cystocele. Four cutaneous incisions were made; 2 superior incisions were made at the level of the clitoris at the upper medial edge of the obturator foramen and 2 inferior incisions were made 3 cm inferior and 2 cm lateral to the superior incisions. Superior trocars were inserted through the incision wound, penetrating the subcutaneous tissue, passing through the obturator membrane, and emerging from the vaginal incision wound with finger guidance. With a similar method, the inferior mesh arms were attached to the pelvic side wall at the level of the arcus tendineus fasciae pelvis near the ischial spine. The central part of the mesh was placed under the bladder, laid flat on the anterior vaginal wall, and fixed loosely. The vaginal incision wound was closed with two layers of delayed absorbable sutures. 
After hydrodissection, a longitudinal midline vaginal incision was made, with blunt dissection of the vaginal mucosa from its underlying fascia until the sacrospinous ligament was identified. With the aid of the Capio suture-capturing device, the mesh arms were introduced and fixed at the bilateral sacrospinous ligaments. Instead of direct visualization, the location of mesh fixation was identified by palpation. The central part of the mesh was sutured to the bladder wall and the paracervical ring or vaginal vault. After adjusting the tension of the mesh, the incision wound was closed with two layers of delayed absorbable sutures. Medical records, including obstetric and gynecologic history, body mass index, systemic disease, previous urogynecologic surgery history, coexistent overactive bladder syndrome (OAB), 20-min pad test, and urodynamic studies, were reviewed. In general, patients were requested to visit the outpatient clinic 7 days, 14 days, 1 month, and 3 months after surgery, and then 6-monthly thereafter. Stress urinary incontinence (SUI) was defined as the complaint of involuntary loss of urine on effort, physical exertion, sneezing, or coughing. Multichannel urodynamic equipment with computer analysis and Urovision was used for women with coexistent lower urinary tract symptoms or to exclude occult urodynamic stress incontinence. All terminology conformed to the standards recommended by the International Continence Society and Urodynamic Society. A p value of less than 0.05 was considered statistically significant, and variables with p < 0.10 in the univariate analysis were entered into the multivariable analysis. The receiver operating characteristic curve (ROC) analysis was performed to identify the optimal cut-off value for differentiation. The Stata software program was used for statistical analyses. Chi-square test, Fisher\u2019s exact test, Wilcoxon rank-sum test, or McNemar\u2019s test were employed for statistical analysis. The survival curve was estimated using the Kaplan\u2013Meier method. 
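An "optimal cut-off value" from an ROC curve is commonly chosen by maximizing Youden's J (sensitivity + specificity − 1). The paper does not state its exact criterion, so the sketch below is one plausible reading, assuming higher predictor values indicate the outcome (for a predictor like young age the direction would be flipped); the function name is ours.

```python
import numpy as np

def youden_cutoff(values, outcomes):
    """Scan all observed values as candidate cut-offs and return the one
    maximizing Youden's J = sensitivity + specificity - 1.
    `outcomes` is a boolean array (True = event, e.g. postoperative SUI)."""
    values = np.asarray(values, dtype=float)
    outcomes = np.asarray(outcomes, dtype=bool)
    best_j, best_cut = -np.inf, None
    for cut in np.unique(values):
        pred = values >= cut                       # test positive
        sens = (pred & outcomes).sum() / outcomes.sum()
        spec = (~pred & ~outcomes).sum() / (~outcomes).sum()
        if sens + spec - 1.0 > best_j:
            best_j, best_cut = sens + spec - 1.0, cut
    return best_cut, best_j
```

On toy data where events cluster at high values (e.g. Qmax), the returned cut-off cleanly separates the groups with J = 1.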
Multivariable Cox proportional hazards model was performed by using all variables with p < 0.10 in the univariate analysis. A total of 161 (78.2%) patients underwent preoperative urodynamic studies. Most patients were menopausal. Except for \u2265stage II uterine prolapse and rectocele rates, detrusor pressure at maximum flow rate, vaginal total hysterectomy, posterior colporrhaphy, and follow-up interval, there were no between-group differences in the other baseline characteristics. In addition, the presence of apical prolapse was a negative predictor of the use of transobturator mesh fixation (odds ratio = 0.26, 95% CI = 0.09 to 0.76, p = 0.01). That is, those patients without concomitant apical prolapse tended to undergo transobturator mesh fixation. Five surgeons were involved in this study, and between-group differences in outcomes were not significant (p = 0.42, 0.97, 0.24 and 0.75, respectively). Multivariable Cox proportional hazards model revealed that the stage of cystocele (hazard ratio = 4.56) and surgeon E (hazard ratio = 804.6) were the independent predictors of POP recurrence, especially stage 4 cystocele. Follow-up interval (p = 0.02) was also an independent predictor of mesh extrusion in addition to young age; multivariable analysis revealed that age (hazard ratio = 0.94) was the only predictor of mesh extrusion, especially age \u226467 years. Multivariable analysis also revealed that Qmax (hazard ratio = 1.03) was the sole predictor of postoperative SUI, especially Qmax \u226519.2 mL/s, and that preoperative OAB (hazard ratio = 3.22) was the sole predictor of postoperative OAB. Univariate logistic regression analysis did not reveal any predictors of postoperative VD. The OAB rate improved after surgery in the sacrospinous group (p = 0.0001). In addition, VD improved after surgery in both groups, and the type of mesh fixation was not a predictor of POP recurrence, mesh extrusion, or postoperative SUI, OAB, and VD. 
Cystocele stage was a predictor of POP recurrence, and in our study the least experienced surgeon was also a risk factor for POP recurrence. Young age was a predictor of mesh extrusion, and the follow-up interval (p = 0.02) was also an independent predictor of mesh extrusion; the mesh extrusion rate might increase with time. Qmax was a predictor of postoperative SUI, and preoperative OAB was an independent predictor of postoperative OAB. Limitations of this study included its retrospective nature and limited sample size. In addition, different between-group follow-up time intervals and the different surgical experience of the surgeons may bias the results. Besides, an average of two cases per month in this hospital might not be enough for surgeons to achieve surgical proficiency in POP surgery, and this can result in bias. In general, patients with anterior compartment prolapse but without apical prolapse might be suitable to receive transobturator mesh fixation. We also found that the presence of apical prolapse was a negative predictor of the use of transobturator mesh fixation (odds ratio = 0.26). Although the Uphold system and the Perigee system are not currently available, some similar commercial kits, self-tailored meshes, and autologous fasciae remain in use for transvaginal POP reconstruction. Predictors of clinical outcome in women who underwent transvaginal POP reconstruction are identified. The above results can serve as a guide for preoperative consultation."} +{"text": "Simulating complex biological and physiological systems and predicting their behaviours under different conditions remains challenging. Breaking systems into smaller and more manageable modules can address this challenge, assisting both model development and simulation. 
Nevertheless, existing computational models in biology and physiology are often not modular and therefore difficult to assemble into larger models. Even when this is possible, the resulting model may not be useful due to inconsistencies either with the laws of physics or the physiological behaviour of the system. Here, we propose a general methodology for composing models, combining the energy-based bond graph approach with semantics-based annotations. This approach improves model composition and ensures that a composite model is physically plausible. As an example, we demonstrate this approach to automated model composition using a model of human arterial circulation. The major benefit is that modellers can spend more time on understanding the behaviour of complex biological and physiological systems and less time wrangling with model composition. Modules containing meaningful interfaces for connecting to other modules are readily combined to produce a whole-system model. For the combined model to be consistent, modules must be described using the same modelling scheme. One way to achieve this is to use energy-based models that are consistent with the conservation laws of physics. Here, we present an approach that achieves this using bond graphs, which allows modules to be combined faster and more efficiently. First, physically plausible modules are generated using a small number of template modules. Then a meaningful interface is added to each module to automate connection. This approach is illustrated by applying this method to an existing model of the circulatory system and verifying the results against the reference model. Biological and physiological systems usually involve multiple underlying processes, mechanisms, structures, and phenomena, referred to here as sub-systems. Modelling the whole system every time from scratch requires a huge amount of effort.
An alternative is to model each sub-system in a modular fashion. Mathematical models have long been used to study biological systems and predict their behaviours. Mathematical modelling in the context of biology was first intended to simplify the analysis of biological and physiological processes and systems. Such models are generally applicable in the context in which they were developed, which determines how complicated a model should be. As the Physiome (www.physiomeproject.org) and Virtual Physiological Human (VPH) (www.vph-institute.org) projects have demonstrated, initial steps have been taken to construct more realistic models able to describe almost every system in the body (from cells to organs). [1] A letter containing a detailed list of your responses to all review comments, and a description of the changes you have made in the manuscript. Please note, while forming your response, if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out. [2] Two versions of the revised manuscript: one with either highlights or tracked changes denoting where the text has been changed; the other a clean version (uploaded as the manuscript file). Important additional instructions are given below your reviewer comments. Thank you again for your submission to our journal. We hope that our editorial process has been constructive so far, and we welcome your feedback at any time. Please don't hesitate to contact us if you have any questions or comments. Sincerely, Daniel A Beard, Deputy Editor, PLOS Computational Biology***********************ploscompbiol@plos.org immediately:A link appears below if there are any accompanying review attachments.
If you believe any reviews to be missing, please contact [LINK]Reviewer's Responses to QuestionsComments to the Authors:Please note here if the review is uploaded as an attachment.Reviewer #1:\u00a0No commentsReviewer #2:\u00a0The paper addresses a general methodology for composing models, combining the energy-based bond graph approach with semantics-based annotations. This approach is proposed for biological / physiological models and ensures that the composite model is physically plausible. The technique is applied to automated model composition using a model of human arterial circulation.The contributions of the paper are very interesting and promising. The paper is written in a comprehensive manner and it has technical soundness. The references are appropriate. All in all, this is a very good paper.Some minor issues:1. It would be interesting for the readers to discuss some aspects regarding the use of entire arterial network . Which are the difficulties and problems that could arise when we deal with the cerebral system?2. The overall model is based on lumped-parameter models. There are some losses when we use this kind of model instead of PDEs description? Is there a workable solution with bond graphs for biological systems modelled via PDEs?**********Have the authors made all data and (if applicable) computational code underlying the findings in their manuscript fully available?PLOS Data policy requires authors to make all data and code underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data and code should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data or code \u2014e.g. 
participant privacy or use of data from a third party\u2014those must be specified.The Reviewer #1:\u00a0YesReviewer #2:\u00a0None**********what does this mean?). If published, this will include your full peer review and any attached files.PLOS authors have the option to publish the peer review history of their article digital diagnostic tool, Data Requirements:http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.1001908#s5.Please note that, as a condition of publication, PLOS' data policy requires that you make available all data used to draw the conclusions outlined in your manuscript. Data must be deposited in an appropriate repository, included within the body of the manuscript, or uploaded as supporting information. This includes all numerical values that were used to generate graphs, histograms etc.. For an example in PLOS Biology see here: Reproducibility:https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocolsTo enhance the reproducibility of your results, we recommend that you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. Additionally, PLOS ONE offers an option to publish peer-reviewed clinical study protocols. Read more information on sharing protocols at References:Review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript.If you need to cite a retracted article, indicate the article\u2019s retracted status in the References list and also include a citation and full reference for the retraction notice. 
25 Apr 2021Attachmentreviewers response letter.pdfSubmitted filename: Click here for additional data file. 27 Apr 2021Dear Shahidi,We are pleased to inform you that your manuscript 'Hierarchical semantic composition of biosimulation models using bond graphs' has been provisionally accepted for publication in PLOS Computational Biology.Before your manuscript can be formally accepted you will need to complete some formatting changes, which you will receive in a follow up email. A member of our team will be in touch with a set of requests.Please note that your manuscript will not be scheduled for publication until you have made the required changes, so a swift response is appreciated.IMPORTANT: The editorial review process is now complete. PLOS will only permit corrections to spelling, formatting or significant scientific errors from this point onwards. Requests for major changes, or any which affect the scientific understanding of your work, will cause delays to the publication date of your manuscript.Should you, your institution's press office or the journal office choose to press release your paper, you will automatically be opted out of early publication. We ask that you notify us now if you or your institution is planning to press release the article. All press must be co-ordinated with PLOS.Thank you again for supporting Open Access publishing; we are looking forward to publishing your work in PLOS Computational Biology.\u00a0Best regards,Daniel A BeardDeputy EditorPLOS Computational BiologyDaniel BeardDeputy EditorPLOS Computational Biology*********************************************************** 10 May 2021PCOMPBIOL-D-21-00472R1 Hierarchical semantic composition of biosimulation models using bond graphsDear Dr Shahidi,I am pleased to inform you that your manuscript has been formally accepted for publication in PLOS Computational Biology. 
Your manuscript is now with our production department and you will be notified of the publication date in due course.The corresponding author will soon be receiving a typeset proof for review, to ensure errors have not been introduced during production. Please review the PDF proof of your manuscript carefully, as this is the last chance to correct any errors. Please note that major changes, or those which affect the scientific understanding of the work, will likely cause delays to the publication date of your manuscript. Soon after your final files are uploaded, unless you have opted out, the early version of your manuscript will be published online. The date of the early version will be your article's publication date. The final article will be published to the same URL, and all versions of the paper will be accessible to readers.Thank you again for supporting PLOS Computational Biology and open-access publishing. We are looking forward to publishing your work! With kind regards,Andrea Szaboploscompbiol@plos.org | Phone +44 (0) 1223-442824 | ploscompbiol.org | @PLOSCompBiolPLOS Computational Biology | Carlyle House, Carlyle Road, Cambridge CB4 3DN | United Kingdom"} +{"text": "It is widely acknowledged that the construction of large-scale dynamic models in systems biology requires complex modelling problems to be broken up into more manageable pieces. To this end, both modelling and software frameworks are required to enable modular modelling. While there has been consistent progress in the development of software tools to enhance model reusability, there has been a relative lack of consideration for how underlying biophysical principles can be applied to this space. Bond graphs combine the aspects of both modularity and physics-based modelling. In this paper, we argue that bond graphs are compatible with recent developments in modularity and abstraction in systems biology, and are thus a desirable framework for constructing large-scale models. 
We use two examples to illustrate the utility of bond graphs in this context: a model of a mitogen-activated protein kinase (MAPK) cascade to illustrate the reusability of modules and a model of glycolysis to illustrate the ability to modify the model granularity. The biochemistry within a cell is complex, being composed of numerous biomolecules and reactions. In order to develop fully detailed mathematical models of cells, smaller submodels need to be constructed and connected together. Software and standards can assist in this endeavour, but challenges remain in ensuring that submodels are both consistent with each other and consistent with the fundamental conservation laws of physics. In this paper, we propose a new approach using bond graphs from engineering. In this approach, connections between models are defined using physical conservation laws. We show that this approach is compatible with current software approaches in the field, and can therefore be readily used to incorporate physical consistency into existing model integration methodologies. We illustrate the utility of this approach in streamlining the development of models for a signalling network (the MAPK cascade) and a metabolic network (the glycolysis pathway). The advantage of this approach is that models can be developed in a scalable manner while also ensuring consistency with the laws of physics, enhancing the range of data available to train models. This approach can be used to quickly construct detailed and accurate models of cells, facilitating future advances in biotechnology and personalised medicine. Over the past few decades, advances in both data generation and computational resources have enabled the construction of large-scale kinetic models in systems biology, including whole-cell models that represent every known biomolecule in the cell . An accuMycoplasma genitalium (blue variables) and molar flux v [mol/s] (green variables). 
Since \u03bc and v multiply to give power P [J/s], each connection transfers energy between components. In addition, separate nodes (\u25cf and \u25bc) are used to model mass and energy conservation laws inherent within these systems, discussed further below.The bond graph representation in \u03bc. In dilute systems at constant temperature and pressure, this quantity is related to abundance x byx [mol] is the amount of the species, K [mol\u22121] is the thermodynamic parameter for that species, R = 8.314 JK\u22121mol\u22121 is the ideal gas constant and T [K] is the absolute temperature. The parameter K is related to the standard free energy of the species; one can also write c [M] is the concentration of species, V [L] is the volume of the compartment and \u03bc0 [J/mol] is the standard chemical potential taken at a concentration of c0 . By equating Eqs K is related to \u03bc0 through the equationEvery component (node) within the system contains its own independent set of equations and parameters. Each chemical species (open circles \u25ef in RTln(Kx)where x v and the thermodynamic potentials. For example, the thermodynamic Marcelin-de Donder equation represents reversible mass action kinetics ), Rb0 (binding constant of the substrate [dimensionless]), Rb1 (binding constant of the product [dimensionless]) and e0 [Enzyme-catalysed reactions can be described by rate laws . The sime [mol]) . Thus, ae [mol]) , 48.In some cases, the full dynamics of the enzymatic reaction need to be considered. The advantage of a modular representation is that groups of reactions can be encapsulated into a model component. The diagram in As seen in the above examples, parts of a module can be exposed by leaving open one end of a connection, which imposes a boundary condition on the model, allowing it to be connected to an external component. This is analogous to leaving ports open in electrical circuits. 
This kind of modularity provides tools for managing model complexity: generic modules are easily replicated and reused for different reactions that use the same mechanism, and the internal details of complex enzymatic mechanisms can be hidden. We now illustrate these ideas through the modular development of a MAPK signalling cascade model, and then by considering the glycolytic metabolic pathway modelled using different reaction rate laws.The MAPK cascades are a family of biochemical signalling pathways that regulate important biological processes including growth, proliferation, migration and differentiation . These sXenopus oocytes, a key regulator of maturation in these cells A letter containing a detailed list of your responses to all review comments, and a description of the changes you have made in the manuscript. Please note while forming your response, if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out[2] Two versions of the revised manuscript: one with either highlights or tracked changes denoting where the text has been changed; the other a clean version (uploaded as the manuscript file).Important additional instructions are given below your reviewer comments.Thank you again for your submission to our journal. We hope that our editorial process has been constructive so far, and we welcome your feedback at any time. Please don't hesitate to contact us if you have any questions or comments.Sincerely,Anders WallqvistAssociate EditorPLOS Computational BiologyDaniel BeardDeputy EditorPLOS Computational Biology***********************ploscompbiol@plos.org immediately:A link appears below if there are any accompanying review attachments. 
If you believe any reviews to be missing, please contact [LINK]Reviewer's Responses to QuestionsComments to the Authors:Please note here if the review is uploaded as an attachment.Reviewer #1:\u00a0This is a manuscript that essentially summarizes the bond graph approach of network thermodynamics and proposes it to be a suitable method to modularize the development of large-scale models (such as whole-cell models). Overall I find the manuscript to be very clear and accessible and overall I think this will interest the readership of the journal.My only major concern regards the glycolysis example used, where they chose to keep AMP, ADP, ATP, NAD and NADH constant. These conserved species are recycled and are too important to ignore by keeping them constant. It would have been much more interesting if they had included their recycling reactions , but it would be a whole lot more interesting. Importantly, doing that would allow a better energetic analysis of glycolysis...There are a number of minor points that nevertheless I think they could easily address and which would make the manuscript much more useful for readers:1- ThermodynamicsTo be fair to readers it is important to be accurate in citing prior work. 
In this regard I bring attention to your sentence in the Introduction:\"While the whole-cell modelling community has emphasised the need to use physically measurable parameters, it is only recently that the importance of thermodynamics in these models has been acknowledged [5].\"the importance of thermodynamics in this type of models has been invoked prior to this by several authors, such as your references [32] and [33], Henry et al (2006), Soh et al (2012), Lubitz et al (2010) and very much in this large-scale kinetic model context by Stanford et al (2013) .2- Functional modularity: I thought it could be pertinent to mention the work of Del Vecchio et al (2008) in this context.3- Figure 2 is very useful, but it would be better to make 2B actually use the SBGN standard. I note that it would require only one change to make it compliant: the arrows should not point into the reaction nodes, only into the species nodes. Then you couldsay that this representation follows the standard (rather than it is similar to).4- The MAPK are not only significant clinically for being involved in many cancers, but also in inflammation (P38 pathway).5- Fig. 4, I note that in panel B you represent the phosphatase in the direction which seems to be the inverse of the biological flow. Normally phosphatases go from XP -> X rather than X -> XP Genome-scale thermodynamic analysis of Escherichia coli metabolism. Biophys J. 90(4):1453-61.- Soh KC, Miskovic L, Hatzimanikatis V. (2012) From network models to network responses: integration of thermodynamic and kinetic properties of yeast genome-scale metabolic networks. FEMS Yeast Res. 12(2):129-43.- Lubitz T, Schulz M, Klipp E, Liebermeister W (2010) Parameter balancing for kinetic models of cell metabolism. 
J Phys Chem B 114: 16298-16303.- Stanford NJ, Lubitz T, Smallbone K, Klipp E, Mendes P, Liebermeister W (2013) Systematic construction of kinetic models from genome-scale metabolic networks PLoS ONE 8:e79195- Del Vecchio D,Ninfa AJ, Sontag ED (2008) Modular cell biology: retroactivity and insulation. Mol Syst Biol 4:161.Reviewer #2:\u00a0The manuscript by Michael Pan et al describes a physics-based approach for modularity in biological modeling. They describe modular construction of models based on bond graphs.The first part of the manuscript described bond graphs, defining reaction network (RN) through thermodynamic (TD) notations, expressing all kinetic rate constants through TD parameters. Mathematically it means introducing a larger number of parameters that are energy-based and independent, contrary to dependent parameters in a reaction network. Visually it corresponds to decomposing the RN into 4-partite graph . This part is clearly written and very useful.http://co.mbine.org/standards/sbml/level-3/version-1/comp. Although SBML hierarchical package is too technical and not written for a general audience, it introduces the same concepts of black-box versus white-box encapsulation and \u201cports\u201d as interfaces between a module and its containing model. A modular SBML-based approach is implemented in iBioSim (https://async.ece.utah.edu/tools/ibiosim/). Thus, I would appreciate if authors could address the following comments:The second part describes using bond graphs to construct and combine modules into larger biomedical models. This is a nice and comprehensive description of modular approach, but the authors have not compared their approach with SBML hierarchical model composition package. It is described at 1. It would be useful to have more input on what in Figures 4, 6 and 7 is different from a regular modular composition of RNs using ports.2. 
When reading manuscript beyond page 13, energy conservation is not mentioned anymore and bond graphs look as RN defined with new TD parameters. Is it true?3. Is all TD machinery internal for each module and not exposed? Can we \u201cmix and match\u201d by defining some modules in terms of TD and some in terms of RN?4. Can benchmarking rate laws and effects of perturbations be computed in a regular RN simulator after a simple change of parameters?5. Compare their modular approach with SBML hierarchical composition.6. Discuss whether bond graphs can be described as an extension to SBML standard.Reviewer #3:\u00a0This is a nicely written paper on the use of bond-graphs to build composable systems biology models. I only have a couple of minor points to make and for the authors to clarify in the paper:1. I assume the authors know that SBML supports a white/black box approach to composition in the hierarchical package:https://pubmed.ncbi.nlm.nih.gov/26528566/This should be cited, perhaps on page 62. One important question is whether bond-graph representations (I think they are stored as json files?) could be converted to SBML given the ubiquity of simulators for this format. One could in fact imagine the bondgraph description being a higher level of description that can be converted to a simpler SBML file where the bond graph representation is the model that is edited and improved. The SBML this acts as an intermediary. Or do the authors envisage adding new annotations to SBML (or a package) to extend SBML to store bondgraph models?3. One the major disadvantages of the bondgraph approach is its verbosity . The same problem arises with Vivarium but this tool has also made the fatal mistake of mixing specification with implementation. Although perhaps not applicable to this manuscript, software tooling will be a major impediment to the widespread use of bond graphs. 
I looked at bondgraphstools and there might be a case of adding a higher layer of abstraction because currently is seems quite laborious to build even small models, I also wonder whether in its current state it could scale to whole-cell models (the same applies to vivarium) although the composition feature would help. There might need to be additional abstraction levels.3. Page 10, the authors mention K as the thermodynamic parameter, what is that exactly? I see that they cite the appendix for further info but a mention in the main text as to what K is would be useful.https://arxiv.org/abs/0710.5195, MAPK Cascades as Feedback Amplifiers) and has since been brought up since by other authors.4. Page 21, second line where they site ref 15 with respect to linearization due to feedback. I believe this was first mention by Sauro and Ingalls computational code underlying the findings in their manuscript fully available?The Reviewer #1:\u00a0YesReviewer #2:\u00a0YesReviewer #3:\u00a0Yes********** 27 Sep 2021AttachmentResponse combined.pdfSubmitted filename: Click here for additional data file. 30 Sep 2021Dear Dr.
Pan,We are pleased to inform you that your manuscript 'Modular assembly of dynamic models in systems biology' has been provisionally accepted for publication in PLOS Computational Biology.Before your manuscript can be formally accepted you will need to complete some formatting changes, which you will receive in a follow up email. A member of our team will be in touch with a set of requests.Please note that your manuscript will not be scheduled for publication until you have made the required changes, so a swift response is appreciated.IMPORTANT: The editorial review process is now complete. PLOS will only permit corrections to spelling, formatting or significant scientific errors from this point onwards. Requests for major changes, or any which affect the scientific understanding of your work, will cause delays to the publication date of your manuscript.Should you, your institution's press office or the journal office choose to press release your paper, you will automatically be opted out of early publication. We ask that you notify us now if you or your institution is planning to press release the article. All press must be co-ordinated with PLOS.Thank you again for supporting Open Access publishing; we are looking forward to publishing your work in PLOS Computational Biology.\u00a0Best regards,Anders WallqvistAssociate EditorPLOS Computational BiologyDaniel BeardDeputy EditorPLOS Computational Biology*********************************************************** 10 Oct 2021PCOMPBIOL-D-21-01387R1 Modular assembly of dynamic models in systems biologyDear Dr Pan,I am pleased to inform you that your manuscript has been formally accepted for publication in PLOS Computational Biology. Your manuscript is now with our production department and you will be notified of the publication date in due course.The corresponding author will soon be receiving a typeset proof for review, to ensure errors have not been introduced during production. 
Please review the PDF proof of your manuscript carefully, as this is the last chance to correct any errors. Please note that major changes, or those which affect the scientific understanding of the work, will likely cause delays to the publication date of your manuscript. Soon after your final files are uploaded, unless you have opted out, the early version of your manuscript will be published online. The date of the early version will be your article's publication date. The final article will be published to the same URL, and all versions of the paper will be accessible to readers.Thank you again for supporting PLOS Computational Biology and open-access publishing. We are looking forward to publishing your work! With kind regards,Zsofia Freundploscompbiol@plos.org | Phone +44 (0) 1223-442824 | ploscompbiol.org | @PLOSCompBiolPLOS Computational Biology | Carlyle House, Carlyle Road, Cambridge CB4 3DN | United Kingdom"} +{"text": "This study assessed the determinants of early childbearing among women by disability status.the study used the 2016 Uganda demographic and health survey data, analyzing a weighted sample of 18,506 women of reproductive age. We used frequency distributions to describe respondents\u00b4 characteristics, chi-squared tests and multivariable logistic regressions to establish the determinants of early childbearing.early childbearing is higher among women with disabilities. The determinants of early childbearing among women with disabilities were marital status, religion, education, and occupation. The odds of early childbearing were higher among ever married compared with never married women ; women who engaged in sales and services compared with those that did not work ; and smaller religious faiths compared with protestants . The odds reduced with advancement in education. 
Region, attitude towards violence and knowledge of the ovulatory cycle, though associated with early childbearing for nondisabled women, were not significant for women with disabilities. The lack of formal education and early marriage increased the odds of early childbearing for all women. Efforts to address early childbearing, especially for women with disabilities, should consider advancing women\u00b4s education, together with preventive measures targeting women of smaller religious faiths that stress the dangers of early sex and marriage. These measures should target women with disabilities irrespective of attitudes towards violence, knowledge concerning fertility, and region. According to the World Health Organization (WHO), about 15% of the world\u00b4s population has disabilities. Teenage pregnancies significantly contribute to maternal and child morbidity and mortality. Up to 3.9 million adolescents in developing countries undergo unsafe abortions annually. Factors associated with childbearing among young females in general include the level of education, wealth, family structure, exposure to media, alcohol and substance abuse, low self-esteem, sexual coercion, curiosity, early marriage and limited access to contraceptives. Literature on the determinants of early childbearing among women with disabilities in Uganda is scarce. Recent studies on disabilities in Uganda focused on different subjects and smaller sections of the population, for instance access to HIV services by blind persons. Study design: we used the 2016 Uganda Demographic and Health Survey (UDHS) data. The UDHS is based on a cross-sectional, nationally representative survey that employed a stratified two-stage cluster sampling design.
The survey is representative of women aged 15-49 years. Study setting: the Uganda demographic and health survey covered the whole country, to provide the data needed to monitor and evaluate population, health, and nutrition programs on a regular basis. The 2016 UDHS provides a comprehensive overview of population, maternal and child health issues. Study population: the study included all women of reproductive age in the women\u00b4s/individual recode or dataset. Men and household heads that did not fit the inclusion criteria were excluded from the analysis. Data and sample derivation (sample size estimation): datasets used for this study were obtained with permission from the Demographic and Health Survey (DHS) program website. The weighted sample was 18,506 women. Data collection: collection of the data that constitute the datasets used took place between June and December 2016 by the Uganda Bureau of Statistics (UBOS) in partnership with the Ministry of Health, Uganda. Fertility and maternity health, including assistance at delivery, were among the data collected. The data were collected using the woman and household questionnaires. The individual and people\u00b4s recodes were developed using data generated by the woman and household questionnaires respectively. For further details on the data collection please see the UDHS report. Measures of the outcome variable: the individual recode included the age of the respondent at first birth and whether the respondent was pregnant at the time of interview. Early childbearing in this case meant conception or delivery before 18 years. Those that had delivered or were pregnant before 18 years were coded as 1 \u201cyes\u201d for early childbearing. The rest of the respondents were coded as 0 \u201cno\u201d. Measures of explanatory variables: WHO defines disability as experiencing a lot of difficulty or not functioning in the domains of sight, hearing, speech, memory, walking, and personal care.
Statistical analyses: data were analyzed using Stata version 15. Frequency distributions were used to describe the characteristics of the respondents. We used Pearson's chi-squared (\u03c72) tests to examine the differences between the outcome and explanatory variables. For multivariable analysis, we fit separate binary logistic regression models for women with disabilities and for nondisabled women. The level of statistical significance was set at p<0.05. The threshold for inclusion of explanatory variables in the multivariable analysis was set at p=0.2. Variables to be included in the models were tested for multi-collinearity using Pearson\u00b4s correlation. Variables with issues of multi-collinearity (in this case age) were excluded from the models. We used the link test for model specification. Multivariable analysis results are presented in the form of odds ratios (OR) with 95% confidence intervals, p values inclusive. Ethical considerations: the ICF Institutional Review Board (IRB) reviewed and approved the 2016 Uganda Demographic and Health Survey. The ORC MACRO, ICF Macro, and ICF IRBs complied with the United States Department of Health and Human Services regulations for the protection of human research subjects (45 CFR 46). Informed consent was obtained from all respondents. Participation was voluntary and anonymity was maintained by the exclusion of participants\u00b4 identifiers from the dataset. Determinants of early pregnancy by disability status: separate models were fit for women with disabilities and nondisabled women. Confounders such as marital status, level of education, wealth, and residence were included in the models. Variables that had a p-value of 0.2 or less were included in the models. Age was excluded on grounds of its high correlation with marital status. Marital status was retained owing to its strong association with childbearing.
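The multivariable results above are reported as odds ratios with 95% confidence intervals. As a minimal illustration of how an OR and its Wald confidence interval are obtained from a 2x2 table (the counts below are hypothetical, not the survey data):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and approximate 95% CI from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    # Standard error of log(OR) via the Woolf formula
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts for ever-married vs never-married women
or_, lo, hi = odds_ratio_ci(120, 80, 40, 160)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

In the study itself these estimates come from multivariable logistic regression in Stata, which additionally adjusts for the listed confounders; the sketch only shows the unadjusted calculation.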
For nondisabled women, early childbearing was associated with all explanatory factors except residence, wealth index, and ability to refuse sex if the partner has other women. The directions of the results for marital status and level of education were similar to those for women with disabilities. The odds of early childbearing were lower in the Northern and Western regions compared with the Central region; among women in professional or formal employment; and among those that knew the ovulatory cycle. The odds of early childbearing were higher among Muslims compared with Protestants, and among women with attitudes that were supportive of spousal violence. The objective of this study was to assess the determinants of early childbearing among women by disability status, with emphasis on women with disabilities. Early childbearing is higher among women with disabilities. The determinants of early childbearing for women with disabilities were marital status, religion, education and occupation. These factors were also significant for nondisabled women, although there were variations regarding religion and occupation. For women with disabilities, early childbearing was not associated with the region of residence, attitude towards intimate partner violence and knowledge of the ovulatory cycle, factors which were significant for nondisabled women. Marital status is a strong determinant of early childbearing. Early marriage is a risk factor for early childbearing since in the Ugandan setting immediate conception after marriage is expected. Women\u00b4s occupation was a significant determinant of early childbearing for both categories of women but with variations by categories. Women with disabilities in sales and services, in contrast to nondisabled women that were not working, had higher odds of early childbearing.
For women with disabilities, the findings imply that engagement in work outside the domestic sphere, in this case sales and services, which often involves mobility and interaction with a diversity of persons, presents a higher risk for early childbearing. The informal sector is associated with low socio-economic status that increases the risk of early childbearing. Our results show that religion is a significant determinant of early childbearing [39]. Limitations: our study has some limitations. It is not possible to establish causal relations, or the order of influence (what happens before or after), the data being cross-sectional. Examples include marital status and occupation. It is not possible to tell whether women were already in the relevant occupations before or after their first birth or pregnancy. Women\u00b4s attitudes towards teen pre-marital sexual intercourse, inability to resist sexual temptation, and curiosity [16-18] were not captured. Early childbearing is still a major problem especially among women with disabilities. The determinants of early childbearing for women with disabilities are marital status, women\u00b4s level of education, and occupation. Women of low social status require special attention.
Efforts to address early childbearing, especially for women with disabilities, should consider advancing women\u00b4s education and sensitization of smaller religious faiths, stressing the dangers of early sex, marriages and childbearing and their prevention, and should target women with disabilities irrespective of attitudes towards violence, knowledge concerning fertility and region of residence. Factors associated with early childbearing have been investigated in Uganda; some of the factors identified include low levels or lack of formal education, wealth, early marriages, family structure, exposure to media, low self-esteem, inability to resist sexual temptation, and sexual coercion. This study assesses determinants of early childbearing by disability status. The determinants of early childbearing for women with disabilities are marital status, women\u00b4s level of education, and occupation. Unlike for nondisabled women, region, attitudes towards violence and knowledge concerning fertility did not predict early childbearing for women with disabilities; these aspects had not been examined by previous studies addressing women with disabilities in Uganda."} +{"text": "Subacute thyroiditis can rarely be associated with autoimmune thyroid disorders. These include Graves' disease, which is characterized by the presence of a highly specific antibody known as thyroid-stimulating hormone (TSH) receptor antibody (TRAb). There are three types of TRAb: TSH receptor stimulating antibody (TSAb), which stimulates the TSH receptor causing Graves' disease; TSH receptor blocking antibody (TBAb), which blocks the TSH receptor causing hypothyroidism; and a neutralizing antibody, which does not alter thyroid function.
There are two assays used\u00a0to check the TRAb: the thyroid-stimulating immunoglobulin (TSI) assay and the TSH receptor-binding inhibitor immunoglobulin (TBII) assay, of which the TSI assay measures the stimulating antibody, which is specific for Graves' disease.\u00a0Although autoimmune thyroid disorders can rarely occur following subacute thyroiditis,\u00a0their clinical presentation is usually compatible with the type of antibody detected in the patient\u2019s serum. We present a unique case of a 44-year-old patient who presented with subacute thyroiditis\u00a0followed by the development of persistent hypothyroidism even in the presence of elevated Graves' disease-specific TSI and TRAb. Graves\u2019 disease is the most common cause of hyperthyroidism, in which antibodies directed against the thyroid-stimulating hormone (TSH) receptor cause continuous stimulation of the thyroid gland. The antibody measured is known as TSH receptor antibody (TRAb), and there are two methods of measuring the TRAb: the thyroid-stimulating immunoglobulin (TSI) assay and the TSH receptor-binding inhibitor immunoglobulin (TBII) assay. The TSI assay has a very high sensitivity (96%) and specificity (99%) for the diagnosis of Graves\u2019 disease. A 44-year-old female initially presented to her primary care physician in October 2018 with complaints of anterior neck pain, palpitations, increased sweating, and fatigue which started two weeks prior to presentation.\u00a0On clinical examination, she had signs of thyrotoxicosis along with anterior neck swelling and tenderness. Laboratory evaluation confirmed hyperthyroidism with a suppressed TSH 0.022 mIU/L (0.4-4.5 mIU/L), elevated free thyroxine (FT4) 2.34 ng/dl (0.8-1.8 ng/dl), and elevated total triiodothyronine (T3) 247 ng/dl (71-180 ng/dl). Erythrocyte sedimentation rate (ESR) was also elevated at 66 mm/hour (0-25 mm/hour).
In subacute thyroiditis, thyroid hormone levels are elevated, suggesting thyroid follicular cell damage. A radioactive iodine uptake (RAIU) test is useful in such cases, as it helps to differentiate between endogenous hyperthyroidism, which is typically associated with high 24-hour iodine uptake, and thyroiditis, which demonstrates a low or absent 24-hour iodine uptake. Our patient presented with anterior neck pain and tenderness, and an elevated ESR, suggesting subacute thyroiditis as the etiology of hyperthyroidism. An RAIU test would have provided good supporting evidence for thyroiditis. However, given the typical clinical presentation for subacute thyroiditis, our patient was initiated on prednisone once the lab work confirmed the hyperthyroidism. Although subacute thyroiditis is a self-limiting disease of viral etiology, it can be associated with autoimmune antibodies. The antibody that is detectable in Graves\u2019 disease is known as TRAb, and it has also been reported following thyroiditis. Iitaka et al. reviewed 1,697 patients with subacute thyroiditis and found 38 patients positive for TRAb, of whom the patients who were positive for TSAb developed hyperthyroidism and the patients with TBAb developed hypothyroidism. Likewise, the antibodies that are detectable in Hashimoto\u2019s thyroiditis are serum TPOAb and/or serum thyroglobulin antibody (TgAb), and both antibodies can be present at low titers in thyroiditis. Nishihara et al. studied TgAb and TPOAb in 40 patients in the early phase of subacute thyroiditis. Autoimmune response to the damage of the thyroid follicular cells and immune complex formation is considered a major mechanism of the development of autoimmune thyroid antibodies in thyroiditis. The presence of the TRAb or TSI antibodies could also be a predictor of Graves\u2019 disease in the future.
Even though TRAb was persistently positive and TPOAb was negative, the unexplained hypothyroidism in our patient is most likely due to the presence of TBAb, which could not be tested for due to its low prevalence and the unavailability of the test in the U.S. Autoimmune thyroid disorders have been rarely reported following subacute thyroiditis, and when they occur, their clinical presentation is usually compatible with the type of antibody detected in the patient\u2019s serum, unlike our unique case in which the patient had a positive TSI antibody and a persistently positive TRAb but still developed persistent hypothyroidism instead of hyperthyroidism. Our case emphasizes the need for long-term monitoring due to the risk of developing Graves' disease in the future. Our case also emphasizes the need to develop TBAb bioassays to help understand the reason for such an unusual presentation."} +{"text": "Patients should be counseled to use suppositories vaginally instead of rectally. Suppositories are initiated nightly for 2 to 4\u00a0weeks and decreased to every other night, or 2 to 3 times per week, depending on disease control. Patients should be evaluated monthly, and suppositories should be tapered to the lowest dose/frequency that maintains disease remission. When prescribing suppositories, we recommend candidal prophylaxis, with topical antifungals 2 to 3 times per week or oral fluconazole 150 to 200\u00a0mg weekly. Dilators should be inserted 3 times weekly to prevent adhesions. Dilators can be ordered online and come in sets of varying sizes. Patients should use the largest dilator that is comfortable, and vaginal moisturizers and lubricants can be utilized. Estrogen deficiency should be considered in postmenopausal patients, as this can contribute to vaginal inflammation. Local estrogen therapy includes a ring, tablet, or cream.
Generally, vaginal cream tends to cause the least pain with insertion and is used nightly for 2\u00a0weeks, with a decrease to maintenance dosing 1 to 3 times per week. None disclosed."} +{"text": "In this paper, the design of a 2-dof (degrees of freedom) rehabilitation robot for upper limbs driven by pneumatic muscle actuators is presented. The paper covers the different aspects of the mechanical design and the control system, and the results of the first experimental tests. The robot prototype has been constructed, and at this preliminary step a position and trajectory control by fuzzy logic is implemented. The pneumatic muscle actuators used in this arm are designed and constructed by the authors\u2019 research group. The continuous growth of average lifespan in the world means more elderly people in the future. That is why more and more sanitary care, with a growth of health costs, is expected. This is the main reason that pushes the development of automated systems to apply medical therapies. The physical rehabilitation sector is very expensive because the main part of the therapy has to be performed with one-to-one attention from a therapist. Rehabilitation robots permit economizing on medical therapists, who apply the therapy on a person-to-person basis. On the other hand, robots are increasingly present in daily life, from robots for cleaning the house, to robots for garden care or self-driving vehicles, etc. As the number of applications useful in normal daily life grows, the need for integration in domestic environments and the need for safety in the interaction between man and machine grow as well.
In this context, in recent years, collaborative robots and soft robotics, which meet these needs not only in biomedical and industrial fields but also in the field of exploration and cooperative human assistance, have received a lot of attention [5,6,7,8]. In the category of machines with high safety requirements, robots for motor rehabilitation and aid are certainly included. There are two broad categories of active rehabilitation machines based on the way of mechanical interfacing with humans. There are end-effector type machines, which work by being in contact only with the extremity of the limb to be treated [15,16,17], and exoskeleton-type machines. Bioinspired machines are more easily placed in a domestic context and are more easily accepted from a psychological point of view. The exoskeleton-type machines are certainly bioinspired, whereas the end-effector ones often derive from the adaptation of industrial robots. To ensure safety, these robots must be equipped with systems to introduce compliance. This can also be obtained through control, but it is not always possible, for example, when the transmissions present do not allow backdriveability. The exoskeleton-type machines also allow control of the individual joints and guide the limb with precision in complex movements. The end-effector type machine is easier to use but may have critical issues related to the reaching of singularity configurations of the human limb. Among the kinematic architectures for end-effector type systems, there are widespread cable-based projects that allow quite easy control but are bulky and difficult to transport. The present work in particular deals with a robot for upper limb rehabilitation. Robots for motor rehabilitation of the upper limb have been studied and used for some time. As for the actuators, they can play a fundamental role in safety.
In general, but in particular for robots for rehabilitation or upper limb aid, by far the most used are electric actuators [38,39,40]. In the present context, pneumatic muscle actuators, although used very little for rehabilitation devices in general, and in particular for devices dedicated to the upper limbs, are very interesting, given their peculiar characteristics particularly suitable for these devices [61,62,63]. The robots for upper limb rehabilitation have different characteristics, especially as to the exercises they allow. There is no standard on performance, and the various devices are distinguished not only by the architecture (exoskeleton or end effector) but also by the joints or movements they can handle. There are robots for the rehabilitation of the shoulder, the elbow, and other joints. Control is a key part of rehabilitation robots. First of all, the case in which the control must introduce compliance must be considered. Classic control strategies such as PID control are often used, which can work well in the case of passive patient protocols. Other control systems used are those based on sliding mode, mechanical impedance control or fuzzy logic. On the basis of all the literature analyzed, an activity was carried out, which is presented in this work, concerning the development of a robot for the rehabilitation of the upper limb for the treatment of the shoulder and elbow, with a kinematic architecture that can be seen both as an end-effector and as an exoskeleton type. In fact, the robot, although it is expected to have its own end-effector as its only connection point with the user\u2019s hand, has an anthropomorphic architecture with joints and segments homologous to those of the human limb. It is a device with two motorized degrees of freedom (D.O.F.), actuated by pneumatic muscles, and is particularly innovative from this point of view because the muscles used are of the Straight Fibres type, which has several advantages over McKibben muscles.
After the design phase, the robot was built. A control system based on fuzzy logic has been implemented and, at the moment, the isokinetic operating mode, with a passive patient, has been implemented. Some preliminary experimental tests, concerning step movements of the single joints, trajectory tracking of the single joints and trajectory tracking involving both joints at the same time, have been carried out and documented. The tests prove the validity of the project. As previously explained, the robot is designed for the rehabilitation of upper limbs. A survey that involved users and therapists in order to determine the desirable specifications for an upper limb motor rehabilitation machine resulted in a machine for therapies in the home environment and with characteristics that fall into four categories. In particular, the machine must be transportable and therefore light, with a small footprint on the ground (safety and usability). Therapies should primarily focus on the movements of normal Activities of Daily Living (ADL). From an analysis of the ADLs, the main movements involved are flexion-extension of the elbow, prono-supination of the forearm, and flexion-extension of the shoulder (movement and task). Other key features for a rehabilitation machine are safety (safety and usability) and user acceptability (safety and usability). It must be adaptable to a large number of users, able to record and monitor the user\u2019s performance (recording of performance), and have a user-friendly interface (safety and usability). Finally, it must be low-cost.
The acceptable cost should be EUR 5000. Therefore, the technical specifications that were considered for the robot design are: rehabilitation with movements in the sagittal plane, namely flexion and extension of the elbow and flexion and extension of the shoulder in a physiologically correct way, or movements that involve all joints at the same time; 2 modes of functioning: passive and active-constrained; good compliance for safety purposes; weight, not more than 400 N; cost, around EUR 5000; footprint, 600 \u00d7 800 mm2; friendly interface; good acceptability by the user. The kinematic specifications are: arm length L1: 435 mm; forearm length L2: 385 mm; shoulder excursion \u2212110\u00b0 < \u03b81 < 90\u00b0; elbow excursion 0\u00b0 < \u03b82 < 160\u00b0; direction of force on the end-effector 0\u00b0 < \u03b8F < 360\u00b0. Regarding technical specifications, the conceptual phase proposed an anthropomorphic system that operates in a position parallel to the user\u2019s arm, with 2 dofs, one for the shoulder and one for the elbow. Moreover, the machine has to be able to apply a force F = 20 N in any direction to the user\u2019s arm. The anthropomorphic structure gives better functionality to the robot, and the 2 dofs promise a better performance in physical rehabilitation if compared with 1 dof. As for control, this must guarantee stable and extremely robust dynamic functioning of the machine with respect to the uncertainties of contacts in interactions with humans, therapists, or users. It must modulate the response to mechanical perturbations and ensure a gentle and soft evolution both for safety reasons and good therapeutic practice. For the determination of the working volume, the direct kinematic model is considered.
Using the Denavit\u2013Hartenberg notation, the transformation matrix between the reference frame of the end link and that of the base is given by the product of all the single transformation matrices between link i and link i-1. By this matrix, it is possible to determine the coordinates, with respect to the base, of any point, given its coordinates with respect to the local reference of the end link, for any pair of joint angle values. Using the position of the end of link 2 in local coordinates (p = [0 0 0 1]T) and by varying the angles of the joints \u03b81 and \u03b82 in the respective definition domains, the working volume of the robot is determined, obtaining 1435 different positions. To determine the torques required at the joints, a kineto-static model has been considered. This was obtained by the Eulerian approach based on free body diagrams. The parameters of the considered model are: T1 = torque required on joint 1; T2 = torque required on joint 2; m1 = mass of link 1 (arm) = 2 kg; m2 = mass of link 2 (forearm) = 0.45 kg; mj2 = mass of joint 2 = 2 kg; mh = mass of handle = 0.1 kg. A complete multivariate investigation was performed, by this model, on the parameters \u03b81, \u03b82 and F according to the values indicated above, and it was possible to determine the trends of the torque required at the joints as functions of the joint positions. About the actuation of the joints, pneumatic muscles were chosen in an agonist\u2013antagonist arrangement. A pulley-toothed belt transmission is used for this purpose. As for the risk of transmission slippage, this is covered by the use of RPP type belts with a parabolic profile of the teeth, suitable for the transmission of high forces, and by an ever-present tensioning by the agonist\u2013antagonist action of the actuators. As for strength, the belts chosen have fiberglass reinforcements with a protective nylon filter.
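The working-volume sweep described here can be sketched as follows. This is a minimal reconstruction, not the authors' code: it assumes a planar two-link model with L1 = 435 mm and L2 = 385 mm from the specifications, joint domains of roughly -110\u00b0 to 90\u00b0 for the shoulder and 0\u00b0 to 160\u00b0 for the elbow, and a 5\u00b0 sampling step chosen only for illustration (the paper's own sampling, which yields 1435 positions, may differ).

```python
import math

# Link lengths in mm, from the design specifications (L1 = arm, L2 = forearm)
L1, L2 = 435.0, 385.0

def forward_kinematics(theta1_deg, theta2_deg):
    """Planar 2-dof direct kinematics: end-effector position in the base frame."""
    t1 = math.radians(theta1_deg)
    t12 = t1 + math.radians(theta2_deg)
    x = L1 * math.cos(t1) + L2 * math.cos(t12)
    y = L1 * math.sin(t1) + L2 * math.sin(t12)
    return x, y

# Sweep both joint domains to sample the working volume
workspace = [forward_kinematics(t1, t2)
             for t1 in range(-110, 91, 5)   # assumed shoulder domain, degrees
             for t2 in range(0, 161, 5)]    # assumed elbow domain, degrees

print(len(workspace), "sampled positions")
```

Plotting the sampled (x, y) pairs would give the reachable region of the end-effector in the sagittal plane.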
With regard to the requirements of the actuators, force and linear range, once the necessary torques for the joints have been determined by the kineto-static model, the diameter of the transmission pulleys must be chosen in order to determine the specifications of the pneumatic muscles. The pulley must be chosen considering two conflicting needs: as the diameter increases, the forces required from the muscles decrease but their strokes increase. Therefore, the dimensioning of the muscles together with the transmission is a single process. As already introduced, it was decided to use the straight-fibres pneumatic muscle actuators designed and manufactured by the authors [96]. The dimensioning of the actuators has been addressed by means of the procedure proposed by the authors. Two couples of muscles drive each of the two joints of the robot, shoulder and elbow, with a transmission pulley diameter for both joints of 63.66 mm. Other aspects of the design are explained in the following. The robot is installed firmly on a steel vertical rod, and the height is fixed considering that the patient will be seated in an armchair for the therapy. The articulation between the robot links is carried out by means of a fork. The calculation of the fork was made using numerical modelling with the Ansys finite element code. The structure of the arm is made of a tubular element of aluminium. At one end, the tube is linked to the fork as part of the elbow joint, whereas at the other end, it is coupled with the cylinder as part of the shoulder joint. The 4 muscles that drive the elbow joint are placed on the arm. Each of these muscles is linked at one end with the belt, whereas the other end is linked to a plate fixed on the structure of the arm. Additionally, the structure of the forearm is an aluminium tube. At one end, the tube is linked to the fork as part of the elbow joint, whereas at the other end, it has a handle made of a simple aluminium tube.
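The force/stroke trade-off that drives the pulley choice can be sketched numerically. This is a hedged illustration: the 63.66 mm pulley diameter comes from the text, while the 15 Nm joint torque and 160\u00b0 excursion below are assumed values chosen only to show the calculation.

```python
import math

def muscle_requirements(joint_torque_nm, pulley_diameter_mm, excursion_deg):
    """Required muscle force and stroke for a muscle pair driving a joint
    through a pulley-and-belt transmission (ideal, lossless transmission)."""
    r = pulley_diameter_mm / 2000.0           # pulley radius in metres
    force = joint_torque_nm / r               # tangential force the muscle must exert
    stroke = r * math.radians(excursion_deg)  # belt travel = muscle contraction needed
    return force, stroke * 1000.0             # force in N, stroke in mm

# Assumed example: 15 Nm at the joint, 63.66 mm pulley, 160 deg excursion
force, stroke = muscle_requirements(15.0, 63.66, 160.0)
print(f"force = {force:.0f} N, stroke = {stroke:.1f} mm")
```

Doubling the pulley diameter would halve the required force but double the required stroke, which is exactly the conflict mentioned above: muscle and transmission must be dimensioned together.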
The 4 muscles that drive the shoulder joint are placed on the fixed structure. Angular position transducers (potentiometers) are installed coaxially to the hinges of the joints. A picture of the execution of one joint is shown in . As said before, every joint is driven by two couples of muscles working in parallel: the agonist couple and the antagonist couple. Supply and exhaust of the muscles are provided by two-way, two-position high-frequency Pulse Width Modulation (PWM) driven digital valves manufactured by Matrix SpA. The digital valves are driven by a data acquisition board by National Instruments on a PC according to the scheme in . The control system can adjust the air mass entering the muscles on the basis of the feedback signals given by the rotation of each of the two joints, measured by a conductive plastic potentiometer. This is a precision potentiometer with an electric arc of 340 degrees, 10 k\u03a9 of electric resistance, and 2% linearity accuracy. The control strategy is planned considering the main characteristics of the pneumatic muscle: compliance and non-linear behaviour. Furthermore, the system presents non-linearities due to the presence of two links, which involve a dynamics depending on the current configuration of the system. Therefore, as a first step, a closed-loop position and trajectory control system based on fuzzy logic is implemented. To define this system, it is not important to know whether the relationship among internal pressure, contraction, and traction force is linear or not, but only the qualitative connection. It allows for the description of the qualitative behaviour of the controller by means of linguistic rules whose quantitative meaning is defined by the shape of the membership functions, using in-house-developed control software with a fuzzy routine in C. The PWM driving allows the conductance of the valves to be continuously regulated between zero and the fully open valve conductance.
Hence, the control can compute the duty cycle for the valves. For one couple of muscles, the control system computes a parameter in the range , used to drive the 2 valves. When the specified parameter value is negative, the exhaust valve is driven with a duty cycle equal to the absolute value of the specified parameter. Positive values drive the supply valve with a duty cycle equal to the specified parameter. For the other couple of muscles, in the antagonist position, the opposite applies: if, for one couple of muscles, the exhaust valve is driven, for the other one the supply valve is driven, and vice versa. With the described modalities, it is also possible to control the single joints simultaneously. For the command of trajectories within the working volume, it is necessary to define a trajectory and derive the motion laws in the joint space. In order to be able to carry out experimental tests with trajectory tracking, the inverse kinematic model of the developed device was considered. Considering the transformation matrix from the local reference of link 2 to the base and expressing the position of the end of link 2 with respect to the base, we have: (4) xF = L1 cos\u03b81 + L2 cos(\u03b81 + \u03b82); yF = L1 sin\u03b81 + L2 sin(\u03b81 + \u03b82). By comparing Equation (4) with Equation (1), system (5) is obtained. From system (5), considering the addition trigonometric formulas, it is possible to obtain system (6). The first two equations of system (6) can be transformed, by mathematical developments based on a geometric approach, to obtain expressions of \u03b81 and \u03b82 made explicit as functions of xF and yF. Using these expressions, it is possible to obtain the motion laws of the joints for any trajectory given as a sequence of points P(xF, yF). Three types of preliminary experimental tests were conducted. Some tests were carried out giving a step input to the control system and recording the robot behaviour as values of angular position vs. time. The target positions were chosen to obtain only the movement of one joint at a time.
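The closed-form inverse kinematics used for trajectory tracking can be sketched as follows. This is a minimal planar two-link solution, assuming the elbow-up convention with \u03b82 >= 0 and the link lengths from the specifications; it is a reconstruction, not the authors' exact formulas.

```python
import math

# Link lengths in mm, from the design specifications
L1, L2 = 435.0, 385.0

def inverse_kinematics(x, y):
    """Joint angles (deg) reaching end-effector position (x, y) in the base frame."""
    c2 = (x * x + y * y - L1 * L1 - L2 * L2) / (2.0 * L1 * L2)
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("point outside the reachable workspace")
    t2 = math.acos(c2)  # elbow angle; one of the two mirror solutions
    t1 = math.atan2(y, x) - math.atan2(L2 * math.sin(t2), L1 + L2 * math.cos(t2))
    return math.degrees(t1), math.degrees(t2)

def forward_kinematics(t1_deg, t2_deg):
    """Direct kinematics, used here only to verify the inverse solution."""
    t1, t12 = math.radians(t1_deg), math.radians(t1_deg + t2_deg)
    return (L1 * math.cos(t1) + L2 * math.cos(t12),
            L1 * math.sin(t1) + L2 * math.sin(t12))

# Round-trip check on an arbitrary reachable point
t1, t2 = inverse_kinematics(500.0, 300.0)
x, y = forward_kinematics(t1, t2)
print(round(x, 3), round(y, 3))
```

Sampling a commanded trajectory (for example the 300 mm circle used in the tests) as a sequence of points P(xF, yF) and passing each point through such a function gives the joint-space motion laws mentioned in the text.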
Some tests on the position accuracy of the elbow and of the shoulder were conducted for different angular positions of the joints. Then some target trajectories in the joint space were selected for the movement of one joint at a time. Finally, some target trajectories in the working volume were selected for the movement of both joints at the same time. The capability of the robot to follow a desired trajectory was also preliminarily tested. Three trajectories were tested: a linear horizontal trajectory, a linear vertical trajectory, and a circular trajectory with a 300 mm diameter. Some test results are shown in . As for the trajectory tests in the joint domain, with movements of one joint at a time, it can be seen that the absolute error is confined within the \u00b13 degrees range for both joints. As for the trajectory tests involving both joints at the same time, it can be seen that the absolute error is within the \u00b130 mm range. The 3-degree error on the shoulder joint results in an error of about 30 mm at the robot end. Although this error may be considered excessive for industrial applications, it is not excessive for a rehabilitation application. In fact, if we consider the value of 30 mm compared to the vertical dimension of the working volume, equal to 1600 mm, this corresponds to a percentage error of 1.9%. From the point of view of the actuators, it should be noted that an error of 3 degrees, with a diameter of the transmission pulleys equal to 63.66 mm, corresponds to an error on the length of the pneumatic muscle equal to \u00b11.5 mm, which is a good value for a pneumatic actuator. On the other hand, observing the real trajectories in the working volume, we can see how these are certainly within the precision that a healthcare professional can guarantee by imposing movements on the user with his own limbs. Furthermore, the system is actually characterized by compliance, as expected.
The system can easily support the user\u2019s limb and impose the reference trajectory. The imposed movement is smooth thanks to the softness of the machine. Regarding the other performances, a comparison with existing machines on the basis of a common denominator would be desirable, but, as mentioned in the introduction, there are no standards. Therefore, even strictly within the field of upper-limb rehabilitation machines, the types are many. In fact, there are more than 120 upper-limb rehabilitation devices, of which only 19 treat shoulder and elbow rehabilitation. For these reasons, the comparison was made considering a hypothetical machine with the same kinematic architecture, the same dimensions and kinematic domain and the same construction solutions adopted for the current project, but with electric motors. The hypothesis is based on the use of brushless electric motors with inexpensive worm-and-helical-wheel gearboxes. In addition, 0.9 Nm motors are considered, which require gearboxes with a transmission ratio of 10 for the elbow joint and 35 for the shoulder joint, respectively. These reducers are non-backdrivable and, for this reason, an intervention is necessary to introduce compliance. This can be obtained by applying a suitable flexible mechanical device between the gearbox and the driven joint. By the results shown, it is possible to state that the feasibility of the project proposed here is demonstrated, with the proposed solution outperforming a conventional one. Future developments will concern the implementation of other types of functions with an active patient, possibly through the use of other control techniques, such as Generalized Predictive Control, which is particularly suitable for non-linear systems. Furthermore, a database-backed system will be implemented for monitoring the patient\u2019s evolution according to rehabilitation programs. 
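As a quick sanity check on the hypothesized electric alternative, the ideal torque at each joint is the motor torque multiplied by the gear ratio (gearbox efficiency and losses neglected; the input numbers are only those quoted above):

```python
motor_torque = 0.9                    # Nm, brushless motor considered in the text
elbow_torque = motor_torque * 10      # elbow gearbox ratio 10  -> 9.0 Nm ideal
shoulder_torque = motor_torque * 35   # shoulder gearbox ratio 35 -> 31.5 Nm ideal
```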
Clinical trials for the complete validation of the project will follow."} +{"text": "To evaluate the effect of adding Di-tan decoction (DTD) and/or electroacupuncture (EA) to standard swallowing rehabilitation training (SRT) on improving post-stroke dysphagia (PSD). In total, 80 PSD patients were enrolled and randomly assigned to the DTD, EA, DTD\u2009+\u2009EA or control group at a 1\u2009:\u20091\u2009:\u20091\u2009:\u20091 ratio. All patients received basic treatment and standard SRT. The DTD group received DTD orally, the EA group received EA, the DTD\u2009+\u2009EA group received both DTD and EA simultaneously, and the control group received only basic treatment and standard SRT. The interventions lasted for 4 weeks. The outcome measurements included the Standardized Swallowing Assessment (SSA) and Swallowing-Quality of Life (SWAL-QOL), performed and scored from baseline to 2, 4, and 6 weeks after intervention, and the Videofluoroscopic Dysphagia Scale (VDS), scored at baseline and 4 weeks after intervention. Scores were compared over time by repeated-measures analysis of variance (ANOVA) among all groups. Interactions between interventions were explored using factorial design analysis. (1) The effective rates (ERs) for PSD treatment were higher in the DTD, EA and DTD\u2009+\u2009EA groups than in the control group (all P\u2009<\u20090.05). The ER was higher in the DTD\u2009+\u2009EA group than in the DTD or EA group (both P\u2009<\u20090.05). (2) There were significant group effects, time effects and interactions for the SSA and SWAL-QOL scores. All groups showed decreasing trends in SSA scores and increasing trends in SWAL-QOL scores over time from baseline to 6 weeks after intervention. (3) Factorial design analysis for \u0394VDS showed that there was a significant main effect for DTD intervention and for EA intervention. However, there was no significant interaction effect between DTD and EA. Multiple comparisons showed that the DTD, EA and DTD\u2009+\u2009EA groups all had higher \u0394VDS values than the control group (P\u2009<\u20090.05). The DTD\u2009+\u2009EA group had a higher \u0394VDS than the DTD or EA group (both P\u2009<\u20090.05). (4) Most adverse reactions were mild and transient. Adding DTD or EA to SRT can better improve PSD than applying SRT alone. Adding DTD and EA simultaneously can accelerate and amplify the recovery of swallowing function versus DTD or EA alone, and both are effective and safe treatments, alone or jointly, for PSD and are a powerful supplement to routine treatments. 
Dysphagia symptoms are present in approximately 37%\u201378% of patients after acute stroke. At present, there are quite a few rehabilitation treatments applied for PSD, such as swallowing rehabilitation training (SRT), compensation therapy, physiotherapy, and alternative therapy. Traditional Chinese medicine (TCM), including Chinese medicinal decoctions and acupuncture therapy, has been widely used in the management of many diseases in East Asia and has been reported in many studies. However, there remains a lack of sufficient evidence to recommend the routine use of EA for dysphagia after stroke. Thus, we designed a randomized controlled trial to determine the individual effects of DTD and EA on the improvement of swallowing function in patients with ischaemic stroke and the interactive effect on PSD. We hypothesized that adding either DTD or EA to standard SRT could better improve PSD and that the intervention effect of DTD and EA could be synergistic. The study was a randomized, single-blind clinical trial conducted in the Department of Rehabilitation, Zhejiang Chinese Medical University Affiliated Wenzhou Hospital of Traditional Chinese Medicine. A total of 196 patients who developed PSD and were admitted to the rehabilitation department in our hospital from June 2021 to June 2022 were enrolled. 
A total of 80 participants, including 60 males and 20 females, were enrolled according to the inclusion and exclusion criteria listed below, and all of them were of Han ethnicity. This trial was approved by the Ethics Committee of Zhejiang Chinese Medical University Affiliated Wenzhou Hospital of Traditional Chinese Medicine (No. WTCM-KT-2021059), and written informed consent was obtained from all participants in accordance with the Declaration of Helsinki. The study workflow is shown in the study flowchart. The inclusion criteria were as follows: (1) age between 50 and 85 years; (2) meeting the diagnostic criteria for ischaemic stroke, listed in the Diagnostic Criteria of Cerebrovascular Disease in China (version 2019). The exclusion criteria were as follows: (1) swallowing disorders caused by conditions other than stroke; (2) local throat lesions; (3) thyroid disease, local skin infection or ulcer; (4) severe heart, lung, liver, or kidney disease, or unstable vital signs; (5) previous idiopathic epilepsy or epilepsy caused by other neurological disorders; or (6) metal implants in the body, such as pacemakers, cochlear implants and neck vascular stents. The patients in the control group received only basic treatment and SRT. According to the Guidelines of Diagnosis and Treatment of Acute Ischaemic Stroke in China (2018), the basic treatment was administered. The patients in the DTD group were treated with DTD on the basis of basic treatment and SRT, similar to those in the control group. 
The DTD was given orally by intermittent oro-oesophageal tube (IOET) feeding with the following prescription: Arisaema erubescens Schott. 15\u2009g, Acorus tatarinowii Schott. 15\u2009g, Poria cocos Wolf. 15\u2009g, Citrus aurantium L. 10\u2009g, Pinellia ternata Breit. 6\u2009g, Citrus reticulata Blanco. 6\u2009g, Panax ginseng C. A. Mey. 20\u2009g, Glycyrrhiza uralensis Fisch. 9\u2009g, and Bambusa tuldoides Munro. 12\u2009g. All Chinese medicines were prepared as a standard 200\u2009ml decoction, which was boiled by the staff of the hospital decocting room. The administration method was as follows: 1 dose a day, twice orally in the morning and evening, for 4 weeks. The patients in the EA group were treated with EA on the basis of basic treatment and SRT, similar to those in the control group. EA treatment was given as follows: the target acupoints were Lianquan (CV23), Fengchi (GB20), Yifeng (SJ17) and Fengfu (GV16). The EA operation method added modern electrotherapy to manual acupuncture as follows: Lianquan (CV23) was needled obliquely towards the root of the tongue at a depth of 0.5\u20131.0\u2009cun (1 cun\u2009=\u200933.3\u2009mm); Fengchi (GB20) was needled towards the nasal apex at a depth of 0.5\u20131.2\u2009cun; Yifeng (SJ17) was needled perpendicularly at a depth of 0.5\u20131\u2009cun; and Fengfu (GV16) was needled perpendicularly at a depth of 0.5\u20130.8\u2009cun. After the \u201cde qi\u201d sensation was felt by the acupuncturist, the needles in the acupoints mentioned above were connected to an acupoint stimulator using electrodes, with a continuous wave at a frequency of 2\u2009Hz for 30\u2009min, and the intensity of EA was set according to the maximum tolerated intensity of each subject (between 0.9\u2009mA and 3\u2009mA). The patients in the DTD\u2009+\u2009EA group received DTD and EA treatments simultaneously, as well as basic treatment and SRT. The operations of the interventions and the entire course of treatment were all similar to those in the other three groups. The characteristics of demographic, clinical and medical history variables of all subjects, including age, sex, stroke course, National Institute of Health Stroke Scale (NIHSS) score and comorbidities, were collected. The SSA consisted of three steps. The first was the clinical examination, including conscious level, control of the head and trunk, breathing, closure of the lip, soft palate movement, laryngeal function, pharyngeal reflex, and autonomic cough. Then, in Stage 1, 5\u2009mL of water was given to the patient three times. 
In Stage 2, if swallowing was normal in Stage 1, 60\u2009mL of water was given to the patient. SSA scores ranged from 17 to 46, with a higher score indicating a decreased swallowing ability. The SWAL-QOL questionnaire consisted of 44 items and was specifically designed to evaluate various aspects of the quality of life of patients, including those related to the physiological, psychological, emotional and social communication domains, with a higher score indicating a higher quality of life (QOL). VFSS has traditionally been regarded as the \u201cgold standard\u201d for assessing swallowing function. With the change in VDS (\u0394VDS) scores between baseline and 4 weeks as a reference, the effective rate (ER) for the treatment of PSD was calculated using the following formula: ER\u2009=\u2009(VDS scores at baseline\u2212VDS scores at 4 weeks)/VDS scores at baseline\u2009\u00d7\u2009100%. The efficacy was graded by ER quartile as follows: completely healed (>75%), markedly effective (50%\u201375%), effective (25%\u201350%), and ineffective (\u226425%). Adverse events or all unexpected responses were recorded by physicians. In this trial, the adverse events were represented as the cumulative number of specific events occurring during the implementation of the intervention. The incidence of adverse events was calculated as the cumulative number of adverse events/the total number of intervention events. In addition, all new aspiration pneumonia during the entire 6-week follow-up was recorded as a dysphagia-related complication. Sample size calculations were performed to determine the number of participants needed to detect effect sizes, with a type I error of 5% (\u03b1\u2009=\u20090.05) and 80% power (\u03b2\u2009=\u20090.2). An established sample size of 72 participants in this trial was enough to reveal significant differences between the arms. In total, a targeted sample size of 80 participants, which allowed for a 10% drop-out rate, was established. 
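The ER formula and the quartile grading above can be expressed directly in code. This is an illustrative sketch; how the trial handled scores falling exactly on a cut-off is not stated in the text, so the boundary handling below is an assumption.

```python
def effective_rate(vds_baseline, vds_4wk):
    """ER (%) = relative reduction in VDS score at 4 weeks, per the text's formula."""
    return (vds_baseline - vds_4wk) / vds_baseline * 100

def grade(er):
    """Efficacy grade by ER quartile (cut-off boundary handling assumed)."""
    if er > 75:
        return "completely healed"
    if er > 50:
        return "markedly effective"
    if er > 25:
        return "effective"
    return "ineffective"
```

For example, a patient whose VDS falls from 20 to 8 has an ER of 60% and is graded markedly effective.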
We estimated the needed sample size for all the outcome indicators mentioned above and found that the sample size required for the SSA indicator was the largest. Therefore, we selected the SSA scores as the main referent of the sample size calculation. The procedure was as follows: based on data from our previous trials, the respective mean and standard deviation (SD) of SSA scores were 25.8\u2009\u00b1\u20095.6 in the control group, 23.9\u2009\u00b1\u20094.8 in the DTD group, 21.5\u2009\u00b1\u20093.9 in the EA group and 19.5\u2009\u00b1\u20094.5 in the DTD\u2009+\u2009EA group at the end of the interventions. A sample-size calculation was conducted using the Power Analysis and Sample Size System version 15 (PASS 15), with a type I error of 5%. Efficacy indicators were analysed on the intention-to-treat (ITT) basis, while the safety-related indicators were analysed on the per-protocol (PP) basis. For continuous data, normally distributed variables are expressed as the means\u2009\u00b1\u2009SD, while skewed variables are expressed as the median (interquartile range). After normality of the data was checked using the Shapiro-Wilk (S-W) test and homogeneity of variance was checked using Levene's test, continuous data between groups were assessed using one-way analysis of variance (ANOVA). Continuous data consisting of repeated assessments were analysed by repeated-measures ANOVA, in which Mauchly's test of sphericity was performed first, and the Greenhouse-Geisser method was used for correction if the data did not meet the sphericity assumption. In addition, the potential interactions between interventions were explored using a factorial design. Categorical data were assessed using the chi-squared test or Fisher's exact test. All multiple comparisons were corrected with the Bonferroni method. The results were considered significant at P\u2009<\u20090.05. 
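The group means and SDs quoted above imply a large between-group effect size. The sketch below computes Cohen's f from those figures (equal group sizes assumed); it does not reproduce PASS 15's exact sample-size algorithm, only the effect size that feeds it.

```python
import math

means = [25.8, 23.9, 21.5, 19.5]   # control, DTD, EA, DTD+EA (from the text)
sds = [5.6, 4.8, 3.9, 4.5]

grand_mean = sum(means) / len(means)
sigma_between = math.sqrt(sum((m - grand_mean) ** 2 for m in means) / len(means))
sigma_within = math.sqrt(sum(s ** 2 for s in sds) / len(sds))  # pooled within-group SD
cohens_f = sigma_between / sigma_within                        # about 0.50, a large effect
```

An effect size of roughly 0.5 is consistent with a feasible four-arm trial of 72 participants at 80% power, as the text reports.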
The characteristics of the demographic, clinical and medical history variables of the study are summarized in the baseline table. In total, 76 subjects completed the study, and 4 subjects dropped out owing to family reasons or loss to follow-up before completing the study. The 4 who dropped out were included in the analysis using their baseline values. No significant differences were found in the median age, sex proportion, median course of stroke, comorbidity proportion, aCCI, NIHSS or the distribution of the lesion location among the four groups (all P\u2009>\u20090.05). With the change in VDS (\u0394VDS) scores between baseline and 4 weeks as a reference, the effective rate (ER) for the treatment of PSD in the four groups was calculated. The ER in the DTD\u2009+\u2009EA group was higher than that in the DTD or EA group (both P\u2009<\u20090.05). According to the efficacy graded by ER quartile, there was a higher proportion of markedly effective patients distributed in the DTD\u2009+\u2009EA and EA groups (both P\u2009<\u20090.05). 
According to the repeated-measures ANOVA for the SSA scores, Mauchly's W\u2009=\u20090.156, with P\u2009<\u20090.01, which did not meet the sphericity assumption; therefore, the Greenhouse-Geisser method was used for correction. The S-W test showed that the data in every group had a normal distribution, with all P\u2009>\u20090.05, and Levene's test showed homogeneity of variance with P\u2009>\u20090.05. There were significant time effects, group effects and interactions of time and group for the SSA scores. Further simple effect analysis showed that the simple effects of time were all significant in every group. According to multiple comparisons between different timepoints when fixing the group factor, all groups showed decreasing trends in SSA scores over time from baseline to 6 weeks after intervention. However, the simple effect of group was not significant at baseline but was significant at 2 weeks, 4 weeks and 6 weeks after intervention. Multiple comparisons between groups showed that the SSA scores of the DTD\u2009+\u2009EA group were significantly lower than those of the control group at 2 weeks after intervention (P\u2009<\u20090.05). At 4 weeks and 6 weeks after intervention, the SSA scores in the DTD, EA and DTD\u2009+\u2009EA groups were all lower than those in the control group. In addition, the DTD\u2009+\u2009EA group showed lower SSA scores than the DTD group from 2 weeks to 6 weeks and lower SSA scores than the EA group at 4 weeks and 6 weeks. There were no significant differences in the SSA scores between the DTD and EA groups at any timepoint. The results are shown in the corresponding figure. 
According to the repeated-measures ANOVA for the SWAL-QOL scores, Mauchly's W\u2009=\u20090.277, with P\u2009<\u20090.01, which did not meet the sphericity assumption; therefore, the Greenhouse-Geisser method was used for correction. The S-W test showed that the data in every group had a normal distribution, with all P\u2009>\u20090.05, and Levene's test showed homogeneity of variance with P\u2009>\u20090.05. There were significant time effects, group effects and interactions of time and group for the SWAL-QOL scores. Further simple effect analysis showed that the simple effects of time were all significant in every group. According to multiple comparisons between different timepoints when fixing the group factor, all groups showed increasing trends in SWAL-QOL scores over time from baseline to 6 weeks after intervention. However, the simple effect of group was not significant at baseline but was significant at 2 weeks, 4 weeks and 6 weeks after intervention. Multiple comparisons between groups showed that the SWAL-QOL scores of the DTD\u2009+\u2009EA group were significantly higher than those of the control group at 2 weeks, 4 weeks and 6 weeks after intervention. At 4 weeks after intervention, the SWAL-QOL scores in both the DTD and EA groups were significantly higher than those in the control group. In addition, the DTD\u2009+\u2009EA group showed higher SWAL-QOL scores than the DTD or EA group at 2 weeks and 4 weeks. There were no significant differences in the SWAL-QOL scores between the DTD and EA groups at any timepoint. The results are shown in the corresponding figure. 
The decrease in VDS (\u0394VDS) was calculated as the difference in VDS scores between 4 weeks and baseline using the formula \u0394VDS\u2009=\u2009VDS scores at baseline\u2212VDS scores at 4 weeks. According to the 2\u2009\u00d7\u20092 factorial design, one intervention factor had two options: giving Di-Tan decoction or not. The other intervention factor also had two options: giving electroacupuncture or not. Thus, this study consisted of the following 4 groups: (1) only Di-Tan decoction applied (DTD group); (2) only electroacupuncture applied (EA group); (3) Di-Tan decoction and electroacupuncture applied simultaneously (DTD\u2009+\u2009EA group); and (4) neither Di-Tan decoction nor electroacupuncture applied (control group). According to the S-W test, \u0394VDS in each group followed a normal distribution, and Levene's test showed homogeneity of variance with P\u2009>\u20090.05. A main effect of DTD intervention was observed, and a main effect of EA intervention was observed. However, there was no significant interaction effect of DTD and EA. Further multiple comparisons between groups showed that the DTD group, EA group and DTD\u2009+\u2009EA group all had significantly higher \u0394VDS values than the control group (P\u2009<\u20090.05). The DTD\u2009+\u2009EA group had a significantly higher \u0394VDS value than either the DTD group or the EA group (both P\u2009<\u20090.05), while there was no significant difference in \u0394VDS scores between the DTD and EA groups (P\u2009>\u20090.05). The results are shown in the corresponding figure. 
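For a balanced 2 × 2 factorial like the one above, the main effects and the interaction can be read off the four cell means. A minimal sketch, with illustrative cell means rather than the trial's data:

```python
def factorial_effects(m00, m10, m01, m11):
    """Effects from cell means of a balanced 2x2 factorial.

    m00: neither intervention (control)   m10: DTD only
    m01: EA only                          m11: DTD + EA
    """
    dtd_main = (m10 + m11) / 2 - (m00 + m01) / 2
    ea_main = (m01 + m11) / 2 - (m00 + m10) / 2
    # Zero interaction means the joint effect is the sum of the two main effects.
    interaction = (m11 - m01) - (m10 - m00)
    return dtd_main, ea_main, interaction
```

With hypothetical \u0394VDS cell means of 2, 4, 4 and 6, both main effects equal 2 and the interaction is 0, the additive pattern the trial reports: two significant main effects and no interaction.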
The data of adverse events and complications were analysed on the per-protocol (PP) basis to avoid overly conservative results. The adverse events in the DTD group included nausea or abdominal distension due to taking the decoction too quickly (3 times). The adverse events in the EA group included pain near the acupuncture site (4 times), mild bleeding at the acupuncture site (2 times), haematoma at acupoints after acupuncture (2 times) and transient discomfort related to tension (2 times). The adverse events in the DTD\u2009+\u2009EA group included pain at the acupuncture site (2 times), mild bleeding at the acupuncture site (2 times), haematoma at acupoints after acupuncture (2 times), transient discomfort related to tension (2 times), and nausea or abdominal distension due to taking the decoction too quickly (3 times). The incidence of adverse events based on the total number of intervention events in both the DTD\u2009+\u2009EA and EA groups was higher than that in the control group. In addition, the new aspiration pneumonia cases in the four groups during the entire 6-week follow-up were recorded. The incidence of new aspiration pneumonia based on the number of patients in both the DTD\u2009+\u2009EA and DTD groups was lower than that in the control group (both P\u2009<\u20090.05). The results are shown in the corresponding table. Acupuncture has a certain effect on PSD. On the other hand, Chinese medicinal decoctions have another effect on the symptoms of neurological disorders, such as tongue stiffness, sudden falling and loss of consciousness, convulsions, hot temper, aphasia, insomnia, dryness of the pharynx, and cough with sputum, which are similar to poststroke symptoms. 
According to TCM theory and clinical practice in China, we designed a study to compare the effectiveness of adding Di-Tan decoction and/or electroacupuncture to SRT for PSD in patients with a stroke course between 2 weeks and 6 months. It is worth mentioning that we set the lower limit of days from onset at 14 days for the sake of safety and to eliminate the influence of spontaneous recovery, owing to a previous study suggesting that 63.6% of PSD patients would recover spontaneously within 2 weeks. First, based on balanced and comparable baseline data, we verified that there were more obvious improvements in the effective rate (ER) in the DTD and EA groups than in the control group, which indicates the positive intervention effectiveness of DTD and EA separately for swallowing function. Several meta-analyses have summarized the RCTs on the effectiveness of EA or acupuncture for PSD, and their conclusions were quite consistent with our study [22, 24]. Additionally, we used the SSA to dynamically assess the change in swallowing function over time. The SSA is a valuable screening tool that has demonstrated excellent sensitivity and good specificity for quickly detecting dysphagia and the risk of aspiration. The SSA scores decreased significantly over time in all groups (P\u2009<\u20090.05), which indicated that the patients' swallowing function gradually recovered with the implementation of the four intervention schemes. Although the stroke course of the participants varied from 2 weeks to 60 days in this trial, positive rehabilitation efficacy still existed. 
At present, few studies have compared the change in swallowing function in patients with PSD over time under various intervention methods.During the whole 6-week follow-up, all four groups showed decreasing trends in SSA scores and increasing trends in SWAL-QOL scores compared with their baseline values (P\u2009<\u20090.05), which indicated that adding either DTD or EA intervention to SRT improved the swallowing function of patients with PSD more than the SRT intervention alone. Compared with that in the DTD or EA group, the improvement in SSA scores in the DTD\u2009+\u2009EA group over the control group emerged earlier, occurring at 2 weeks after intervention and lasting for the whole 6-week follow-up . The SSA scores in the DTD\u2009+\u2009EA group were lower than those in both the DTD and EA groups at 4 weeks and 6 weeks after intervention , which indicated that adding DTD and EA simultaneously to SRT could accelerate and amplify the recovery of swallowing function. For SWAL-QOL scores, conclusions from multiple comparisons among groups were similar to those for SSA scores, which indicated that there were corresponding improvements in swallowing-related QOL with the progression of swallowing function. In addition, we found that regardless of SSA or SWAL-QOL, there were no differences between the DTD and EA groups at any timepoint , which may hint that the two interventions were almost equal in their effect on PSD. However, at the 6-week timepoint, the differences in the SWAL-QOL among the DTD, EA and control groups were not significant . Such an imbalance between the SSA scores and SWAL-QOL scores at the later stage of the intervention might be due to the distinct emphasis of the two-score assessment systems. 
As a subjective indicator, the SWAL-QOL scale was specifically designed to evaluate the quality of life of patients from various aspects of physiological, psychological, emotional and social communication. We further explored the data by fixing the factor of time, and multiple comparisons among groups showed that the SSA scores in either the DTD or EA group were significantly lower than those in the control group at 4 weeks and 6 weeks after intervention. The VDS was regarded as a relatively objective indicator that can reflect the improvement in swallowing function. As mentioned previously, there was a more powerful effect for the combination of DTD with EA than for applying either alone. It was necessary to know whether a statistical interaction was responsible for the advantage of DTD\u2009+\u2009EA. The 2\u2009\u00d7\u20092 factorial design allows researchers to examine the main effects of two interventions simultaneously and explore possible interaction effects. There was a significant main effect of DTD and of EA (P\u2009<\u20090.001) but no significant interaction effect between the two (P\u2009=\u20090.717). Then, multiple comparisons showed that both DTD and EA had a greater effect on the improvement of VDS compared with the control group, and the joint application of the two interventions achieved more improvement in VDS than any single intervention. As there was no interaction according to the data, we had a certain reason to believe that the obvious advantage of the joint interventions came from the simple sum effect of the two intervention measures. A similar phenomenon was observed in the SSA and SWAL-QOL scores, as discussed above. TCM theory has attributed PSD to the category of \u201cthroat bi\u201d, which is related mainly to meridian obstruction or phlegm dampness, qi deficiency and blood stasis [39]. The major effector sites of acupuncture for PSD treatment were the meridians. 
As another aspect of the aetiology and pathogenesis of PSD, phlegm dampness, qi deficiency and blood stasis are considered important targets for DTD therapy. In addition, DTD contains Arisaema erubescens Schott., Acorus tatarinowii Schott., Poria cocos Wolf., Citrus aurantium L., Pinellia ternata Breit., Citrus reticulata Blanco., Panax ginseng C. A. Mey., Glycyrrhiza uralensis Fisch., and Bambusa tuldoides Munro. These herbal extracts have been demonstrated to have multiple effects, including replenishing Qi and tonifying the spleen in addition to cleansing the phlegm. The ingredients of DTD were reported to have anti-inflammatory and antioxidant activities. Taken together, DTD and EA might have a certain biological synergism in their mechanisms of action based on TCM theory. As seen in our data, DTD alleviating phlegm or saliva could lead to reduced aspiration and leakage. Meanwhile, the meridians are dredged through acupuncture, which promotes qi and blood by relieving blood stasis. Then, DTD replenishes Qi and tonifies the spleen to increase Qi. Consequently, this helps Qi and blood to circulate, which strengthens the resolution of phlegm. In turn, resolving phlegm leads to more unobstructed meridians. All these effects derived from the joint interventions mutually reinforce, interact with and promote each other, which might present as an improvement in swallowing function scores, a manifestation of biological synergism more powerful than the simple sum of the single interventions. This is a preliminary clue for exploring the complex biological synergism between acupuncture and decoction, although it was not consistent with the lack of statistical interaction found in our VDS data. There were several reasons for this contradiction. 
First, there were other interferences from other unknown elements that may neutralize the current effect. Second, insufficient statistical efficiency due to a small sample may lead to a false negative result in the statistical interaction. In addition, the VDS indicators were not comprehensive enough for evaluating swallow function to meet the requirements of biological complexity. Further enlarging the sample size, introducing an advanced model and searching more qualified indicators for PSD might be helpful for better understanding the relationship.P\u2009<\u20090.05).Our study also demonstrated sufficient safety for intervention of DTD and EA while ensuring their effectiveness. According to the records, adverse events were all transient and slight. Recently, a meta-analysis showed that DTD administration displayed nonspecific adverse effects, such as drowsiness, sweating, weight gain, constipation, loss of appetite, and dry mouth . In our There are several limitations to the study. First, limited by the requirements of ethics and the specificity in operation, our study was designed as single blind and lacked a sham acupuncture and sham decoction control group, which may lead to an adding bias when analysing. Second, the small sample size of the study impeded stratified analysis of patients according to their different clinical characteristics, which affected the further exploration of available data. In addition, the lack of multicentre involvement and short-term follow-up weakened the persuasiveness of the conclusions.Several conclusions can be drawn from our study. First, compared with SRT intervention alone, either adding DTD or adding EA intervention to SRT improved the swallowing function of patients with PSD, which presented an equivalent effectiveness. Second, adding DTD and EA simultaneously to SRT could accelerate and amplify the recovery of swallowing function and correspondingly improve swallowing-related quality of life compared with DTD or EA alone. 
Nonetheless, there was no statistical interaction effect found on the improvement in PSD according to the data acquired, although the intervention effect of DTD and EA might be synergistic. In summary, as two important methods of complementary and alternative medicine, both DTD and EA are effective and safe treatment strategies for PSD, which may be applied alone or jointly as a powerful supplement to routine treatments. A well-designed controlled trial with a larger sample and longer follow-up is needed to draw more convincing conclusions."} +{"text": "Numerous studies have shown that microglia are capable of producing a wide range of chemokines to promote inflammatory processes within the central nervous system (CNS). These cells share many phenotypical and functional characteristics with macrophages, suggesting that microglia participate in innate immune responses in the brain. Neuroinflammation induces neurometabolic alterations and increases in energy consumption. Microglia may constitute an important therapeutic target in neuroinflammation. Recent research has attempted to clarify the role of Ghre signaling in microglia in the regulation of energy balance, obesity, neuroinflammation and the occurrence of neurodegenerative diseases. These studies strongly suggest that Ghre modulates microglia activity and thus affects the pathophysiology of neurodegenerative diseases. This review aims to summarize what is known from the current literature on the way in which Ghre modulates microglial activity during neuroinflammation and its impact on neurometabolic alterations in neurodegenerative diseases. Understanding the role of Ghre in the regulation of microglial activation/inhibition could provide promising strategies for downregulating neuroinflammation and consequently for diminishing negative neurological outcomes. Over the last few years, our knowledge of Ghrelin (Ghre) has increased significantly. 
In fact, the peptide Ghre is involved in several cellular activities affecting the gastrointestinal and immune systems. This orexigenic hormone not only regulates food intake and energy content but also modulates plasticity and cognition in the central nervous system (CNS). Ghre signaling deregulation is involved in the pathophysiology of obesity and may provide a link between metabolic syndromes and cognitive impairment. During inflammation, proinflammatory cytokines and immune-derived cells stimulate hormone release and metabolism. Several reports show that Ghre and its receptor play a regulatory role during inflammation. Both Ghre and microglia are involved in the pathophysiology of neurodegenerative diseases characterized by neuronal damage, such as Alzheimer\u2019s disease (AD) and Parkinson\u2019s disease (PD). The focus of this review is on the mechanisms by which Ghre modulates microglia activity during obesity-induced neuroinflammation, emphasizing the effects of Ghre in driving these cells towards an anti-inflammatory phenotype, and then on how these mechanisms impact neural plasticity and cognition. Understanding this peptide\u2019s functions will allow for the development and implementation of new therapeutic and neurological diagnostic strategies. Ghre is a small peptide of 28 amino acids, which is involved in several physiological functions. Acyl-Ghre exerts these functions acting through its related G-protein-coupled receptor (GPCR), known as the growth hormone secretagogue receptor (GHSR). GHSR exists in two isoforms: GHSR-1A and its truncated and nonfunctional splicing variant GHSR-1B. Only GHSR-1A is functionally active. The observation that metaflammation can arise from \u201cwrong food habits\u201d confirms the existence of the gut\u2013brain axis. Metaflammation is produced by the dysfunction of immune metabolism.
The nutrition burden triggers signaling pathways and cascades without severe immune response symptoms, but is comparable to a chronic immune response sustained over a long time. An unhealthy diet can generate obesity and altered metabolism, which induces a chronic proinflammatory metabolic phenotype (metaflammation) and associated brain damage. Ghre, being involved in the processes of immune metabolism and inflammaging, is also implicated in eating disorders and obesity. In particular, obesity is a condition that results from a chronic disruption of energy balance related to the accumulation of body fat. It is a very widespread health problem with a multifactorial etiology that includes genetic, metabolic and lifestyle factors. Therefore, unsaturated or saturated plasma lipids trigger microglia, resulting in positive or negative Ghre activation, respectively. Nevertheless, it is still unclear whether overnutrition during maternal programming initiates hypothalamic Ghre signaling in offspring, hence stimulating food intake in adulthood. It is well known that, in the CNS, microglia play an important role in immune surveillance. Microglia have oval-shaped nuclei and slender, elongated processes that help them to move through chemotaxis. Microglia have phagocytic activity, promoting the release of proinflammatory cytokines and acting to remove damaged neurons. Similar to macrophages, microglia can polarize and can be activated in different ways, showing two different phenotypes: cytotoxic M1, proinflammatory, stimulated by the phenomena of neuroinflammation; and cytoprotective M2. After the stimulation of the toll-like receptors (TLR), M1 triggers the immune response and releases proinflammatory cytokines such as TNF-\u03b1, IFN-\u03b3, IL-1, IL-6 and IL-12.
These cytokines increase oxidative stress through ROS production and the upregulation of iNOS and nitrogen free radicals, triggering apoptotic mechanisms. M2 promotes the release of IL-4, IL-10 and growth factors to recover injured tissue and support regeneration. M2 has neuroprotective functions such as the inhibition of inflammation and the restoration of homeostasis. Ghre has not been detected in microglia. Nevertheless, Ghre, as an anti-inflammatory hormone, inhibits microglia activation and reduces the percentage of M1 microglia. It has been reported that microglia ablation is strongly related to an increase in Ghre levels consistent with a negative energy balance. Much evidence suggests that nutritional variations and obesity can influence cognitive impairment and the development of neurodegenerative diseases. Depending on the type of neurons and the brain area affected, neurodegenerative diseases can have different courses, leading to the alteration of neuronal structure and function and causing neuron death. These debilitating disorders involve several triggering factors and, in some cases, the mutation of a specific gene that causes the mutated protein expression to modify neuronal function, promoting degeneration and neuronal death. All these progressive losses occur in pathologies such as AD or PD. Both diseases are identified as proteinopathies; in fact, the presence of misfolded protein is one of their common characteristics. The peculiarity of these altered proteins resides in their ability to build aggregates accumulating within and between the neurons, forming amyloid plaques and Lewy bodies. The role of inflammation in neurodegenerative diseases has been highlighted in various experimental and clinical contexts.
Caloric restriction is strictly related to the reduction of inflammation because it decreases proinflammatory cytokine biosynthesis. Microglia are important in neurogenesis and synaptic modification and in the reorganization of synaptic networks. AD is the most common form of dementia. The pathological features of AD consist of amyloid-\u03b2 (A\u03b2) deposits, neurofibrillary tangles and neuronal injury. Ghre might improve cognition in AD via a CNS mechanism involving insulin signaling. An investigation using a murine model fed a high-glycemic-index diet and treated with a Ghre agonist revealed a persistent challenge for glucose homeostasis in AD. The Ghre agonist impairs glucose tolerance immediately after administration but not in the long term. Immunoassay analysis showed a beneficial impact of long-term treatment on insulin signaling pathways in hippocampal tissue. Moreover, the Ghre agonist improved spatial learning in mice, raised their activity levels and reduced their body weight and fat mass. These findings indicate the importance of Ghre agonists for cognitive effects in AD, particularly affecting hippocampal brain areas and functions. Ghre decreases peripheral glucose uptake in periods of fasting, whereas it improves or leaves unaltered uptake under CNS energy deficiency conditions. On the other hand, in vivo neurodegeneration studies have shown how Ghre administration prevents the death of neuronal cells induced by kainic acid (KA), promoting astroglia and microglia inactivation by regulating the expression of COX-2, TNF-\u03b1 and IL-1\u03b2 in the hippocampal area. In particular, Ghre acts by blocking KA-induced MMP-3 expression in hippocampal neurons. Taken together, all these data support the interplay between Ghre and microglia activation, also through the inhibition of MMP-3 expression. In AD, the main sources of the cytokines that contribute to neuroinflammation development are microglia and astrocytes.
Once microglia become activated in response to neurodegenerative events, they are able to synthesize proteolytic enzymes such as cathepsin B, causing extracellular matrix damage and further neuronal dysfunction. PD is the second most common neurological disease, characterized by the progressive degeneration of the nigrostriatal system producing a dopamine deficit. It is known that GHSR-1A and the DA receptor 2 (D2) interact. GHSR-1A dimerizes with several G-protein-coupled receptors, such as GHSR-1B, the melanocortin 3 receptor (MC3), D1, D2 and the serotonin 2C receptor (5-HT2C). Ghre and GHSR-1A binding triggers heterodimer formation with D2, causing DA release and microglial activation. Moreover, Ghre is known to act on peripheral macrophages, inhibiting the LPS-induced release of proinflammatory cytokines, while LPS leads to DA neuron death through microglia induction. In this review, we looked at the correlation between Ghre and microglia. Much evidence supports that Ghre behaves as a metabolic hormone, being involved in feeding behavior and in the regulation of energy homeostasis. In addition, contributing to the preservation and compliance of neuronal activity and connectivity, Ghre exerts a protective role in the CNS. It shares similar properties to those of neuroactive peptides and internal messengers in the brain. Therefore, inducing Ghre signaling in the CNS improves neuroplasticity, neuroprotection and cognitive functions, promoting endogenous repair mechanisms in the brain, thereby reducing the possibility of neurodegenerative diseases. A growing number of investigations suggest that counteracting Ghre signaling recovers glucose control. In 2005, Dixit and Taub questioned whether Ghre is a hormone or a cytokine."} +{"text": "Since early studies, the history of prokaryote taxonomy has dealt with many changes driven by the development of new and more robust technologies.
As a result, the number of new taxa descriptions is increasing exponentially, while an increasing number of others have been subject to reclassification, demanding more effort from taxonomists to maintain an organized hierarchical system. However, expectations are that the taxonomy of prokaryotes will acquire a more stable status in the genomic era. Other analyses may continue to be necessary to determine microbial features, but the use of genomic data might be sufficient to provide reliable taxa delineation, helping taxonomy to reach the goal of correct classification and identification. Here we describe the evolution of prokaryotes' taxonomy until the genomic era, emphasizing bacteria and taking as an example the history of rhizobia taxonomy. This example was chosen because of the importance of the symbiotic nitrogen fixation of legumes with rhizobia to the nitrogen input of both natural ecosystems and agricultural crops. This case study reports the technological advances and the methodologies used to classify and identify bacterial species and indicates the actual rules required for an accurate description of new taxa. The taxonomy terminology has been broadly discussed. Some researchers treat taxonomy as synonymous with systematics, while others define taxonomy as the classification of organisms and as part of systematics, which would have a broader scope, including studies with evolutionary and phylogenetic components. Prokaryotes include living organisms belonging to two domains, Archaea and Bacteria, known as archaebacteria (or archaea) and eubacteria, respectively. Those microorganisms do not have a distinct nucleus or other organelles due to the lack of internal membranes, the main characteristics distinguishing them from the eukaryotes. Prokaryotic taxonomy is traditionally split into three correlated areas: classification, nomenclature, and identification. The basic unit of taxonomy is the species.
Bergey's Manual of Systematic Bacteriology defines a bacterial species as a group of strains with certain distinctive features that generally resemble each other in the essential features of their organization. Besides arranging the organisms, taxonomic tools are used to study microbial diversity and establish phylogenetic relationships. Biodiversity represents the basis of the stability of ecosystems, providing environmental resilience. Although many techniques, rules, and concepts apply to prokaryotes in general, in this review we describe how bacterial taxonomy evolved until the genomic era and the important tools developed to assess bacterial diversity and guide proper classification. We also present a case study with rhizobia to clarify how the evolution of taxonomic science impacted this group of bacteria, probably amongst the most important for ecosystems and agricultural sustainability, and improved our knowledge about them. The species concept is considered a universal theory delimiting the category \u201cspecies\u201d for all living organisms. Concerning the prokaryotes, several incongruences are discussed, since they do not fit into the most common eukaryotic species perceptions, such as the morphological, biological, or evolutionary concepts. For the description of novel bacterial taxa, taxonomists follow guidelines from the International Committee on Systematics of Prokaryotes (ICSP), split into several subcommittees according to the knowledge areas. Regarding the rhizobial species, the Subcommittee on Taxonomy of Rhizobia and Agrobacteria is responsible for the guidelines, available at https://sites.google.com/view/taxonomyagrorhizo/home. Among the updates, the genomic profile comparison of the strains studied and of related type strains is required, and the genome sequence of the type strain representing a new species must be deposited in databases. This requirement will increase the number of genomes available for further studies.
For many years, the only guideline available for rhizobia taxa description was the one published by Graham and collaborators in 1991; more recent updates have since been issued. Another critical step in taxonomy concerns proper nomenclature, which the ICNP regulates. The scientific name of a novel bacterial taxon needs to be in Latin, refer to the history of the taxon, and be published in the \u201cApproved List\u201d of prokaryotes to become a valid name. In the International Journal of Systematic and Evolutionary Microbiology (IJSEM), the official journal of the ICSP, a clear statement of the name and its etymology, as well as the characterization data of the taxon and the type strain designation, must be provided. The List of Prokaryotic names with Standing in Nomenclature (LPSN) is an online tool constantly updated by the Leibniz Institute DSMZ\u2014German Collection of Microorganisms and Cell Cultures GmbH (https://lpsn.dsmz.de/). It includes a broad range of taxonomic information for each described taxon, such as etymology, nomenclatural and taxonomic status, type strain designations, and the link to the description publication, covering Candidatus taxa and the elevation of Candidatus names to valid names. A remarkable breakthrough in the attempts to determine relationships between distantly related organisms came around the 1970s, when the Taq polymerase enzyme was discovered and later used for DNA amplification through polymerase chain reaction (PCR) techniques. Phylogeny studies the evolutionary relationships among organisms, and the use of conserved molecular data became commonly accepted in taxonomy. After the ribosomal sequences, other conserved genes started to be used, as in the Multilocus Sequence Typing (MLST) schemes (https://pubmlst.org/) of several pathogenic bacteria. Even though MLST is not commonly used to infer phylogeny in epidemiology studies, it was applied for this purpose in taxonomic studies, contributing to the development of the Multilocus Sequence Analysis (MLSA).
The MLSA assesses the evolutionary information of concatenated housekeeping genes to build phylogenetic trees with more robust data than analyses based on single sequences. Today, advanced sequencing technologies allow taxonomists to use genomic data in silico to compare microorganisms, helping to allocate them in their respective taxa or to describe new taxa to accommodate a new group. With sequenced genomes, taxonomists can calculate the overall genome-related indices (OGRIs) and estimate the relatedness among microorganisms; however, suggested threshold values must also be considered. The OGRIs came to replace the DDH due to their low cost, reproducibility, and the quality of the genomic information. Furthermore, the genome sequences can be deposited in databases so that other scientists can use the data without cultivating the respective bacteria. In conclusion, until the genomic era, polyphasic taxonomy was used to identify, classify, and name prokaryotes according to phenotypic, genotypic, and phylogenetic characteristics. It enabled considerable progress and stability in microbial taxonomy. However, with advancements in genome sequencing, there are today better tools to delineate species, study phylogeny, and ordinate microbial diversity. The history of microbial taxonomy incorporates the most advanced technologies while adhering to standards and rules, representing a scientific field where progress goes alongside conservatism. The evolutionary molecular markers are constitutive genes that reflect the phylogeny of the organisms because the base substitutions in DNA sequences (given by mutations and recombination) are proportional to the evolution that each species underwent from its ancestors, allowing estimates of the differentiation level of the species.
In the next step, it is recommended to choose the best substitution model for the multiple sequence alignment, which depends on the phylogenetic method used to understand the phylogeny of the group under study. Models of substitution are algorithms responsible for evaluating the frequency of each nucleotide and its frequency of substitution, differentiating between transitions and transversions. With this, the models can infer the evolutionary history for the alignment. In a phylogenetic tree, the extremities are represented by the investigated lineages. The horizontal lines are called branches, and the nodes that connect the branches represent the most recent common ancestry among the strains. It is also possible to calculate the NI, a mathematical parameter used to measure the percentage of similarity among the nucleotide sequences of an alignment, although it does not include evolutionary analyses. Specific software is available to calculate the NI, such as BioEdit. The 16S rRNA gene, a critical molecular marker in bacterial taxonomy, contains approximately 1,500 base pairs (bp) and takes part in the synthesis of proteins essential to the functioning of every prokaryote. It originated from a common ancestor shared by all prokaryotes, being homologous and remaining conserved throughout the evolutionary process, while retaining variable sites with evolutionary information. Some numerical values of 16S rRNA NI have been suggested to delimit species boundaries; for example, Stackebrandt and Goebel, in 1994, suggested the well-known 97% similarity threshold, below which strains are assigned to different species. The increase in the number of novel taxa described using 16S rRNA sequence data has revolutionized our knowledge of microbial taxonomy, especially at the species level.
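As a minimal sketch of how a nucleotide identity (NI) value can be computed from two pre-aligned sequences and compared against the species-level 16S rRNA boundary discussed above (the function and variable names are ours, not taken from BioEdit or any cited software):

```python
# Sketch: pairwise nucleotide identity (NI) between two pre-aligned sequences.
# Illustrative helper, not from BioEdit or any cited tool.

def nucleotide_identity(seq_a: str, seq_b: str) -> float:
    """Percent identity over aligned, equal-length sequences; gaps are skipped."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must come from the same alignment")
    # Keep only columns where neither sequence has a gap character.
    pairs = [(a, b) for a, b in zip(seq_a.upper(), seq_b.upper())
             if a != '-' and b != '-']
    matches = sum(1 for a, b in pairs if a == b)
    return 100.0 * matches / len(pairs)

# Commonly cited 16S rRNA species boundary (Stackebrandt and Goebel, 1994).
THRESHOLD_16S = 97.0

ni = nucleotide_identity("ACGT-TGCAACGT", "ACGTATGCTACGT")
# ni is about 91.7 here, below the 97% species boundary.
same_species_possible = ni >= THRESHOLD_16S
```

In practice NI is computed over full-length 16S rRNA alignments of roughly 1,500 bp, and values above the threshold only mean that further evidence (DDH or OGRIs) is needed before assigning two strains to the same species.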
However, many taxonomists analyzed the internal transcribed spacer (ITS) as an alternative molecular marker to increase the knowledge about the ribosomal region. A study of the Bradyrhizobium genus demonstrated that strains sharing 95.5% NI in the ITS sequences would correspond to 60% reassociation in the DDH, belonging to the same species. Even though 16S rRNA sequences have been broadly used as an effective tool for basic evolutionary analyses of cultivable and uncultivable bacteria, for closely related groups they are unable to determine the nearest neighbors, since different species can share identical or nearly identical 16S rRNA sequences. The MLSA allows the analysis of several genes together as a larger phylogenetic dataset. The main requirements of MLSA involve the selection of housekeeping genes that should be present in the genome of all organisms under study as a single copy and spread throughout the genome. They must also have a consistent size to allow phylogenetic reconstructions and sequencing. Consequently, different genera may vary in the set of genes used in the analysis. In rhizobia, commonly used genes include the ATP synthase \u03b2-subunit (atpD), chaperone protein (dnaK), glutamine synthase II (glnII), glutamate synthase (gltB), DNA gyrase \u03b2-subunit (gyrB), recombinase A (recA), RNA polymerase \u03b2-subunit (rpoB), and tryptophan synthase \u03b2-subunit (trpB). After the housekeeping gene selection, each set of single-gene sequences should be aligned and trimmed to keep the same region of comparison and the same size for the alignment. Subsequently, the phylogeny of each single gene is individually built and compared to the others and, if they are congruent, the alignments are concatenated to proceed with the MLSA. The concatenation process can be carried out manually using software or any text editor program.
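The alignment-and-concatenation step described above can be sketched as follows; the data layout and function name are illustrative assumptions, not a published MLSA pipeline:

```python
# Sketch: concatenating trimmed single-gene alignments into an MLSA supermatrix.
# Input: per-gene dict mapping strain -> aligned sequence (equal length per gene).
# Gene names would follow the rhizobial housekeeping set (atpD, recA, ...);
# the code itself is an illustrative assumption.

def concatenate_alignments(gene_alignments: dict) -> dict:
    """Join the aligned genes strain by strain, in a fixed gene order."""
    gene_order = sorted(gene_alignments)  # deterministic order, e.g. atpD before recA
    strains = set.intersection(*(set(a) for a in gene_alignments.values()))
    concatenated = {}
    for strain in sorted(strains):
        parts = []
        for gene in gene_order:
            aln = gene_alignments[gene]
            lengths = {len(s) for s in aln.values()}
            # Trimmed alignments must have one common length per gene.
            assert len(lengths) == 1, f"{gene}: sequences not aligned to equal length"
            parts.append(aln[strain])
        concatenated[strain] = "".join(parts)
    return concatenated

mlsa = concatenate_alignments({
    "atpD": {"strainA": "ACGT", "strainB": "ACGA"},
    "recA": {"strainA": "TTGC", "strainB": "TTGG"},
})
# mlsa["strainA"] == "ACGTTTGC"
```

Keeping the gene order fixed and restricting the matrix to strains present in every alignment mirrors the manual concatenation the text describes; the resulting supermatrix is then fed to a tree-building program.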
One of the most common software packages for the alignment of prokaryotic sequences is MEGA. In 2002, the ad hoc committee of the ICSP recommended the analysis of concatenated housekeeping genes as a promising method to replace the DDH association in bacterial taxonomy. As the DNA molecules represent the identity of the species, studying genomic profiles allows obtaining relevant information for taxonomic purposes. Prokaryotic genomes contain repetitive sequences distributed throughout the chromosome; however, the sites, length, and the number of times they are repeated are characteristic of each strain, representing a fingerprint of each genome. To evaluate those genomic profiles, taxonomists use DNA amplification by PCR with specific primers for those regions, or restriction enzymes to cut the chromosome at the restriction sites. As a result, both procedures yield a mixture of different DNA fragments that can be separated by electrophoresis, revealing the respective genetic profile. The DDH evaluates the extension and stability of DNA hybrid strands after the dissociation and consecutive reassociation of a two-genome mixture incubated under controlled conditions. The cutoff value of 70% reassociation has traditionally been used to delineate species. The first prokaryote genome sequenced was that of the bacterium Haemophilus influenzae in 1995, using the conventional Sanger sequencing technique. The statistical parameters used to report the quality of a genome assembly recommended for taxonomic purposes are (i) the genome size, defined as the summed length of all contigs sequenced; (ii) the N50, defined as the length of the shortest contig such that this contig plus all longer contigs, summed from the largest to the shortest, account for 50% or more of the genome size; and (iii) the depth of coverage of the sequencing, indicating how many sequencing reads were generated; this value is usually given in folds, and a minimum of 50X (50-fold) is recommended for the platforms cited before. Presently, using genome sequencing, taxonomists can apply other analyses to study the relatedness among the DNAs of bacteria. The threshold values suggested for these analyses relate to the 70% cutoff of the DDH technique. As mentioned above, these values are known as OGRIs and effectively calculate genome similarities in silico. With the availability of genome sequences, another parameter that can be calculated in silico is the DNA guanine and cytosine (GC) content, which is also used as a genotypic marker in taxonomy. The GC content can be obtained with genome analysis software such as QUAST. In contrast to the taxonomy of eukaryotes, where phenotypic characteristics can be used to differentiate some organisms, these traits are questionable in prokaryotes, since different bacterial species can present identical phenotypes. The classical phenotypic tests in microbiology include morphological, physiologic, and biochemical analyses. Morphological characteristics describe the cellular and colony features, such as the cell shape, endospore formation, presence of flagella, Gram stain, colony color, diameter, opacity, mucus production, and consistency.
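The assembly statistics above can be computed directly from a set of contigs; genome size, N50, and GC content are shown in this small sketch of ours (not taken from QUAST or any cited tool; depth of coverage needs the raw reads and is omitted):

```python
# Sketch: genome size, N50 and GC content from assembled contigs.
# Illustrative code, not drawn from QUAST or any published assembly tool.

def assembly_stats(contigs):
    sizes = sorted((len(c) for c in contigs), reverse=True)
    genome_size = sum(sizes)
    # N50: shortest contig at which the running sum of lengths
    # (largest first) reaches at least half of the genome size.
    running, n50 = 0, 0
    for length in sizes:
        running += length
        if running >= genome_size / 2:
            n50 = length
            break
    gc = sum(c.upper().count('G') + c.upper().count('C') for c in contigs)
    gc_percent = 100.0 * gc / genome_size
    return genome_size, n50, gc_percent

size, n50, gc = assembly_stats(["ATGCGC", "ATAT", "GC"])
# size == 12; the largest contig (6 bp) already covers half the
# assembly, so n50 == 6; GC = (4 + 0 + 2) / 12 = 50.0%
```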
On the other hand, the physiological and biochemical characteristics include data about the culture under different growth conditions, such as the range of temperature, pH (4\u201312), salinity tolerance (1\u201310%), O2 or CO2 requirements, tolerance to different antibiotics, enzymatic activity, and the metabolism of compounds as nutrient sources. Another common phenotypic test is the chemical characterization of cells, which evaluates extracellular elements, cell membrane composition, or cytoplasm compounds (polyamines). It is worth mentioning that there are many genes coding proteins without known function. Therefore, the phenotypic tests could help the search for proteins of biotechnological interest, improving the knowledge about the interactions of microorganisms with the environment. More than 2,000 years ago, ancient Chinese literature reported that crop rotation of legumes with cereals was traditionally used to enhance grain production. Improvement in soil fertility by cultivating legumes was thus already noticed at that time, although the mechanisms involved were not yet known. Only much later was a bacterium isolated from the nodules of Pisum sativum plants (pea) and reported by the Dutch microbiologist Martinus Willem Beijerinck to be responsible for the nitrogen fixation process. The isolated bacterium was first named Bacillus radicicola but was later reclassified as Rhizobium, comprising the Rhizobium leguminosarum species. In the early twentieth century, nodulation tests using a broad range of host plants and different bacteria were conducted, and the specificity between the host plants and the symbiotic bacteria was reported. Based on this, Baldwin and Fred proposed the species R. leguminosarum, R. phaseoli, R. trifolii, R. meliloti, R. lupini, and R. japonicum, in addition to other Rhizobium spp., each associated with a particular group of host plants. The taxonomy dropped the cross-nodulation concept after several studies reported both exceptions and strains sharing high similarity but belonging to different groups.
Additionally, the rhizobia classification needed to include more information to adjust to the general bacterial taxonomy. The next step of rhizobia taxonomy was based on numerical taxonomy, using computers to compare bacterial properties. Around the 1960s, many analyses were included in taxonomic studies involving phenotypic traits, growth conditions, nutrient resources, metabolic features, and resistance to antibiotics and other chemicals, among others. Also, the DNA molecule started to be investigated, and the base composition (GC mol%) was added to bacterial classification. Using these data, the genus Bradyrhizobium was described to allocate the slow-growing species B. japonicum and B. lupini. Six years later, the genus Azorhizobium was described for bacteria that nodulate Sesbania rostrata and fix nitrogen under free-living aerobic conditions. In the same year, the Sinorhizobium genus was proposed for the fast-growing soybean species Rhizobium fredii. Around the 1990s, many other analyses were included in taxonomic studies. The polyphasic taxonomy confirmed some of the taxa proposed with the numerical taxonomy but also pointed out that the numerical taxonomy lacked information about the evolutionary relationships among rhizobia. Consistent DNA studies allowed the taxonomists to assess the diversity and phylogenetic relationships among bacteria at a molecular level. The 16S rRNA sequences, in particular, supported the phylogenetic placement of rhizobia and agrobacteria within the \u03b1-Proteobacteria. In this period, taxonomists proposed the genus Mesorhizobium to allocate five Rhizobium species with an intermediate growth rate compared with the fast-growing Rhizobium and Sinorhizobium and the slow-growing Bradyrhizobium. Following, de Lajudie et al.
proposed the genus Allorhizobium for symbiotic bacteria associated with the aquatic legume Neptunia natans. The six rhizobia genera (Bradyrhizobium, Mesorhizobium, Azorhizobium, Rhizobium, Sinorhizobium, reclassified as Ensifer, and Allorhizobium) were allocated in four distinct families. The new century started a revolution in rhizobia taxonomy. A first milestone occurred in January of 2001, with the first report of a non-rhizobia nitrogen-fixing legume-symbiotic bacterium, isolated from the nodules of Crotalaria and classified as Methylobacterium, with the species M. nodulans. In the same year, Young et al. suggested the reclassification of Agrobacterium and Allorhizobium undicola in the genus Rhizobium. In 2002, a new species of the genus Devosia was reported to induce nitrogen-fixing root nodules in Neptunia natans. In 2005, new species of Phyllobacterium were described, such as P. trifolii, isolated from the nodules of Trifolium pratense. Later, two species of Ochrobactrum (a genus related to Brucella), O. lupini and O. cytisi, were described, isolated from nodules of Lupinus albus and Cytisus scoparius, respectively. In 2008, Lin and collaborators described, in the Shinella genus, the symbiotic species S. kummerowiae, isolated from root nodules of Kummerowia stipulacea. In 2009, the first rhizobial isolate (BA135) belonging to the species Aminobacter aminovorans was reported, isolated from nodules of Lotus tenuis. In the following years, the Microvirga genus allocated the symbiotic species M. lupini, isolated from Lupinus texensis, M. lotononidis and M. zambiensis, isolated from Listia angolensis, and M. vignae, isolated from Vigna unguiculata.
Besides all those changes in the \u03b1-Proteobacteria class, 2001 was also outstanding for the report of a nodulating \u03b2-Proteobacteria belonging to Burkholderia, a genus described by Yabuuchi and collaborators in 1992. It was the first time that nodulation (nod) genes and the nodulation capacity were reported in symbiotic bacteria not belonging to the \u03b1-Proteobacteria. After that, other nodulating \u03b2-Proteobacteria genera were described, including the Ralstonia genus, reclassified as Cupriavidus, and species of the Burkholderia genus, later reclassified as the new genus Paraburkholderia. Subsequent years brought the description of the genera Neorhizobium and Pararhizobium, and also the revision of the genera Agrobacterium and Allorhizobium. Further reclassifications gave rise to the genera Mycetohabitans and Trinickia, this last one containing the nodulating nitrogen-fixing species T. symbiotica. With the evolution of taxonomic analyses, we may conclude that many descriptions of nodulating bacteria isolated from nodules of different hosts and belonging to nonrhizobial genera have been published, and many taxonomic groups were reclassified. Most of those bacteria have a diverse set of nodulation (nod) and nitrogen fixation (nif) genes, some of which are related to genes of different members of classical rhizobial genera. All those findings show that the ability to establish symbiosis with legumes is more widespread in bacteria than anticipated before. Today, rhizobia are distributed in eight families, seven belonging to the \u03b1-Proteobacteria, including Brucellaceae, Hyphomicrobiaceae, and Xanthobacteraceae. The \u03b2-Proteobacteria family is Burkholderiaceae, with three genera. The list of genera and the number of species with valid names standing in nomenclature (without synonyms) according to the LPSN in October of 2021 is given in the accompanying table.
In 2020, a study performed phenotypic, genomic, and phylogenetic analyses of the genus Ensifer and suggested that the genus should be separated into two genera, one for the symbiotic clade and the other for the nonsymbiotic clade. More recently, it was argued that the separation of Ensifer and Sinorhizobium is no longer justified, and eight new combinations were suggested, but not all involving rhizobial strains. Valid rhizobial species names are distributed in the \u03b1- and \u03b2-Proteobacteria subclasses, and the number increases every year. However, less than half of these valid names from the 19 genera comprise strains already reported for their symbiotic properties, including nodulation and nitrogen fixation abilities. Furthermore, many species are reported as endophytes, or were isolated from environmental samples, or from nodules but unable to reestablish symbiotic associations. Therefore, the symbiotic capacity remains largely unknown for many species. In 2004, Benhizia and collaborators published a report of \u03b3-Proteobacteria species from legume nodules. In this study, 52 isolates belonging to the Pseudomonas, Escherichia, Leclercia, Pantoea, and Enterobacter genera were obtained from three Hedysarum species, and rhizobia-like bacteria were found occupying the nodules. However, Koch's postulates and the symbiotic parameters of the isolates were not investigated. Shiraishi and collaborators also reported symbiotic genes in the \u03b3-Proteobacteria subdivision. The authors described nod and nif genes in the Pseudomonas sp. strain Ch10048, sharing high similarity with the symbiotic sequences of Agrobacterium sp., suggesting the acquisition of these genes through HGT from rhizobial species in the soil.
Despite the report of nod and nif genes in strain Ch10048, and the confirmation of its ability to nodulate the host legume Robinia pseudoacacia, the existence of \u03b3-rhizobia remains controversial until additional evidence confirms that the genes were not provided by other bacteria coexisting in the nodules and until the nitrogen fixation ability of the strain is tested. Such reports date back to 2004, when Benhizia and collaborators published the isolation of \u03b3-Proteobacteria from legume nodules.
Undoubtedly, rhizobia taxonomy has advanced together with prokaryote taxonomy, and improvements in our understanding of the origin and evolution of these bacteria were obtained. However, there is a need for more studies relating taxonomy and phylogeny with the phylogeny of nitrogen fixation and with the biotechnological properties of rhizobia.
As commented before, the members of the Subcommittee on Taxonomy of Rhizobia and Agrobacteria of the ICSP reviewed the taxonomic developments for this group of bacteria and updated the minimal standards for taxonomic studies, including additional considerations specific to rhizobia and agrobacteria. According to them, taxonomic definitions should not include symbiotic or pathogenic characters, because the interactions with plants are determined by accessory genes that may be present in several bacterial species and be gained or lost, which limits their taxonomic value. Phaseolus vulgaris, Macroptilium atropurpureum, Vigna unguiculata, and Mimosa pudica are promiscuous legumes commonly used in taxonomic studies of rhizobia. The symbiotic ability may be evaluated, compared to negative controls, by the presence/absence of root nodules, plant biomass, N content, or the acetylene reduction assay.
Concerning the description of new rhizobial species, it is especially recommended to evaluate the symbiotic ability of the strains based on Koch's postulates using the original host and/or other legume species. This last alternative may be used to expand the information about the host range of the strains and to define symbiovar groups, or when the seeds of the original host plant are not available. In addition, the strains must be reisolated from the nodules, keeping the original phenotypic, phylogenetic, or genotypic features, obeying Koch's postulates.
A comparison of \u03b1-rhizobia and \u03b2-rhizobia showed that rhizobial genomes range from 3.42\u2009Mb in Cupriavidus taiwanensis LMG 19424 to 9.36\u2009Mb in Microvirga lupini Lut6. However, the authors highlighted that some strains of the Bradyrhizobium, Mesorhizobium, and Azorhizobium genera might have larger genomes, which means that rhizobial genomes can be twice or more the average size of bacterial genomes reported in the meta-analysis of 1,708 completed bacterial genomes performed in 2017 by diCenzo and Finan. Moreover, Ensifer strains from the symbiotic clade carried an average of 325 fewer genes and appeared to have fewer rRNA operons when compared to strains belonging to the nonsymbiotic clade.
In general, the rhizobial genes responsible for plant infection, nodulation, and nitrogen fixation are clustered together in symbiotic plasmids or in symbiotic islands in the chromosome, or even in both genomic regions. Besides the main chromosome, some bacteria have a \u201csecond chromosome\u201d or \u201cmegaplasmid,\u201d for which the term \u201cchromid\u201d was proposed; some Agrobacterium species, for example, have linear chromids carrying a unique replication system and conserved genes.
These elements have some core genes and a nucleotide composition similar to the associated chromosomes, but most of their genes are accessory. Some rhizobia and agrobacteria also have genus-specific chromids, similar within a genus but with different sets of conserved genes among genera.
As biological nitrogen fixation is considered one of the most important biological processes for life on Earth, there is great biotechnological interest in diazotrophic bacteria. Remigi and collaborators reviewed the evolution of symbiosis and noted that nif and nod genes have different phylogenies, implying that rhizobia inherited the nitrogen fixation ability of their free-living relatives.
Evidence indicates that the Bradyrhizobium genus might be the rhizobia's ancestor, with Bradyrhizobium originating 553 million years ago (MYA). Other rhizobia evolved 400-324 MYA, originating the Mesorhizobium, Rhizobium, and Sinorhizobium (=Ensifer) genera. Interestingly, the first legumes ascended on Earth long after, around 70 MYA. Another argument for Bradyrhizobium ancestry is that some strains were detected with nitrogen fixation ability as free-living bacteria, as observed in some Azorhizobium, and both lineages are very distant from the other rhizobial genera.
It is well known that bacteria have different mechanisms to exchange genetic material. This event is more recurrent among organisms sharing the same ecological environment, reinforcing that some rhizobia evolved by acquiring symbiosis genes from other species by HGT. Furthermore, studies of A. caulinodans reported an increase in the horizontal transference frequency of its symbiosis island in the legume rhizosphere or in the presence of plant flavonoids, suggesting a host-dependent evolution.
As presented in this review, the main goal of taxonomy is to ordinate living organisms in a stable and hierarchical system.
As shown in"} +{"text": "Pancreatic cancer is the fourth leading cause of cancer\u2010related death with a 10% 5\u2010year overall survival rate (OS). Radiation therapy (RT) in addition to dose escalation improves the outcome by significantly increasing the OS at 2 and 3 years but is hindered by the toxicity of the duodenum. Our group showed that the insertion of hydrogel spacer reduces duodenal toxicity, but the complex anatomy and the demanding procedure make the benefits highly uncertain. Here, we investigated the feasibility of augmenting the workflow with intraoperative feedback to reduce the adverse effects of the uncertainties.We simulated three scenarios of the virtual spacer for four cadavers with two types of gross tumor volume (GTV) ; first, the ideal injection; second, the nonideal injection that incorporates common spacer placement uncertainties; and third, the corrective injection that uses the simulation result from nonideal injection and is designed to compensate for the effect of uncertainties. We considered two common uncertainties: (1) \u201cNarrowing\u201d is defined as the injection of smaller spacer volume than planned. (2) \u201cMissing part\u201d is defined as failure to inject spacer in the ascending section of the duodenum. A total of 32 stereotactic body radiation therapy (SBRT) plans (33\u00a0Gy in 5 fractions) were designed, for four cadavers, two GTV sizes, and two types of uncertainties. The preinjection scenario for each case was compared with three scenarios of virtual spacer placement from the dosimetric and geometric points of view.We found that the overlapping PTV space with the duodenum is an informative quantity for determining the effective location of the spacer. The ideal spacer distribution reduced the duodenal V33Gy for small and large GTV to less than 0.3 and 0.1cc, from an average of 3.3cc, and 1.2cc for the preinjection scenario. 
However, spacer placement uncertainties reduced the efficacy of the spacer in sparing the duodenum. The separation between duodenum and GTV decreased by an average of 5.3 and 4.6\u00a0mm. The corrective feedback can effectively bring back the expected benefits from the ideal location of the spacer (averaged V33Gy of 0.4 and 0.1cc).
An informative feedback metric was introduced and used to mitigate the effect of spacer placement uncertainties and maximize the benefits of the EUS\u2010guided procedure.
Abbreviations: CT, computed tomography; EUS, endoscopic ultrasound; GTV, gross tumor volume; HOP, head of pancreas; OAR, organs at risk; OS, overall survival rate; OVH, overlapped volume histogram; PTV, planning target volume; RT, radiation therapy; SBRT, stereotactic body radiation therapy.
1
Pancreatic cancer is the fourth leading cause of cancer\u2010related death and the 12th most common malignancy in the US, with less than a 10% 5\u2010year overall survival rate (OS). However, the success of the spacer placement procedure is highly uncertain. Previous studies on rectal spacers have shown that hydrogel spacer injection is associated with risk of infection, inflammation, and soft\u2010tissue wall infiltration. We hypothesize that a novel spacer placement workflow for duodenal hydrogel spacer featuring corrective intraoperative feedback will increase the robustness of the minimally invasive EUS\u2010guided procedure and reduce the associated risks and uncertainties. Thus, the purpose of this study was, first, to find the most informative feedback to guide the spacer injection procedure and, second, to show the feasibility and benefit of using corrective feedback and injections to optimize the placement of the spacer.
We believe that the intraoperative feedback and corrective injection increase the efficiency of delivering the preoperative ideal spacer placement plan and, thus, of the entire procedure. First, the article explains how the data were collected and prepared for the study, and then describes the method used to simulate common uncertainties of spacer insertion. Next, it provides information on the radiotherapy planning protocol, and finally, it focuses on introducing the informative feedback for corrective injection and evaluating the result from various aspects.
2
2.1
For this study, we used the data from four cadavers injected with hydrogel spacer. For each cadaver, two computed tomography (CT) scans are available, before hydrogel spacer placement and after the injection of hydrogel spacer. A biodegradable polyethylene glycol hydrogel was injected through an 18\u2010gauge needle under EUS guidance into the pancreaticoduodenal groove. This allowed us to, first, validate our spacer simulation algorithm on paired pre\u2010 and postinjection scans, and then use the platform to perform the spacer simulation study on preinjection scans with high confidence. All ROIs (regions of interest) were segmented by a certified physician in our institute. All scans were acquired with 3\u2010mm slice thickness, 120 kVp, 200\u00a0mA, and a field of view of 50\u00a0cm. For further analysis, CT scans and contours were exported in the Digital Imaging and Communications in Medicine (DICOM) format using commercial software, Varian Velocity.
The anonymized data were then imported to MATLAB for simulation and analysis.
2.2
We simulated the duodenal spacer placement scenarios using our in\u2010house finite element\u2010based spacer simulation platform, finite element model\u2010oriented spacer simulation (FEMOSSA). The duodenum was modeled with a strain energy function of the strains Eii, with the c, a1, a2, and a3 parameters set to 1.05, 41.4, 51.1, and 13.2, respectively. The HOP was modeled with a linear elastic behavior as a homogeneous, incompressible isotropic material, with a Young's modulus of 30\u00a0kPa and a Poisson coefficient of 0.48.
The spacer injection process was defined as a translation of an ensemble of blebs from an initial position toward the final position, the desired spacer distribution. To initialize the simulation of bleb\u2010surface contact, each bleb was placed tangent to the contour surface that is going to be deformed, as close as possible to its final location. The desired spacer distribution, the final position of the blebs, was created by placing an ensemble of spherical objects (blebs) with various radii using the FEMOSSA built\u2010in graphical user interface. The blebs push the proximal contour surface on their way from the initial to the final position and thus deform organs during this transition. This innovative and simplified definition was used to turn this complex physical phenomenon into a manageable quasi\u2010static problem while capturing the dynamics of the process.
To ensure a well\u2010posed FE (finite element) problem, we used anatomic boundary conditions inspired by the duodenum\u2010pancreas interface, shown in Figure\u00a0. We compared the pre\u2010 and postinjection scans to understand the effect of spacer placement. We observed that the inferior surface of the duodenal horizontal section (D3) stays in relatively the same position. However, the duodenal descending and ascending parts (D2 and D4) move considerably.
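The linear elastic HOP model above is fully specified by its Young's modulus (30 kPa) and Poisson coefficient (0.48). As a minimal illustration (not part of FEMOSSA), converting these two constants into the Lamé parameters used by most FE solvers can be sketched as:

```python
def lame_parameters(young_kpa: float, poisson: float):
    """Convert (E, nu) into the Lame parameters (lambda, mu) of a
    linear elastic, isotropic material. Units follow the input (kPa)."""
    mu = young_kpa / (2.0 * (1.0 + poisson))  # shear modulus
    lam = young_kpa * poisson / ((1.0 + poisson) * (1.0 - 2.0 * poisson))
    return lam, mu

# HOP values from the text: E = 30 kPa, nu = 0.48 (nearly incompressible)
lam, mu = lame_parameters(30.0, 0.48)
```

With nu close to 0.5, lambda grows much larger than mu (here roughly 243 kPa versus 10 kPa), which is the numerical signature of the near-incompressibility assumed for the HOP.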
Because of the higher stiffness of the stomach and sphincter, the movement of the duodenal section immediately after the stomach (D1) is limited. We incorporated these anatomical restrictions by bounding the mesh nodes of the inferior surface of the D3 and the nodes within a 2\u2010mm distance from the stomach. Based on our observations, the HOP showed a local deformation rather than a global movement. Thus, the superior and inferior margins of the HOP mesh were fixated, preventing the target structure from global movement while allowing local deformation.
The model was validated on the postinjection scans from cadavers. For the validation purpose, the distribution of spacer was determined by aligning the pre\u2010 and postinjection scans by HOP, because of its lack of global movement as mentioned earlier. Three figures of merit were used for validation: the Dice similarity coefficient, the radial nearest neighbor distance, and the overlapped volume histogram (OVH).
2.3
To perform the hydrogel spacer placement simulation, we divided the duodenum into three anatomical parts: P1, the descending part of the duodenum (D1 & D2); P2, the horizontal part (D3); and P3, the ascending part (D4) (Figure\u00a0). A single-value 3D measurement using the OVH distance L1cc was used for the initial evaluation of the separation between OAR and tumor. The OVH is an on\u2010demand quantity that shows the 3D relative geometry of ROIs. Previously, it has been shown to have a high correlation with dosimetric indices. To determine the optimal spacer distribution, we manually placed the blebs where the planning target volume (PTV) overlaps with the duodenum. In this study, our goal was to achieve 95% PTV volume coverage with the prescription dose without violating OAR constraints.
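The validation metrics named above are standard geometric quantities. A minimal sketch of two of them, the Dice similarity coefficient and an OVH-style L1cc distance, is given below; the exact definitions used by FEMOSSA may differ, and the unsigned brute-force distance here is only illustrative:

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def ovh_l1cc(oar_mask, target_mask, voxel_mm=(1.0, 1.0, 1.0), voxel_cc=0.001):
    """Distance (mm) enclosing the ~1 cc of the OAR closest to the target
    (one reading of the 'OVH L1cc' distance; an assumption, not a quote)."""
    spacing = np.asarray(voxel_mm, dtype=float)
    tgt = np.argwhere(target_mask) * spacing
    oar = np.argwhere(oar_mask) * spacing
    # Brute-force minimum distance from every OAR voxel to the target voxels
    d = np.sqrt(((oar[:, None, :] - tgt[None, :, :]) ** 2).sum(-1)).min(axis=1)
    d.sort()
    n_voxels = max(1, int(np.ceil(1.0 / voxel_cc)))  # voxels making up ~1 cc
    return d[min(n_voxels, d.size) - 1]
```

For clinical-size volumes the pairwise distance matrix would be replaced by a distance transform, but the quantity computed is the same.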
As a result, the chosen spacer distribution aims at minimizing the PTV overlapping volume with the duodenum (shown in Figure\u00a0).
For this feasibility study, two common uncertainties associated with spacer injection were simulated to show the corrective feedback: (1) narrowing uncertainty for the small GTV case and (2) missing part uncertainty for the large GTV case. The \u201cnarrowing\u201d was defined as the injection of less spacer volume than was suggested by the ideal injection scenario. To simulate the narrowing, we randomly reduced the radii of the blebs, resulting in a decrease in the overall volume of the spacer. The \u201cmissing part\u201d was simulated by omitting the injection in the ascending section of the duodenum (P3). Although the preoperative placement planning recommends the injection in P3, due to the hard\u2010to\u2010reach location of P3 it may not be injected.
2.4
For each case, eight scenarios were planned with stereotactic body radiation therapy (SBRT) techniques. Each case has two GTV types with their corresponding uncertainties. For each GTV type, there are four scenarios: preinjection, ideal injection, nonideal injection, and corrective injection. Here, we used the 2D PTV overlap with the duodenum metric to determine the ideal distribution of the virtual spacer. The PTV was created based on the clinical planning protocol in our institute by first expanding the GTV by 3\u00a0mm to get the mock multiple breath\u2010hold GTV (GTV\u2010multabc) and then expanding it further by 2\u00a0mm.
The preinjection scenario is based on the ROIs' relative geometry before injection of the hydrogel. The ideal injection scenario is the simulation of the virtual spacer with the preoperative ideal spacer distribution. The nonideal injection scenario was simulated based on the distribution from the ideal scenario while incorporating the uncertainties.
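The "narrowing" uncertainty described above amounts to a random shrinkage of the bleb radii. A toy version of that perturbation (the 30% maximum shrink factor is an arbitrary assumption, not a value from the study) could look like:

```python
import numpy as np

def sphere_volume_cc(radii_mm):
    """Total volume (cc) of an ensemble of spherical blebs; 1 cc = 1000 mm^3."""
    r = np.asarray(radii_mm, dtype=float)
    return (4.0 / 3.0) * np.pi * np.sum(r ** 3) / 1000.0

def narrow_blebs(radii_mm, max_shrink=0.3, seed=0):
    """Randomly shrink each bleb radius by up to max_shrink (fraction),
    mimicking the injection of less spacer volume than planned."""
    rng = np.random.default_rng(seed)
    factors = 1.0 - rng.uniform(0.0, max_shrink, size=len(radii_mm))
    return np.asarray(radii_mm, dtype=float) * factors

planned = [4.0, 5.0, 6.0]  # bleb radii in mm (illustrative values)
narrowed = narrow_blebs(planned)
```

The shrunken radii would then be fed back into the same FE contact simulation as the ideal distribution, which is what makes the nonideal scenario directly comparable.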
Finally, for the corrective injection case, the simulated ROIs from the nonideal case were used to perform another placement of the virtual spacer so that the PTV overlapping volume with the duodenum is minimized (Figure\u00a0). A total of 32 volumetric modulated arc therapy (VMAT) SBRT plans (33\u00a0Gy in 5 fractions) were designed, for four cadavers and eight scenarios. The planning objectives and constraints, approved by our institute board, were as follows: at least 95% of PTV volume receives \u226533\u00a0Gy, 100% of PTV volume receives \u226525\u00a0Gy, less than 1 cc of PTV volume receives \u226542.9\u00a0Gy, at least 95% of GTV\u2010multabc volume receives \u226533\u00a0Gy, 100% of GTV volume receives \u226533\u00a0Gy, less than 25% of kidney volume receives \u226512\u00a0Gy, less than 50% of liver volume receives \u226512\u00a0Gy, less than 20 cc of duodenum, stomach, and bowel volume receives \u226520\u00a0Gy, less than 1 cc of duodenum, stomach, and bowel volume receives \u226533\u00a0Gy, and less than 1 cc of spinal cord volume receives \u22658\u00a0Gy. To avoid any planning bias, the planning parameters, namely the number of beams, the number of iterations, and the objective functions, were identical for all the plans. To make the plans comparable, later in optimization, we forced the optimization to achieve 95% PTV volume coverage by adding an extra constraint. The plans were designed and optimized using the RayStation treatment planning system.
2.5
The ideal spacer distribution was chosen so that the PTV overlapping volume with the duodenum is minimized, as seen in the second column of Figure\u00a0.
3
The preinjection GTV and duodenum contours were deformed using the simulated deformation vector field to create the postsimulation contours. The preinjection and deformed postsimulation contours were then used for RT planning and analysis (Figure\u00a0). The spacer\u2010induced separation was measured using the OVH L1cc distance.
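The planning objectives listed above can be encoded as simple volume-at-dose rules. The sketch below (the structure names and the encoding are ours, not an interface of the planning system) checks a voxel dose array against a few of them:

```python
import numpy as np

# A subset of the SBRT objectives from the text (33 Gy in 5 fractions).
# Each rule: (structure, dose_Gy, kind, limit). 'min_pct' means at least
# `limit` percent of the structure receives >= dose_Gy; 'max_cc' means
# less than `limit` cc receives >= dose_Gy.
CONSTRAINTS = [
    ("PTV",      33.0, "min_pct", 95.0),
    ("PTV",      25.0, "min_pct", 100.0),
    ("PTV",      42.9, "max_cc",  1.0),
    ("duodenum", 20.0, "max_cc",  20.0),
    ("duodenum", 33.0, "max_cc",  1.0),
    ("cord",      8.0, "max_cc",  1.0),
]

def volume_at_dose(dose_gy, threshold, voxel_cc=None):
    """V_D: percent (default) or absolute cc (if voxel_cc is given) of the
    structure receiving at least `threshold` Gy."""
    hit = np.asarray(dose_gy) >= threshold
    return hit.sum() * voxel_cc if voxel_cc is not None else 100.0 * hit.mean()

def plan_passes(doses, voxel_cc):
    """doses: dict mapping structure name -> 1-D array of voxel doses (Gy)."""
    for struct, d, kind, limit in CONSTRAINTS:
        if struct not in doses:
            continue  # structure not contoured in this plan
        if kind == "min_pct" and volume_at_dose(doses[struct], d) < limit:
            return False
        if kind == "max_cc" and volume_at_dose(doses[struct], d, voxel_cc) >= limit:
            return False
    return True
```

The duodenal V33Gy values reported in the results are exactly the `max_cc`-style quantity computed here for the duodenum at 33 Gy.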
We also compared the duodenal high-dose volume, V33Gy, as it is critical for dose escalation, which is the key to the increase in the OS rate. Although we used the same planning parameters, each plan may achieve a different amount of target coverage depending on the patient anatomy and the need to spare the duodenum. As mentioned in Section 2, we made the plans comparable by adding an extra constraint to force the optimization to achieve 95% PTV coverage. This constraint resulted in all the plans having a PTV 33\u00a0Gy coverage of between 95 and 96%. The averaged preinjection duodenal V33Gy were 3.3 and 1.2cc for small and large GTV, respectively, and were reduced to 0.3 and 0.1cc after ideal spacer injection (shown in Figure\u00a0).
4
In this work, we investigated the feasibility and advantages of the duodenal hydrogel spacer placement procedure featuring corrective feedback. The data from four cadavers were simulated with a virtual spacer using our in\u2010house, physics\u2010based, patient\u2010specific spacer simulation algorithm, FEMOSSA. Previously, we have applied FEMOSSA to rectal spacers and shown its advantages over previous models. To show the efficacy of the corrective feedback in the hydrogel spacer injection procedure, we used the OVH L1cc distance and the duodenal high-dose volume (V33Gy). OVH is a useful 3D physical feedback measure that quantifies the spatial separation between target and OAR. OVH distances have been shown to be correlated with dosimetric indices and have been used for predicting the duodenal dose and for automated and semiautomatic treatment planning.
Because we performed two sequential simulations for the corrective injection scenario, the spacer distribution created for the ideal injection is different from that for the corrective injection. More specifically, first, the nonideal distribution of spacer was simulated, and then the new scan and structures were used to determine the distribution of spacer needed to be injected.
Then, we performed a second FE simulation to create the corrective injection. We believe that this method results in a more realistic simulation of corrective injection. Thus, as expected, the nonlinearity of the virtual spacer simulation resulted in getting different values for the corrective injection than the ideal injection, which more closely resembles what happens in the actual injection procedure.To find the ideal location of the spacer, we proposed using the overlapping PTV volume with the duodenum. In practice, the PTV volume is used to incorporate uncertainties, like motion and setup uncertainty. Therefore, minimizing the duodenum\u2010PTV overlap not only increases the duodenum sparing, but also can introduce the spacer as a buffer\u2010like structure to reduce RT planning uncertainties.The duodenum\u2010PTV overlapping volume can also provide informative feedback during the procedure. Because the procedure is done using an ultrasound endoscopic probe, the gastroenterologist can only inspect the outcome of injection from the 2D ultrasound scan. Thus, the 2D measurement of the distance between tumor and duodenum is useful guidance for the procedure. However, the main challenge is the registration of 2D ultrasound images to the CT and measuring the amount of separation, which is the aim of our current and future studies.We are aware that our study has a few limitations. First, due to the novelty of the duodenal spacer, the number of cases is very limited and has not been widely used in the clinic. One potential reason may be the complexity and the high uncertainty in the actual outcome of the procedure, which is the main motivation of this work. 
Thus, we believe that this feasibility study shows that using corrective feedback improves the spacer procedure outcome and promotes the use of the spacer to improve the quality of RT treatment.
Moreover, we did not incorporate the effect of breathing and normal organ movements here, as these movements may induce further uncertainties in the actual procedure. The main goal of this study, however, was to demonstrate the effectiveness of the preoperative design, intraoperative evaluation, and potential correction. Although such movements are unavoidable, we believe their effect can be minimized by taking advantage of near real\u2010time AI\u2010based systems. Currently, we are developing a portable C\u2010arm X\u2010ray\u2010based AI feedback system, in which X\u2010ray images are intraoperatively acquired to locate and track the spacer. The spacer is automatically segmented, and then its volume is reconstructed from the X\u2010ray projections in near real time to be compared with the ideal distribution of spacer. As a result, this system can provide the physician with comprehensive image guidance for potential corrective injections.
Another limitation of the study is that we only considered two types of uncertainties in the spacer placement procedure. However, there are more underlying uncertainties involved in the process, for instance, uncertainty in the FE modeling process, such as the computing platform, choice of boundary conditions, element type, and material properties.
5
In this work, we investigated the feasibility and benefits of intraoperative corrective feedback for the duodenal spacer placement procedure. Our simulation results showed that corrective feedback compensated for common uncertainties associated with the spacer placement procedure and thus increased the effectiveness of the complicated EUS\u2010guided spacer placement.
We showed that the PTV overlapping volume with the OAR is an on\u2010demand and informative potential intraoperative feedback metric that can guide the physician during the procedure. Future work focuses on (1) developing a decision support system that predicts the optimum location of the spacer, and (2) implementing the intraoperative feedback system to localize the spacer and provide quantitative and visual feedback during the actual procedure.
The authors have no conflict to disclose."}
{"text": "Using Jaml conditional knockout mice, we demonstrated that JAML promoted AKI mainly via a macrophage-dependent mechanism and found that JAML-mediated macrophage phenotype polarization and efferocytosis is one of the critical signal transduction pathways linking inflammatory responses to AKI. Mechanistically, the effects of JAML on the regulation of macrophages were, at least in part, associated with a macrophage-inducible C-type lectin\u2013dependent mechanism. Collectively, our studies explore, for the first time to our knowledge, new biological functions of JAML in macrophages and conclude that JAML is an important mediator and biomarker of AKI. Pharmacological targeting of JAML-mediated signaling pathways at multiple levels may provide a novel therapeutic strategy for patients with AKI.
Although macrophages are undoubtedly attractive therapeutic targets for acute kidney injury (AKI) because of their critical roles in renal inflammation and repair, the underlying mechanisms of macrophage phenotype switching and efferocytosis in the regulation of inflammatory responses during AKI are still largely unclear. The present study elucidated the role of junctional adhesion molecule\u2013like protein (JAML) in the pathogenesis of AKI.
We found that JAML was significantly upregulated in kidneys from 2 different murine AKI models including renal ischemia/reperfusion injury (IRI) and cisplatin-induced AKI, and we dissected its cellular sources by generation of bone marrow chimeric mice as well as macrophage-specific and tubular cell\u2013specific conditional knockouts.
Acute kidney injury (AKI), often caused by renal ischemia/reperfusion injury (IRI), nephrotoxic agents, and sepsis, is a global public health concern associated with high morbidity and mortality. Recently, the role of junctional adhesion molecules (JAMs) of the immunoglobulin superfamily has attracted much attention because of their important functions in immune cell activation and inflammatory responses.
By immunohistochemical (IHC) staining analysis, we first examined JAML expression in injured kidneys. We then generated global Jaml-knockout (Jaml\u2013/\u2013) mice, which were confirmed by mRNA (https://doi.org/10.1172/jci.insight.158571DS1) and Western blot analyses. Renal damage was reduced in Jaml\u2013/\u2013 mice compared with wild-type (WT) mice with renal IRI. Furthermore, JAML deficiency attenuated inflammatory responses by decreasing macrophage and neutrophil infiltration.
Considering that JAML was expressed not only in immune cells such as macrophages but also in renal parenchymal cells, we next determined JAML's contribution in bone marrow\u2013derived (BM-derived) immune cells or renal parenchymal cells individually to the pathogenesis of renal IRI through BM transplantation studies. Chimeric mice were created, in which BM was replaced with donor BM cells from WT or from Jaml\u2013/\u2013 mice, with a replacement efficiency of about 80%. Notably, loss of JAML in BM-derived cells led to obvious damage reduction compared to WT\u2192 WT or Jaml\u2013/\u2013\u2192 WT chimeras, respectively, although only a trend toward reduction was observed in the latter comparison. Thus, the presence of JAML in immune cells worsened renal damage and appears to be a major contributor to AKI.
Kidney macrophages were further divided into infiltrating macrophages, which are CD11bhiF4/80lo (F4/80lo), and resident macrophages, which are largely embryonically derived and are CD11bloF4/80hi (F4/80hi).
We next generated myeloid-specific Jaml-knockout (LysM-Cre+ Jamlfl/fl) mice by intercrossing Jaml-floxed mice with LysM (Lyz2)-Cre mice. JAML deficiency decreased macrophage-inducible C-type lectin (Mincle, encoded by Clec4e) expression in the kidney but had no significant effects on other CLRs (C-type lectin receptors, a family of PRRs), such as Clec4d, Clec1b, Clec2h, Clec7a, Clec9a, and Clec12a. We also examined the macrophage subsets, F4/80lo (infiltrating) and F4/80hi (resident), in the mouse kidney by flow cytometry analysis. A significantly higher proportion of M2 (CD206hi) and a lower proportion of M1 (CD80hi) macrophages were found in Jaml\u2013/\u2013 IRI mice than in control mice.
Macrophages have been increasingly linked to renal inflammation and repair in AKI. Therefore, achieving the full therapeutic potential of macrophages for patients with AKI requires a better understanding of the regulation of macrophage dynamics and functions. In this study, we found that JAML was significantly upregulated in kidneys from 2 separate murine AKI models including renal IRI and cisplatin-induced AKI.
In addition to our recent finding that JAML mediates podocyte lipid metabolism in diabetic kidney disease, we here describe functions of JAML in macrophages. Mincle, encoded by Clec4e, is a member of the CLR family and is involved in the initiation of the innate immune response. Moreover, as a sensor of cell death, Mincle can also recognize damage-associated molecular patterns, which induce inflammatory responses and enable immune sensing of damaged self; this decreases dead cell clearance, thereby aggravating a vicious cycle of necroinflammation.
Male mice were purchased from Vital River Laboratory Animal Technology Co., Ltd. Different groups were allocated in a randomized manner, and investigators were unaware of the allocation of different groups when doing surgeries. All mice (3\u20135 mice per cage) were housed under standard laboratory conditions in the specific pathogen\u2013free experimental animal center of Shandong University. Male mice were used in this study.
The number of mice used for the experiments is indicated in the corresponding figure legends. All our experimental animals were kept under barrier conditions under constant veterinary supervision and did not display signs of distress or pathological changes that warranted veterinary intervention.
Jamlfl/+ mice were generated by standard homologous recombination in Shanghai Southern Model Biotechnology Development Co., Ltd. In these mice, exon 4 of Jaml was flanked by loxP sequences. Global Jaml-knockout mice (Jaml\u2013/\u2013) were obtained as described in our previous studies, in which the adenovirus EIIa promoter directs the expression of Cre enzyme in early mouse embryos (2- to 8-cell stage) to achieve homologous recombination between loxP sites, thereby triggering the deletion of exon 4 in all cells of the developing animal, including the germ cells that transmit the genetic alteration to progeny. The first generation of EIIa-Cre Jamlfl/+ mice might be chimeric due to the mosaic activity of Cre recombinase. Therefore, chimeric offspring were backcrossed with C57BL/6J to generate Jaml+/\u2013 mice, which were then intercrossed for the production of Jaml\u2013/\u2013 mice. Mouse genotyping was performed by PCR using genomic DNA isolated from mouse tails.
To obtain myeloid cell\u2013specific deletion of Jaml, Jamlfl/fl mice were crossed with mice expressing Cre recombinase under the control of the lysozyme 2 promoter. Although LysM is not a specific marker for macrophages, Lyz2(LysM)-Cre mice currently provide relatively highly efficient gene depletion in mature macrophages and granulocytes isolated from the peritoneal cavity or derived from BM.
Jamlfl/fl mice (C57BL/6J background) were hybridized with transgenic mice expressing Cre recombinase under the cadherin 16 promoter (Ksp-Cre) to generate tubular cell\u2013specific Jaml-knockout mice (Ksp-Cre+ Jamlfl/fl). Jamlfl/fl and Ksp-Cre mice were all used in a C57BL/6 background.
Age-matched mice with 2 WT alleles and Cre expression were used as controls (Ksp-Cre+ Jaml+/+). Mouse genotyping was performed at 2 weeks of age by PCR using genomic DNA isolated from mouse tails.
Donor mice were sacrificed, and tibias and femurs were flushed with medium as described in our previous studies. After centrifugation at 900g for 10 minutes at 4\u00b0C, cells were resuspended in serum-free RPMI, and cell number was determined using a cell counter. Then, 8-week-old recipient mice were lethally irradiated and injected with 5 \u00d7 10^6 BM cells (volume 0.2 mL) via the tail vein 6 hours after irradiation. Mice were kept on an antibiotic (1 g/L sulfamethazine in drinking water) for 2 weeks after irradiation and then switched to water without antibiotics. We used GFP-transgenic mice as donors to confirm efficient replacement. In this experiment, we used mice matched for age and genetic background and transplanted with appropriate BM as controls.
An established mouse model of renal IRI was performed as described previously. AKI in mice was also induced by a single intraperitoneal injection of cisplatin at a dose of 30 mg/kg (MilliporeSigma). At 3 days, 5 days, and 10 days after injection, mice were sacrificed, and kidney samples were collected for various analyses.
BMDMs were isolated from tibias and femurs of mice in a procedure similar to that used for the extraction of BMDMs in the BM transplantation experiment. BM cells were cultured in DMEM supplemented with 10% FBS and macrophage colony-stimulating factor (20 ng/mL) at 37\u00b0C under 5% CO2 conditions. Rat proximal tubule epithelial cells (NRK-52E) were purchased from ATCC and cultured in DMEM containing 5% FBS and penicillin/streptomycin. Jurkat cells were provided by Stem Cell Bank and cultured in DMEM supplemented with 10% FBS and 100 U/mL penicillin plus 0.1 mg/mL streptomycin.
The sections were stained by using a 4-color multiple fluorescent immunohistochemical staining kit (Absin).
Slides were prepared and stained as previously described; these procedures were performed using standard techniques. Cell suspensions were centrifuged for 5 minutes at 4°C after passing through a 100 μm strainer. Cell pellets were resuspended and washed with PBS, and then monocytes were isolated by density gradient centrifugation with Percoll (Solarbio P8370). Neutrophils were isolated with the EasySep Mouse Neutrophil Enrichment Kit according to the manufacturer's instructions. The anti-CD16/CD32 antibody was used to block FcγRIII/II to minimize nonspecific antibody binding. Cells were then stained with fluorescently conjugated antibodies (10^6 cells/1 μg), as summarized in the supplemental material. Infiltrating macrophages are CD45+CD11bhiF4/80lo; resident macrophages are CD45+CD11bloF4/80hi. Cell apoptosis was determined by propidium iodide-annexin V staining as described. The microarray experiments were performed by Sinotech Genomics Corporation. Microarray data sets have been deposited in the National Center for Biotechnology Information's Gene Expression Omnibus under accession code GSE192532. BMDMs used for phenotype switching were isolated from WT and Jaml–/– mice and cultured for 7 days. Isolated BMDMs were starved for 4 hours (M0) in medium without serum, then polarized to M2 with IL-4 (40 ng/mL) or M1 with LPS (100 ng/mL). Then, 8 hours later, M2 macrophages were switched to M1 with LPS (100 ng/mL), and M1 macrophages were treated with the M2 stimulus IL-4 (40 ng/mL) for another 8 hours. Cells were harvested for mRNA extraction, and culture medium was collected for ELISA analysis 12 hours after replacement of the nonstimulus culture medium. The expression level of M1 or M2 markers was detected by real-time RT-PCR and ELISA analyses. For the in vivo efferocytosis assay, labeled apoptotic cells were delivered by intraperitoneal injection; a total of 45 minutes later, the mice were sacrificed and the peritoneum was lavaged.
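The marker scheme described here (infiltrating macrophages CD45+CD11bhi F4/80lo; resident macrophages CD45+CD11blo F4/80hi) amounts to boolean gating on cytometry event data. The sketch below is illustrative only: the gate thresholds and synthetic event matrix are hypothetical placeholders, since real gates are set per experiment on the cytometer.

```python
import numpy as np

# Illustrative gate thresholds in arbitrary fluorescence units (hypothetical,
# not the study's actual cytometer settings).
CD45_POS, CD11B_HI, F480_HI = 1000.0, 5000.0, 5000.0

def gate_macrophages(events):
    """events: structured array with 'CD45', 'CD11b', 'F4_80' fields.
    Returns boolean masks for infiltrating (CD45+CD11bhiF4/80lo)
    and resident (CD45+CD11bloF4/80hi) macrophages."""
    cd45 = events["CD45"] > CD45_POS
    infiltrating = cd45 & (events["CD11b"] > CD11B_HI) & (events["F4_80"] <= F480_HI)
    resident = cd45 & (events["CD11b"] <= CD11B_HI) & (events["F4_80"] > F480_HI)
    return infiltrating, resident

# Synthetic events standing in for a compensated FCS file.
rng = np.random.default_rng(0)
n = 1000
events = np.zeros(n, dtype=[("CD45", float), ("CD11b", float), ("F4_80", float)])
events["CD45"] = rng.uniform(0, 2000, n)
events["CD11b"] = rng.uniform(0, 10000, n)
events["F4_80"] = rng.uniform(0, 10000, n)

inf_mask, res_mask = gate_macrophages(events)
pct_infiltrating = 100.0 * inf_mask.sum() / n
```

The two gates are mutually exclusive by construction (CD11b high vs. low), which mirrors how the two populations are distinguished in the text.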
Jurkat cells were irradiated under a 254 nm UV lamp for 15 minutes to induce apoptosis, followed by incubation under normal cell culture conditions for 2-3 hours. This method routinely yields more than 85% apoptotic cells (ACs), as previously described. The ACs were resuspended at 10^7 cells/mL, then incubated for 2 minutes with PKH26 red fluorescent dye (2 μM/10^7 cells). Isolated BMDMs were plated in dishes, and PKH26-labeled ACs were incubated with the macrophages for 45 minutes at a 1:5 macrophage/AC ratio. After 45 minutes, macrophages were washed 3 times with PBS to remove unbound ACs and then fixed with 4% formaldehyde for 20 minutes. Images were taken by fluorescence microscopy (Olympus model U-LH100-3), and the percentage of BMDMs labeled with PKH26-tagged apoptotic Jurkat cells was quantified. For in vivo assays, neutrophils were collected by peritoneal lavage from donor mice 6 hours after they had been injected with zymosan A. Neutrophils were isolated and then irradiated under a UV lamp (LightSources) to induce apoptosis. Apoptotic neutrophils labeled with PKH26 red fluorescent dye were injected into WT or Jaml–/– mice. Efferocytosis was analyzed by flow cytometry and quantified as the percentage of CD45+CD11b+F4/80+ macrophages that had taken up 1 or more PKH26-labeled apoptotic cells. These procedures are provided in the supplemental material. Microarray data have been deposited in the Gene Expression Omnibus (GSE192532). All other study data are included in the article and/or the supplement. Additional data related to this paper may be requested from the authors. A 2-tailed unpaired t test was used for normally distributed data and the Mann-Whitney rank sum test for non-normally distributed data. Differences between multiple groups with 1 variable were determined using 1-way ANOVA followed by post hoc Tukey's test. To compare multiple groups with more than 1 variable, 2-way ANOVA followed by post hoc Tukey's test was used.
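The two-group decision rule described here (normality check, then an unpaired t test or the Mann-Whitney rank sum test) can be sketched with SciPy. This is an illustrative sketch on synthetic samples: it substitutes the Shapiro-Wilk normality test for convenience, and the group names and values are placeholders, not study data.

```python
import numpy as np
from scipy import stats

def compare_two_groups(a, b, alpha=0.05):
    """Normality check on each group, then a two-tailed unpaired t test
    (normal) or Mann-Whitney rank sum test (non-normal).
    Shapiro-Wilk is used here as the normality check; it is a substitution,
    not necessarily the test used in the original analysis."""
    normal = (stats.shapiro(a).pvalue > alpha) and (stats.shapiro(b).pvalue > alpha)
    if normal:
        stat, p = stats.ttest_ind(a, b)  # two-tailed unpaired t test
        test = "t test"
    else:
        stat, p = stats.mannwhitneyu(a, b, alternative="two-sided")
        test = "Mann-Whitney"
    return test, stat, p

# Synthetic groups standing in for, e.g., a kidney-injury marker per mouse.
rng = np.random.default_rng(1)
sham = rng.normal(20, 5, 12)     # hypothetical control group
injured = rng.normal(60, 5, 12)  # hypothetical injury group
test_used, stat, p = compare_two_groups(sham, injured)
```

Either branch yields a very small p-value for groups this well separated, so the routing between the two tests does not change the conclusion here.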
For data with a non-Gaussian distribution, we performed nonparametric statistical analysis using the Kruskal-Wallis test followed by Dunn's post hoc test for multiple comparisons. Spearman's test was used for statistical analyses of the correlation between 2 variables. Statistical significance was denoted as *P < 0.05, **P < 0.01, and ***P < 0.001, with P < 0.05 as the threshold for significance. Different groups of mice were allocated in a randomized manner, and investigators were unaware of group allocation when performing surgeries and outcome evaluations. Exclusion criteria prior to the start of any of the in vivo studies were death, injury requiring euthanasia, or weight loss of more than 15%. Data are expressed as mean ± SEM. Statistical analyses were performed using GraphPad Prism 8.0. The normality assumption of the data distribution was assessed using the Kolmogorov-Smirnov test. Comparisons between 2 groups were performed using a 2-tailed Student's unpaired t test or the Mann-Whitney rank sum test, as appropriate. All human renal biopsy studies were conducted in accordance with the principles of the Declaration of Helsinki and were approved by the Research Ethics Committee of Shandong University after written informed consent was obtained from the participants. All experimental protocols for animal studies were approved by the Institutional Animal Care and Use Committee of the School of Basic Medical Sciences, Shandong University [Document KYLL-2017(KS)-395], and conducted in accordance with the NIH Guide for the Care and Use of Laboratory Animals. YS and FY designed research; WH, BOW, YF, SJC, JHZ, XYZ, RKL, JCW, ZYW, MW, XJW, and ML performed research; WH, YFH, XJW, ML, WT, YZ, YSX, YS, and FY analyzed data; and YS and FY wrote the paper."} +{"text": "Aneurysm wall enhancement (AWE) in high-resolution magnetic resonance imaging (HR-MRI) is a potential biomarker for evaluating unstable aneurysms.
Fusiform intracranial aneurysms (FIAs) frequently have a complex and curved structure. We aimed to develop a new three-dimensional (3D) aneurysmal wall enhancement (AWE) characterization method to enable comprehensive FIA evaluation and to investigate the ability of 3D-AWE to predict symptomatic FIA. We prospectively recruited patients with unruptured FIAs who underwent 3 T HR-MRI imaging from September 2017 to January 2019. 3D models of aneurysms and parent arteries were generated. Boundaries of the FIA were determined using 3D vessel diameter measurements. Dmax was the greatest diameter in the cross-section, while Lmax was the length of the centerline of the aneurysm. Signal intensity of the FIA was normalized to the pituitary stalk and then mapped onto the 3D model; the average enhancement (3D-AWEavg), maximum enhancement (3D-AWEmax), enhancement area (AWEarea), and enhancement ratio (AWEratio) were calculated as AWE indicators, and the surface area of the entire aneurysm (Aarea) was also calculated. Areas with high AWE were defined as those with a value >0.9 times the signal intensity of the pituitary stalk. Multivariable logistic regression analyses were performed to determine independent predictors of aneurysm-related symptoms. FIA subtypes were defined as fusiform, dolichoectasia, and transitional. Differences between the three FIA subtypes were also examined. Forty-seven patients with 47 FIAs were included. Mean patient age was 55 ± 12.62 years and 74.5% were male. Eighteen patients (38.3%) were symptomatic. After adjusting for baseline differences in age, hypertension, Lmax, Dmax, and FIA subtype, the multivariate logistic regression models showed that 3D-AWEavg, 3D-AWEmax, AWEarea, and AWEratio were independent predictors of aneurysm-related symptoms. Dmax and Aarea were larger, and 3D-AWEavg, 3D-AWEmax, AWEarea, and AWEratio were higher, with the transitional subtype than with the other two subtypes.
The new 3D AWE method, which enables the use of numerous new metrics, can predict symptomatic FIAs. Differences in 3D-AWE between the three FIA subtypes may be helpful in understanding the pathophysiology of FIAs. Fusiform intracranial aneurysms (FIAs) account for 3 to 13% of intracranial aneurysms (IAs). In previous histopathological IA studies, AWE correlated with atherosclerosis, neovascularization, and macrophage infiltration. The signal intensity of the aneurysm wall normalized to the pituitary stalk (CRstalk) was reported as the most reliable quantitative parameter of AWE, and a cut-off value of 0.90 had the highest sensitivity for identifying symptomatic FIAs. Patients with an unruptured FIA identified on computed tomography angiography (CTA) or magnetic resonance angiography (MRA) were prospectively recruited at Beijing Tiantan Hospital from September 2017 to January 2019. Institutional ethics committee approval was obtained. All participants provided written informed consent. Patients with an MRI contraindication, MRI of poor quality, incomplete MRI, history of surgical or endovascular IA treatment, or coexisting saccular IA, vascular dissection, or other intracranial cerebrovascular disease were excluded. Aneurysm-related symptoms were defined as sentinel headache or oculomotor nerve palsy. MRI was performed using a 3.0 T Trio-Tim, Ingenia CX, or Discovery 750 system with a 32-channel head coil. FIAs were localized using 3D time-of-flight MRA. The HR-MRI protocol included 3D T1-weighted imaging (SPACE/VISTA/CUBE), 3D T2/proton density imaging (SPACE/VISTA/CUBE), and contrast-enhanced 3D T1-weighted imaging (SPACE/VISTA/CUBE). Post-contrast T1-weighted images were obtained 6 min after injection of contrast (0.1 mmol/kg gadopentetate dimeglumine [Gd-DTPA]) using parameters identical to those of the pre-contrast T1-weighted images. Other sequence parameters are provided in the Supplementary material.
Images were acquired in the oblique coronal plane to cover the entire aneurysm. Voxel size was 0.7 × 0.7 × 0.7 mm3. 3D models were generated using 3D Slicer (http://www.slicer.org). The boundary of an FIA was defined as 1.5 times the diameter of the normal vessel. The signal intensity (SI) of the aneurysm wall was then normalized to SIstalk to obtain 3D-CRstalk, and AWEratio was defined as the percentage of the AWE area relative to the surface area of the entire aneurysm (Aarea). Then, using our in-house software, five FIA indicators in three dimensions were calculated: 3D-AWEavg, 3D-AWEmax, Aarea, AWEarea, and AWEratio. Interobserver reliability for the CRstalk measurements was assessed using the intraclass correlation coefficient (ICC) and classified as good (ICC 0.60-0.80) or excellent (ICC >0.80). Variables are expressed as numbers with percentages or medians with interquartile ranges. Categorical data were compared using Pearson's chi-square test. Continuous data were compared using the Mann-Whitney test or the Kruskal-Wallis H test. Fifty-nine patients with an FIA were recruited. We excluded six with poor quality or incomplete MRI, two with a history of endovascular or surgical aneurysm treatment, and four with other saccular IAs, dissections, or arteriovenous malformations. Finally, 47 patients with 47 FIAs were included for analysis. Two of the aneurysms had a Dmax of 8 mm and 10 mm, respectively, and one was located in the posterior circulation. FIA subtype was fusiform in 35 patients (74.5%), dolichoectatic in seven (14.9%), and transitional in five (10.6%). Patient characteristics are shown in the table. Age (p = 0.012) and prevalence of hypertension (p = 0.017) were significantly higher in the symptomatic group. FIA subtype significantly differed between the groups (p = 0.010). Lmax, 3D-AWEavg, 3D-AWEmax, AWEarea, and AWEratio were significantly higher in the symptomatic group. Dmax did not significantly differ between the groups (p = 0.743).
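The five surface indicators above can be sketched as area-weighted summaries over a triangulated wall surface. This is an illustrative reconstruction, not the authors' in-house software: the per-face normalized intensities (CRstalk) and face areas are placeholder arrays, and the 0.9 cutoff follows the high-AWE definition given earlier.

```python
import numpy as np

HIGH_AWE_CUTOFF = 0.9  # > 0.9 x pituitary-stalk signal intensity

def awe_metrics(cr_stalk, face_areas):
    """cr_stalk: per-face wall SI normalized to the pituitary stalk (3D-CRstalk).
    face_areas: per-face surface areas of the aneurysm mesh.
    Returns (3D-AWEavg, 3D-AWEmax, Aarea, AWEarea, AWEratio)."""
    a_area = face_areas.sum()                                # Aarea: whole surface
    awe_avg = np.average(cr_stalk, weights=face_areas)       # 3D-AWEavg
    awe_max = float(cr_stalk.max())                          # 3D-AWEmax
    awe_area = face_areas[cr_stalk > HIGH_AWE_CUTOFF].sum()  # AWEarea
    awe_ratio = awe_area / a_area                            # AWEratio (fraction)
    return awe_avg, awe_max, a_area, awe_area, awe_ratio

# Toy mesh: 4 equal-area faces, two of them enhancing above the cutoff.
cr = np.array([0.5, 0.8, 1.0, 1.2])
areas = np.array([1.0, 1.0, 1.0, 1.0])
avg, mx, a_area, awe_area, ratio = awe_metrics(cr, areas)
# avg = 0.875, mx = 1.2, Aarea = 4.0, AWEarea = 2.0, ratio = 0.5
```

Weighting by face area keeps 3D-AWEavg independent of how finely the surface happens to be meshed, which matters for reproducibility across reconstructions.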
Results of univariate logistic regression for the 3D-AWE parameters are shown in the table: 3D-AWEavg, 3D-AWEmax, AWEarea, and AWEratio were all associated with symptomatic FIAs. In the multivariate models, 3D-AWEavg, 3D-AWEmax, AWEarea, and AWEratio were independent predictors of aneurysm-related symptoms. Lmax (p < 0.001), Dmax (p = 0.004), 3D-AWEavg (p = 0.005), Aarea (p = 0.001), AWEarea (p = 0.002), and AWEratio (p = 0.049) significantly differed between the three FIA subtypes. Interobserver agreement was excellent for the measurement of 3D-AWEavg and 3D-AWEmax. Because the pathophysiology of FIAs is quite complex and differs from that of saccular IAs, they require a different standardized method for quantifying AWE. In this study, we developed a quantitative assessment protocol for FIAs that can observe AWE distribution in 3D space. We found that FIA 3D AWE is associated with aneurysm-related symptoms and that transitional subtype FIAs tended to be symptomatic, larger, and to exhibit greater 3D AWE. CRstalk-max was reported to have the highest sensitivity for identification of symptomatic FIAs when a cut-off value of 0.90 was used. We therefore defined regions with normalized SI >0.9 as the high-AWE area for quantifying focal enhancement, because 0.9 is the cutoff value that discriminates aneurysm-related symptoms. We also found that high AWE area was independently associated with symptomatic FIAs. Such associations may indicate that more extensive aneurysmal wall inflammation underlies aneurysm-related symptoms, as AWE is considered a biomarker of aneurysm inflammation and of hemodynamic characteristics, which may affect the enhancement of the aneurysm wall. Even using multiplanar reconstruction, AWE analysis in 2D space may not sufficiently reflect 3D structure. First, tracking AWE in one plane may miss focal enhancement, leading to an underestimation of enhancement level.
Second, some FIAs are morphologically distorted and their AWE cannot be characterized in a single plane. Third, in 2D planar analysis, regions of interest (ROIs) are typically determined by visual inspection and manual delineation of HR-MRI sequences, which may introduce selection bias and decrease repeatability. Our 3D-AWE protocol enables objective quantitative analysis of the entire aneurysm wall, which may assess FIAs more comprehensively and reproducibly and contribute to a better understanding of the pathophysiological processes underlying FIAs. The 3D-AWE protocol in this study has shown its potential for predicting aneurysm-related symptoms. Symptomatic FIAs tended to exhibit a greater level of AWE and larger areas of high AWE, suggesting that symptoms may indicate greater aneurysmal wall inflammation. In addition, wall areas that enhance in 3D space may be relatively weak. Therefore, the 3D-AWE protocol described here may help stratify risk in patients with an FIA in clinical practice. Determining FIA boundaries based on 2D imaging is difficult. Flemming et al. defined the FIA boundary as arterial dilation greater than 1.5 times the normal diameter. The transitional subtype had the largest Dmax, and hemodynamics have also been reported to differ between FIA subtypes. This study has limitations. First, given the sample size, the multivariate logistic model allows for only 3 to 4 variables; however, our multivariate models incorporated 5 to 8 variables, which is a major statistical concern. Future large-scale multicenter studies are warranted. Second, three different MRI scanners were used, which may have introduced bias; although acquisition parameters were adjusted to be consistent, comparability requires further validation. Third, more complex 3D-AWE parameters are needed in the future to explore the pathophysiological mechanisms underlying the different FIA subtypes.
Fourth, this new 3D-AWE model for evaluating fusiform intracranial aneurysms has limitations, such as limited spatial resolution and confounding by intramural hemorrhage, aneurysm wall thickness, and slow flow; further pathological validation is required in future studies. Fifth, each aneurysm model was based on post-contrast T1 images, which may introduce some deviation in morphology measurement. Sixth, at bifurcations, normal arteries were excluded by manual segmentation, which may also cause potential bias. Seventh, because the highest signal intensity on each probe is used to determine the signal intensity on the wall, 3D-AWE in this model may exaggerate the AWE degree and lead to a high false positive rate. Eighth, sentinel headache is a warning symptom more typical of saccular aneurysms, so there may be false positives in the diagnosis of symptomatic fusiform aneurysms in this study. Ninth, "enhancement" denotes the change in signal intensity after the injection of contrast, whereas the current study only included post-contrast T1 images; if the aneurysm wall has high signal intensity on pre-contrast T1 images, high intensity on the post-contrast images cannot surely indicate "enhancement." Among all the included cases, only 2 aneurysms presented with high signal intensity on pre-contrast images. Although the incidence is low, future studies should incorporate the pre-contrast images. Finally, this study did not exclude patients with aspirin/statin use, which may suppress aneurysm wall enhancement. 3D-AWE can predict aneurysm-related symptoms in patients with an FIA. The transitional subtype FIA is associated with a larger cross-sectional size and higher AWE and may grow and rupture more easily than the fusiform and dolichoectasia subtypes.
This new AWE analysis method is more accurate and enables the use of numerous new metrics, which can provide more detailed information for assessing FIA pathophysiology. The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation. The studies involving human participants were reviewed and approved by the Institutional Ethics Committee of Beijing Tiantan Hospital. The patients/participants provided their written informed consent to participate in this study. Written informed consent was obtained from the individual(s) for the publication of any potentially identifiable images or data included in this article. FP and XC: conception and design. XL, JX, HN, XH, BX, XB, ZL, and PX: acquisition of data. YD and BS: analysis and interpretation of data. XC and FP: drafting the article. XC: technical support. XZ and AL: study supervision. All authors approved the submitted version. This study was supported by the National Natural Science Foundation of China (Nos. 82171290 and 81771233), Natural Science Foundation of Beijing Municipality (Nos.
7222050 and L192013), Beijing Municipal Administration of Hospitals' Ascent Plan (DFL20190501), Horizontal Project in Beijing Tiantan Hospital (HX-A-027 [2021]), Research and Promotion Program of Appropriate Techniques for Intervention of Chinese High-risk Stroke People (GN-2020R0007), BTH Coordinated Development-Beijing Science and Technology Planning Project (Z181100009618035), and Beijing Natural Science Foundation (L192013 and 22G10396). The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher. The Supplementary material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fnins.2023.1171946/full#supplementary-material"} +{"text": "Grazing management and stocking strategy decisions involve the manipulation of grazing intensity, grazing frequency, and timing of grazing to meet specific objectives for pasture sustainability and economic livestock production. Although there are numerous stocking systems used by stakeholders, these methods may be broadly categorized as either continuous or some form of rotational stocking. In approximately 30 published experiments comparing continuous vs. rotational stocking, there was no difference in liveweight gain per animal between stocking methods in 66% of studies.
There was no difference in gain per hectare between methods in 69% of studies, although for gain per hectare the choice of fixed or variable stocking rate methodology affected the proportion. Despite these experimental results showing limited instances of difference between rotational and continuous stocking, rotational strategies have received what appears to be unmerited acclaim for livestock production. Many proposed "mob stocking" or "regenerative grazing" systems are based on philosophies similar to high intensity-low frequency stocking, including provision for >60 d of rest period from grazing. In addition, grazing management practitioners and stakeholders have voiced and proposed major positive benefits from rotational stocking, "mob stocking", or "regenerative grazing" for soil health attributes, carbon sequestration, and ecosystem services, without experimental evidence. The perceptions and testimonials supporting undefined stocking systems and methods have the potential to mislead practitioners and result in economic disservices. Thus, we suggest that scientists, extension-industry professionals, and producers seek replicated experimental data as the basis for predicting outcomes of grazing decisions. Grazing management has been defined as the manipulation of grazing in pursuit of a specific objective or set of objectives. Whether grazing management strategies are based on experimental evidence, experience, or perceptions-philosophy, management systems may be difficult to change, alter, or amend. In humid regions, bermudagrass [Cynodon dactylon (L.) Pers.] and bahiagrass are best adapted and tolerant of increased grazing pressure. These forages may also be harvested for hay, baleage, or silage. Alternatively, in semi-arid regions, native perennial warm-season bunch grasses and other forbs and browse are the best-adapted forages for rangelands.
Tolerance to the frequency and severity of defoliation regimens differs for sod-forming rhizomatous grasses in humid areas vs. bunch grasses in semi-arid regions. Thus, stocking strategies and expected economic return may be substantially different among grazing-land ecosystems. Grazing management strategies and implementations vary among introduced forages on pastures and native plant species on rangelands. Although management and mindsets may be targeted toward sustainable beef cattle systems, the Vegetational-Hardiness Zones of semi-arid vs. humid conditions determine the best adapted and persistent forages in each region. For example, in the more humid regions, the dominant forages are warm-season perennial sod-forming grasses such as bermudagrass [Cynodon dactylon (L.) Pers.]. Stocking method has been defined as a "procedure or technique to manipulate animals in space and time to achieve a specific objective" (Table 1). With respect to published experimentation on pastures, with minimal to no effect of stocking method on nutritive value but an increase in forage mass with rotational stocking, how does this translate to gain per animal (ADG) and gain per hectare for various stocking methods? With respect to ADG, 66% of the studies showed no difference between stocking methods; 20% showed continuous to be greater than rotational; and 14% of the studies showed that rotationally stocked pastures had greater ADG than continuously stocked pastures. This conforms to expectations, since more than 70% of studies found no effect of stocking method on nutritive value. While some presume that rotational stocking results in greater nutritive value than continuous stocking, there are factors working against this presumption. In rotationally stocked swards, cattle may be subjected to greater grazing intensities to achieve a greater percentage of forage utilization in the resident paddock. Forage mass and forage allowance set the boundaries for potential ADG.
However, forage nutritive value is responsible for setting the upper limits on ADG. In addition to the potential impact on forage and animal performance, "few topics in agriculture have been addressed with such charismatic language and with such abandonment of scientific evidence and logic" as discussions of continuous vs. rotational stocking. Some studies comparing various mob-stocking approaches with more conventional grazing practices have been conducted. A multiyear, replicated mob-stocking experiment was conducted in the Nebraska sandhills. Another experiment compared stocking methods on pastures of tall fescue (Festuca arundinacea Schreb.), orchardgrass (Dactylis glomerata L.), Kentucky bluegrass (Poa pratensis L.), white clover (Trifolium repens L.), and red clover (Trifolium pratense L.) during 3 yr in Virginia. Stocker-yearlings are particularly sensitive to such systems due to their high nutrient requirements and the reduced nutritive value of the more mature forage available for selection in successive paddocks. Stocking method terminology has taken on a multitude of "catch phrases" that may have individual or locational meaning; however, these terms often add to confusion among scientists and stakeholders. The word "regenerative" has become popular terminology for sales and promotion of remedies, products, and actions. Today, one can adopt or be engaged with regenerative medicine, health, energy, economics, engineering, agriculture, and grazing. Regenerative grazing has been defined, without comparative data or experimental evidence, as stocking practices or methods that enhance ecosystem services, soil health, and soil conservation. The adoption and incorporation of regenerative grazing systems has basically been accepted as a timed, "auto-graze method". This auto-graze philosophy has been readily accepted primarily because it is a method or system that combines all cattle into a single paddock with timed relocation to the next paddock, generally on a daily or twice-daily basis.
Thus, this stocking approach allows for managers to \u201cstart the process\u201d and then let time and rotations make default management decisions of matching forage production-nutritive value to animal requirements. From the perspective of the novice, as well as from seasoned managers, there is a human behavior tendency to follow popular methods or practices. Thus, there is an increasing adoption of a method or system that has a popular name and substitutes for regular visual inspections of pastures to determine the best management practice for sustainable pastures and sustainable beef systems. This approach often includes intensive rotational stocking that may range from one to a few days\u2019 residence on a pasture with 30 to 60 d or more rest. It also does not conform to accepted definitions of mob stocking, which do not require long periods between defoliation events. Thus, regenerative grazing is not a specific practice like mob stocking, but rather it is based on various philosophies and testimonials of management or strategies that are perceived to promote soil health, carbon sequestration for credit accountability, and sustainability.Stocking strategies should be characterized or designed within a specific vegetation or hardiness zone and combined with the art and science of management for efficient-strategic forage utilization and sustainability for the desired optimum pasture-animal production. Thus, management strategies are site-specific for multiple input\u2013output decisions with objectives to \u201cmatch\u201d forage-animal requirements to production and economic rewards . GrazingImplementing revised or new management strategies requires attention to detail and the use of data-driven results from comparative experiments. 
These strategies may include: fertilizer ingredients and rates for hay or pasture; supplementation ingredients and amount for specific classes of livestock; breed type for cow\u2013calf and/or stocker operations; forage cultivars for perennial and/or annual pastures; stocking method for sustainable beef system and economic returns; and seasonal and/or year-long stocking rate or carrying capacity of a particular property. The Land Grant System routinely disseminates data through State Agricultural Experiment Stations and Extension Service publications and short courses. Their recommendations-suggestions are based on multiyear and/or multilocational comparative research and experimental data. To better reflect current methods of accessing information, we suggest that forage-animal scientists increase and enhance distribution of grazing systems research results through social media content that is focused, concise, free of scientific jargon, and designed to attract the novice and seasoned manager audience alike. Grazing systems should be viewed as a \u201cwork in progress\u201d, as management fine-tunes input strategies and delivery systems for sustainable pasture-livestock production and ecosystem services that benefit soil-water components and provide positive economic returns."} +{"text": "Ferroptosis is related to the immunosuppression of tumors and plays a critical role in cancer progression. Fanconi anemia complementation group D2 (FANCD2) is a vital gene that regulates ferroptosis. However, the mechanism of action of FANCD2 in Hepatitis B-related hepatocellular carcinoma (HCC) remains unknown. In this study, we investigated the prognostic significance and mechanism of action of FANCD2 in Hepatitis B-related HCC.The expression of FANCD2 in Hepatitis B-related HCC was explored using The Cancer Genome Atlas (TCGA) and validated using the Gene Expression Omnibus (GEO) database. 
Univariate and multivariate Cox regression analyses and Kaplan-Meier survival curves were used to analyze the relationship between FANCD2 expression and the overall survival of patients with Hepatitis B-related HCC. Protein-protein interaction networks for FANCD2 were built using the STRING website. In addition, correlations between FANCD2 expression and the stemness index, tumor mutational burden, microsatellite instability (MSI), immune pathways, genes involved in iron metabolism, and sorafenib chemotherapeutic response were analyzed. Our results indicated that FANCD2 was significantly overexpressed in Hepatitis B-related HCC and demonstrated a strong predictive ability for diagnosis and prognosis of the disease. High FANCD2 expression was associated with poor prognosis, high-grade tumors, high expression of PD-L1, high MSI scores, and low sorafenib IC50 in Hepatitis B-related HCC. BRCA1, BRCA2, FAN1, and FANCC were vital proteins interacting with FANCD2. The expression level of FANCD2 significantly correlated with the infiltration levels of Treg cells, B cells, CD8+ T cells, CD4+ T cells, neutrophils, macrophages, myeloid dendritic cells, and NK cells in Hepatitis B-related HCC. FANCD2 was positively correlated with the tumor proliferation signature pathway, DNA repair, and cellular response to hypoxia. Our study indicated that FANCD2 is a potential novel biomarker and immunotherapeutic target in Hepatitis B-related HCC, which might be related to the chemotherapeutic response to sorafenib. Hepatocellular carcinoma (HCC), whose incidence and mortality have increased worldwide, is one of the greatest challenges among malignant tumors. Additionally, Hepatitis B-related HCC has a poorer prognosis. Ferroptosis, a recently discovered form of cell death, has attracted significant attention as a potential therapeutic target in carcinoma.
Therefore, this study aimed to analyze the prognostic significance and mechanism of the ferroptosis-related gene FANCD2 in Hepatitis B-related HCC and to predict therapeutic response. We downloaded expression profiles and clinical data from the TCGA dataset (https://portal.gdc.nih.gov) for patients with Hepatitis B-related HCC and other cancers. Expression samples from normal humans were downloaded from GTEx (http://commonfund.nih.gov/GTEx/). R software version 4.0.3 was used to conduct statistical analyses. First, we analyzed the pan-cancer expression of FANCD2. We then studied the expression of FANCD2 across genders, races, and tumor stages in Hepatitis B-related HCC. Furthermore, the expression of FANCD2 in Hepatitis B-related HCC was verified using the GSE121248, GSE55092, GSE19665, and GSE84402 datasets. Hazard ratios with 95% confidence intervals (CI) were calculated. We compared predictive accuracy using timeROC (v 0.4) analysis. The relationship between FANCD2 expression and the prognosis of patients with Hepatitis B-related HCC was investigated using univariate and multivariate analyses. Based on the multivariate Cox proportional hazards analysis, a nomogram was developed to predict overall survival in the first, second, and third years. In addition, we calculated the diagnostic area under the curve (AUC) based on the expression data of FANCD2. We used the TIMER database (https://cistrome.shinyapps.io/timer/) to investigate the relationship between FANCD2 expression and the abundance of infiltrating immune cells using the TIMER and QUANTISEQ algorithms. We selected SIGLEC15, TIGIT, CD274, HAVCR2, PDCD1, CTLA4, LAG3, and PDCD1LG2 as immune checkpoints and analyzed the correlation between their expression and the expression of FANCD2 in Hepatitis B-related HCC. Spearman's correlation analysis determined the correlation between FANCD2 expression, immune cells, and immune checkpoints.
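Spearman's correlation, used throughout these analyses, is simply Pearson correlation computed on ranks. A minimal NumPy sketch with tie-aware ranks and no p-value; the expression vectors below are toy data, not study values:

```python
import numpy as np

def rankdata(x):
    """Average (1-based) ranks, with ties sharing their mean rank."""
    x = np.asarray(x, dtype=float)
    order = np.argsort(x)
    sx = x[order]
    ranks = np.empty(len(x))
    i = 0
    while i < len(x):
        j = i
        while j + 1 < len(x) and sx[j + 1] == sx[i]:
            j += 1  # extend the tie run
        ranks[order[i:j + 1]] = (i + j) / 2.0 + 1.0
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rho = Pearson correlation of the rank vectors."""
    rx, ry = rankdata(x), rankdata(y)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry)))

# A monotone but nonlinear relationship still gives rho = 1.
fancd2 = np.array([1.0, 2.0, 3.0, 4.0, 5.0])  # toy expression vector
infiltration = fancd2 ** 3                     # toy immune-cell score
rho = spearman(fancd2, infiltration)
```

Rank-based correlation is the natural choice here because expression and infiltration scores are on incomparable scales and their relationship need not be linear.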
The potential immune checkpoint blockade (ICB) response was predicted using the tumor immune dysfunction and exclusion (TIDE) algorithm. We downloaded and visualized the mutation data using the maftools package in the R software, and the tumor mutation burden (TMB) and microsatellite instability (MSI) were assessed. First, the relationship between the stemness index and FANCD2 expression was analyzed: we calculated mRNAsi using the one-class linear regression (OCLR) algorithm and, using Spearman correlation, explored the relationship between the mRNAsi score and FANCD2 expression. Based on the expression data, mRNAsi was calculated and ranged from 0 to 1. In general, the closer the index is to 1, the lower the degree of differentiation of the tumor cells and the stronger the characteristics of tumor stem cells. Second, the correlation between FANCD2 and pathway activity was investigated using Spearman's correlation; the analysis was conducted in R using the GSVA package with parameter method = "ssgsea." We also investigated the correlation between FANCD2 and the genes involved in iron metabolism, including FTH1, HAMP, HSPB1, SLC40A1, STEAP3, TF, and TFRC. The Genomics of Drug Sensitivity in Cancer (GDSC) database (https://www.cancerrxgene.org), the largest publicly available pharmacogenomic database, was used to predict therapeutic responses for each sample. The prediction process was carried out using the R package "pRRophetic." We calculated each sample's half-maximal inhibitory concentration (IC50) using the ridge regression method. 
Using Spearman's correlation, we then analyzed the relationship between the sorafenib IC50 score and FANCD2 expression. The overexpression of FANCD2 was verified in GSE121248 (P = 5.9e−10), GSE55092 (P = 0.0021), GSE19665 (P = 1.1e−05), and GSE84402 (P = 0.00034). In the TCGA dataset, the expression of FANCD2 in many types of cancers, including HCC and Hepatitis B-related HCC, was higher than that in normal tissues. In Hepatitis B-related HCC, high FANCD2 expression was associated with poor overall survival, progression-free survival, disease-free survival, and disease-specific survival in the TCGA database. Using the TIMER and QUANTISEQ algorithms, we found a positive correlation between the expression of FANCD2 in Hepatitis B-related HCC and the infiltration levels of B cells, CD4+ T cells, CD8+ T cells, neutrophils, macrophages, myeloid dendritic cells, and Treg cells. The correlation between FANCD2 expression and immune checkpoints was also assessed: in Hepatitis B-related HCC, FANCD2 was positively correlated with the expression of immune checkpoints, including CD274 (P = 0.008; Spearman r = 0.22), HAVCR2, LAG3, PDCD1, and TIGIT. The higher the expression of FANCD2, the stronger the tumor stemness index of Hepatitis B-related HCC (P = 5.16e−07; Spearman r = 0.41). FANCD2 was positively correlated with MSI in Hepatitis B-related HCC; however, FANCD2 expression was not associated with TMB. FTH1, HAMP, HSPB1, SLC40A1, STEAP3, TF, and TFRC are the genes involved in iron metabolism; FANCD2 was positively correlated with the expression of FTH1, HSPB1, SLC40A1, and TFRC and negatively correlated with HAMP. 
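The pRRophetic-style IC50 prediction mentioned above fits a ridge-regularized linear model from expression features to measured drug response. The actual package does considerably more (data homogenization, cross-validated choice of the penalty); the sketch below shows only the core ridge step for a single feature, with invented numbers:

```python
def ridge_1d(x, y, lam):
    """Ridge solution for y ~ w*x (no intercept): w = sum(xy) / (sum(x^2) + lambda)."""
    sxy = sum(a * b for a, b in zip(x, y))
    sxx = sum(a * a for a in x)
    return sxy / (sxx + lam)

x = [1.0, 2.0, 3.0]   # hypothetical expression of one gene
y = [2.0, 4.0, 6.0]   # hypothetical log(IC50) responses
print(ridge_1d(x, y, 0.0))   # lambda = 0 is ordinary least squares: w = 2.0
print(ridge_1d(x, y, 14.0))  # penalty shrinks the coefficient: w = 28/28 = 1.0
```

The shrinkage makes the fit stable when there are far more genes than cell lines, which is the regime these pharmacogenomic predictions operate in.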
In the TCGA dataset, we determined the relationship between FANCD2 and sorafenib IC50 scores using Spearman's correlation analysis. Hepatitis B-related HCC is the most common type of HCC. People infected with hepatitis B are 14 to 223 times more likely to develop HCC than those without hepatitis B infection. FANCD2 is a ferroptosis suppressor involved in DNA repair and has been studied in other cancers. In this study, we analyzed the relationship between the ferroptosis-related gene FANCD2 and the development, prognosis, treatment, immunity, and other related functions of Hepatitis B-related HCC using the TCGA and GEO databases. Our study reveals that FANCD2 is significantly upregulated in Hepatitis B-related HCC and is a promising diagnostic and prognostic biomarker. Furthermore, the expression of FANCD2 was found to increase progressively with tumor grade, with the highest expression observed in grades 3 and 4. High FANCD2 expression was associated with poor prognosis in Hepatitis B-related HCC and was shown to be an independent prognostic factor by multivariate Cox analysis. In addition, our study demonstrated that the higher the expression of FANCD2, the stronger the stem cell characteristics of tumor cells, with lower differentiation and higher proliferative capacity in Hepatitis B-related HCC. These results demonstrate that FANCD2 may serve as a biomarker for the prognosis and diagnosis of Hepatitis B-related HCC. We found a positive correlation between FANCD2 expression and Treg, B cell, CD4+ T cell, CD8+ T cell, neutrophil, and macrophage infiltration in Hepatitis B-related HCC. FANCD2 may influence the immune microenvironment of tumors. Tregs play a crucial role in tumor immunity by suppressing the immune response against tumors and boosting their development and progression. 
FANCD2 regulates the expression of genes involved in iron metabolism, such as FTH1, HAMP, HSPB1, SLC40A1, STEAP3, TF, and TFRC, to affect ferroptosis. Our study found that the expression of FANCD2 was associated with the expression of the iron metabolism genes FTH1, HAMP, HSPB1, SLC40A1, and TFRC. Therefore, FANCD2 may affect ferroptosis by altering iron metabolism in Hepatitis B-related HCC. In addition, we used ssGSEA to analyze the correlation between FANCD2 expression and pathway activity. FANCD2 is associated with multiple pathways, including DNA repair and tumor proliferation signatures, in Hepatitis B-related HCC. A protein-interaction network for FANCD2 was constructed using the STRING online tool. FANCD2 interacts primarily with BRCA1, BRCA2, FAN1, FANCC, FANCE, FANCG, FANCI, SLX4, and USP1, most of which are highly expressed in Hepatitis B-related HCC. FANCD2 and other interacting proteins are also involved in DNA damage repair. Inhibitors of DNA repair processes are highly effective in treating carcinomas. Individuals with HBV infection are at higher risk of developing liver cancer and cirrhosis. Furthermore, HBV is transmitted to people without antibodies, causing related diseases and increasing the burden on society. This study provides new evidence for the diagnosis, prognosis, and targeted therapy of patients with Hepatitis B-related HCC. This study had some limitations: data were obtained from online databases and were not validated in vitro or in vivo. Our study showed that the expression of FANCD2 was increased and that high FANCD2 expression was associated with poor outcomes and unfavorable immune infiltration in Hepatitis B-related HCC. Thus, FANCD2 could be a potential diagnostic and prognostic biomarker for Hepatitis B-related HCC."} +{"text": "As the world faced the COVID-19 pandemic, most of us did not expect it to so profoundly affect all aspects of human life. 
Sudden decisions on social isolation rules and lockdowns significantly disrupted labour in all sectors and industries, beyond the scope of the global health crisis. Consequently, companies and employees rapidly adopted remote working models. This led many employees to recognize the benefits of remote work, such as flexibility, comfort, and work-life balance. Even though that resignation movement is decreasing, people are still in need of a better work-life balance. This has led to another tendency called "quiet quitting" or "silent resignation" in the business world. It does not refer to quitting a job, but rather indicates an adopted work behaviour. While it has been adopted in most sectors, the remote working model was not applied to health workers (HWs). The surge of cases and shortages of medical staff and equipment led to an insufficient response to the urgent crisis in health care demands. Consequently, HWs have had to face many challenges since the beginning of the pandemic crisis. Although the recruitment of HWs increased in most countries, a massive wave of resignations is expected to inevitably hit the health sector. Female workers with children, younger workers, primary care workers, and frontline HWs constitute the largest group willing to turn over. Additionally, HWs who are not willing to leave the health sector may pursue alternative careers in the same or different professions, with reduced work hours and workload. Despite efforts to increase HWs' motivation, the inability to manage these issues highlighted that a toxic organizational culture could be a significant barrier to improvement efforts. 
Uncertainty, a lack of well-structured action plans, communication problems, inequalities in workload, income, and protective equipment (whose distribution was based on profession and seniority), lack of appreciation by colleagues or patients, resignation and annual leave restrictions, as well as associated factors leading HWs to feel like victims of injustice, all contributed to the changes in work attitudes and behaviours. The following question may come to mind: "Does quiet quitting pose a significant threat even if there is no reduction in the number of employees?". Considering the increasing rate of young people in the workforce and the greater adoption of this new trend globally, the answer would be "yes". The resignation movement is still going on and will not easily be stopped. Over the next several years, the deficit of HWs will reach millions. In quiet quitting, employees act only within their job descriptions, without passion or work commitment. When considering the increasing ratio of young employees, we should critically analyse the future of health care quality. Cooperation between patients and health care providers and a well-structured environment are strong determinants of health care quality. Therefore, improving organizational culture is essential for appropriate attitudes, behaviours, enjoyment, and engagement among organizational members. To sum up, HWs have faced the risk of infection, adverse working conditions, physical and verbal violence, disparities in workload and payment, limitations on attending social activities, and disruption of work-life balance for a long time. The inadequacy of attempts to solve these issues caused them to change their work attitudes and behaviours. Current trends emphasize the importance of understanding the reasons behind employees' resignations and how they can be prevented in time. 
The new trend of "quiet quitting" has been adopted in many countries, especially among young employees, and could adversely affect health care quality by triggering a toxic organizational culture. The outbreak sparked radical changes in all sectors of human activity, including health care. We have learned that protecting and improving health requires more than patient-specific clinical care: we need a well-structured organizational culture, a strong and well-founded economy, well-equipped health centres and health workers, proper policy practices, and much more. Health systems are just one area where fundamental changes should be made before it is too late. HWs are essential to the functioning of health systems; expanding health care coverage and attaining the right to the highest possible level of health depend on the availability, accessibility, acceptance, and quality of health care. Policymakers must take the necessary steps to improve health care quality by considering gender, family, profession, and age group differences in line with technological, scientific, and social developments. Additionally, we need to remember the factors that reduce organizational commitment, job satisfaction, productivity, and motivation. Acting within the international cooperation framework will contribute to greater harmony between societies."} +{"text": "To achieve this, a progressive optimization pipeline is proposed which systematically optimizes both aspherical lenses and diffractive optical elements with over 30 times memory reduction compared to the end-to-end optimization. By designing a simulation-supervision deep neural network for spatially varying deconvolution during optical design, we accomplish over 10 times improvement in the depth-of-field compared to traditional microscopes with great generalization in a wide variety of samples. 
To show the unique advantages, the integrated microscope is equipped in a cell phone without any accessories for the application of portable diagnostics. We believe our method provides a new framework for the design of miniaturized high-performance imaging systems by integrating aspherical optics, computational optics, and deep learning. The optical microscope is customarily an instrument of substantial size and expense but limited performance. Here we report an integrated microscope that achieves optical performance beyond a commercial microscope with a 5×, NA 0.1 objective, but at only 0.15 cm³ in volume and a weight of 0.5 g. Traditional optical microscopes, while bulky, often fail to deliver optimal performance. Here, the authors have engineered an integrated microscope of 0.15 cm³ that outperforms a commercial microscope and can be seamlessly integrated with a smartphone. Most microscopes require tabletop optical instrumentation, including multiple glass lenses and bulky sensors, as well as trained personnel for operation. However, the complexity prevents accessibility in resource-limited settings and hampers the scope and scale of applications. Even with that bulkiness, the development of the microscope is confounded in several aspects. Scale-dependent geometric aberrations limit the resolution of the microscope at the margin of a millimeter-scale field-of-view (FOV), resulting in an undesirable trade-off between the effective space-bandwidth product (SBP) and the complexity of the optical design4. High resolution is always desired in microscopic systems, but the depth-of-field (DOF) is inevitably reduced due to the high numerical aperture (NA), leading to poor imaging quality for 3D distributed samples5. 
Emergent advances in sophisticated optical design try to circumvent these restrictions through complex lens configurations6 and multi-view information acquisition8, which achieve remarkable results in table-level laboratory equipment, but the bulkiness is even more problematic. Microscopy is an indispensable tool for understanding the world that cannot be seen with the unaided eye and facilitates diverse applications in fundamental biology9. Recently, miniaturized microscopes have achieved breakthroughs in multiple aspects, including neural recording in freely behaving mice10, high-throughput screening12, and flow cytometry14. Furthermore, with computational enhancement, an extended DOF (EDOF) that provides robustness over rough surfaces of 3D samples15 can be achieved together with corrected color and enhanced resolution18. However, the optical performance of current miniaturized microscopes is still limited in size, performance, and cost. Approaches with simple lenses are limited to sub-millimeter FOVs with remarkable distortions21, while larger FOVs can be achieved through more complex lens combinations, but the overall length and weight of the system then increase rapidly23. Although two-photon and three-photon-based miniaturized microscopes have been developed to provide deep penetration with optical sectioning26, they require more specialized optical elements and suffer from low acquisition speed for high-throughput imaging. Moreover, limited space for placing multiple compound lenses makes most miniaturized microscopes monochromatic28. Integrated light microscope designs that break those limitations remain to be explored. Miniaturized integration is a pivotal advance that facilitates low-cost production and typically leads to improved performance and broad applications in telecommunications, computing, and genomics. 
Recently, deep optics technologies that parallel-optimize the optical design and the image processing algorithms have emerged and promise to achieve superior performance30 over traditional ray-tracing-based optical designs. The end-to-end fashion in deep optics has been validated as distinguished in achieving large FOV32, large DOF33, high dynamic range34, and hyperspectral imaging16, among others. However, current deep optics techniques have been limited to simplistic optical systems and remain a great challenge for applications with small working distances and large FOVs, due to the ever-larger solution spaces and aberrations in microscopic applications29. In addition, most deep neural networks for megapixel-level microscopic image restoration require large storage spaces and computational resources, which are hard to distribute in integrated systems for practical use. The resulting integrated microscope can even be embedded in a cell phone for portable diagnosis. Inspired by emerging technologies in diffractive elements35, we integrate a cubic phase mask to achieve an EDOF of 300 µm for 0.16 NA acquisition, which is tenfold that of the commercial system and in the single-dollar range for mass production. With four aspherical lenses optimized to generate spatially uniform coded point spread functions (PSFs), our device achieves 3 µm optical resolution across a wide FOV with a diameter larger than 3.6 mm after learning-based reconstructions. A physics-aware model is established to simulate the forward imaging process of the integrated microscope, which can fuel the training of the recovery algorithm to accomplish ground-truth-deficient restoration and perpetuate the generalization ability. 
To overcome these limitations, we develop a progressive optimization pipeline to exploit state-of-the-art optical design techniques in computational imaging systems, together with physics-based deep-learning reconstructions compositely. Specifically, the progressive optimization paradigm first constrains the heavily non-linear and complicated design space to a feasible size by ray-tracing-based merits and then leverages advanced artificial intelligence algorithms to exhaustively search for the optima, with over 30 times memory reduction compared to the end-to-end optimization paradigm. We consequently build a compact multi-color microscope that is as light as 0.5 g in a 0.15 cm³ volume. We further apply a pruned deep neural network as the image recovery module, offering the powerful capability of resolving high-fidelity information in a noniterative, feed-forward manner, but with nearly 80% parameter reductions for real-time processing of megapixel-level captures, which is critical for ready distribution on mobile platforms. Thereby, while compressing the volume over 100,000 times, our integrated microscope obtains imaging performance beyond a commercial 5× microscope, with over 10 times improved DOF, which is necessary for practical applications on the rough surfaces of most samples across a large FOV. Even compared with existing advanced miniaturized microscopes40, the proposed integrated microscope has a much smaller size and weight (Supplementary Table), and we hope these advances spur the development of high-performance integrated optical devices41.
To accomplish high-quality imaging in an integration platform with minimized size and maximized depth of field (DOF), pivotal challenges from geometrical aberrations, the resolution-DOF dilemma, and chromatic aberrations must be remedied. First, the effective space-bandwidth product of an optical system reduces rapidly as the lens scale shrinks, due to the practical limit set by geometrical aberrations42. Second, intrinsic tradeoffs between spatial resolution and DOF impair the performance either in capturing delicate structures or in being robust over rough 3D samples. Third, chromatic aberrations arise in miniaturized devices as diffractive elements with complex surfaces are used, hampering wide applications in multi-channel screening and color-coded neural imaging. To solve all these problems, we propose an advanced progressive design pipeline that fully leverages the advantages of both traditional ray-tracing-based and emerging deep-optics-based optimizations compositely ("Methods")43. The strict Rayleigh-Sommerfeld diffraction theory is used to establish the corresponding optical propagation model, and adaptive gradient descent is applied to optimize the surface shape to reduce aberrations, in comparison with a conventional microscope system consisting of spherical glass lenses. Successive lens parts can thus be acquired at a low price (cheaper than $10 each) thanks to our plastic design, molding fabrication, and freedom from cemented elements. To confirm the successful fabrication, we calibrated the proposed system through a customized 1-µm pinhole array written by lithography across a Φ3.6 mm FOV (Fig. 3). We compared PSF sizes in the x and y directions in simulated and calibrated data and found that the simulated PSF size corresponded well with the experimental data at different depths and lateral positions across the entire sensor area, without any apparent artifacts. 
We further confirmed the DOF extension by imaging a USAF-1951 resolution target placed at different axial planes: both systems resolved the target sharply in focus, but only the integrated microscope maintained that sharpness when the resolution target was largely defocused (z = 150 µm). A per-depth comparison further corroborated that the proposed integrated microscope achieved consistently high-quality images across various depths and various samples. We find that under blue, green, and red illumination, the calibrated PSFs of the three wavelengths are very similar, with a structural similarity higher than 0.7 across the whole FOV, indicating well-controlled chromatic aberration. The illumination is provided by a circular LED around the lenses. Besides the hardware integration, the optimized neural network used for reconstruction needs to be deployed on the cell phone for real-time visualization; to accommodate the processors in the mobile platform, we prune the network to 78% fewer parameters with nearly the same performance. The optical design is heavily non-linear and is characterized by many local minima and steep ridges with many fabrication-related physical constraints27; the proposed pipeline combines optical optimization and artificial-intelligence algorithms to accomplish higher performance and, in principle, is scalable to any complex system. Our integrated microscope consists of plastic lenses without cemented elements for the capability of mass production. To achieve an even more compact size and advanced performance, metasurfaces could be introduced to replace the plastic lenses, with sub-micron thickness and over 80° of FOV angle58. Through the proposed optimization pipeline, we create the most compact mesoscope among ever-fabricated designs. 
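The structural-similarity comparison of the per-wavelength PSFs reported above can be illustrated with the SSIM formula. The sketch below computes a single global SSIM over whole intensity lists rather than the usual local sliding windows, with the standard constants for unit dynamic range; the example arrays are invented, not calibrated PSFs.

```python
def global_ssim(a, b, c1=0.01 ** 2, c2=0.03 ** 2):
    """Single-window SSIM between two equal-length intensity lists in [0, 1]."""
    n = len(a)
    mu_a, mu_b = sum(a) / n, sum(b) / n
    var_a = sum((v - mu_a) ** 2 for v in a) / n
    var_b = sum((v - mu_b) ** 2 for v in b) / n
    cov = sum((x - mu_a) * (y - mu_b) for x, y in zip(a, b)) / n
    # Luminance and contrast/structure terms combined, per Wang et al.'s formula
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / (
        (mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2)
    )

psf_blue = [0.0, 0.2, 1.0, 0.2, 0.0]
print(global_ssim(psf_blue, psf_blue))  # identical inputs -> 1.0
```

An SSIM near 1 for PSFs measured at different wavelengths, as in the paper, indicates the blur kernels barely change with color, so a single deconvolution model can serve all channels.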
In addition to potential use in behaving animals, the integrated microscope is a multipurpose instrument for various applications, including flow cytometry, air quality monitoring, and cancer screening. For in vitro applications, the integrated microscope has the potential to achieve even higher throughput at large scale through massively parallel strategies12 and is capable of being equipped with other instrumentation, such as incubators, thanks to its miniaturized size9. With upcoming GPU advances in improved speed, efficiency, and reduced size, integrated microscopes in intelligent platforms seem likely to facilitate the emerging paradigm of mobile analysis, screening, and diagnostic evaluations. Lastly, we believe the proposed progressive optimization paradigm sheds new light on optical design by harnessing the advantages of aspherical optics, computational imaging, and deep learning reconstructions in a complete pipeline. Catalyzed by the optimization formulas, the proposed integrated microscope sets a new record for miniaturized microscopes, which can facilitate diverse applications spanning from image-based mobile diagnostics to neural recording in freely behaving animals and beyond. We start designing the high-performance integrated microscope with ray-tracing merits, which remarkably reduces the parameter search space compared to brute-force deep-optics optimization. The system numerical aperture is set to 0.16 for subcellular spatial resolution and fluorescence-capable energy collection efficiency, while the focal length remains 1 mm to ensure a compact system. It is obviously challenging to design a system with such a large aperture within a conjugate distance of only 6 mm by traditional methods. The parameters of the lenses do not exist independently; they constrain one another and are mainly driven by aberration correction efforts. 
Since the primary aberrations, such as spherical aberration, coma, astigmatism, and field curvature, are closely related to the aperture of the lens, optimization of such a relatively high-NA microscope faces substantial challenges. First, we arranged the lens structure based on the principle of the Chevalier Landscape lens, the first widely used camera lens introduced after the invention of film-based photography, to correct aberrations. With reference to the structure model of the Chevalier Landscape lens, we set the aperture in front of the imaging lens and made the aperture diameter smaller than that of the subsequent lens. In the optical path of the lens, rays at normal and oblique incidence are separated by the frontmost aperture and then focused by different parts of the subsequent lenses, so that the curvature of each lens can be adjusted to reduce aberrations, especially coma. On the other hand, full correction of the aberrations from different incident fields requires the lens surfaces to be non-spherical. Note that although a singlet spherical lens cannot achieve diffraction-limited focusing for different angles of incidence, adding more lenses could in principle provide more degrees of freedom to correct spherical aberration, coma, astigmatism, and Petzval field curvature. However, this approach, combined with conventional lens manufacturing techniques, results in bulky imaging systems. We therefore extend the DOF with a diffractive optical element (DOE) placed near the microscope aperture and a subsequent deep learning-based reconstruction algorithm. In our case, the DOE is substantiated as a cubic phase plate (CPP) with the surface profile z(x, y) = α(x³ + y³), where α determines the pupil modulation strength. We next optimized the pupil modulation strength to select the best α. Chromatic aberration is caused by the dispersion characteristics of the material or optical structure. 
Compared to an ideal lens that focuses a point in the object space on a point in the image space, light of different wavelengths generates focal spots at different spatial positions in a practical imaging system. This phenomenon deteriorates the performance of imaging systems under broadband illumination. In a microscope, dyes and labels that range a wide spectrum make chromatic correction necessary. In principle, chromatic aberration can be approximately corrected by using materials with complementary dispersion properties, as in an achromatic doublet. As one of the most commonly used optical elements in optical designs and engineering, an achromat cements a positive crown glass element (low refractive index) and a negative flint glass element (high refractive index) together. The compound lens brings at least two wavelengths of light to a common focus. However, this technique is cumbersome since the number of materials equals the number of wavelengths where the chromatic aberrations are minimized.Instead, we presented an implementation of a non-cemented aspherical lens group made of two plastic optical materials (EP-9000 and ZEONEX_K22R&K26R_2017) for chromatic correction. Axial MTF and chromatic focal shift data in such a design are comparable with a system with cemented achromatic doublets. The secondary color was corrected mainly through the optimized aspherical surfaces since the two plastic materials alone are not sufficient to fully satisfy the dispersion diversity.After the optical design is finished, aspherical lenses are plastic molded, and the phase masks are fabricated through nanoimprint. All components (including plastic housing) are fabricated by Sunny optical technology. The manufacturing process typically involves a combination of CNC machining, injection molding, and surface coating. The lens barrel was machined from a solid block of aluminum using a CNC machine. 
The lens elements were injection molded using specialized equipment designed to produce high-quality optical polymers. Once the lens elements are produced, they are assembled into the lens barrel, and the entire assembly is coated with an anti-reflective coating to improve image quality. Nanoimprinting was chosen as the fabrication process for the phase mask in order to facilitate mass production. The mold for nanoimprinting was created using two-photon polymerization with the desired nanostructure patterns. The mold is then pressed onto the substrate surface, transferring the pattern from the mold to the substrate. Following this, the imprint was cured with UV light, and the mold and residual material were subsequently removed from the substrate. Compared to a tabletop microscope with dimensions of 323 mm (W) × 475 mm (D) × 656 mm (H), i.e., a total volume of 100,646,800 mm³, our integrated microscope features a size of 150 mm³, a size reduction of 6.7 × 10⁵ and thus an overall volume reduction of five orders of magnitude. Our network architecture employed the pix2pix64 network, a GAN-based model that provides rich texture details for image restoration tasks. For the generator, our model was mainly based on the U-Net65 model, which was reported to have superior performance in microscopy tasks. In general, our generator network was composed of a U-Net encoder and decoder module. In the U-Net encoder module, four encoder blocks were used, where each block consisted of a 4×4 convolutional layer (stride = 2) followed by a leaky rectified linear unit (LeakyReLU). In the U-Net decoder module, four symmetrical decoder blocks were used, where each block consisted of bilinear interpolation and a 3×3 convolutional layer, followed by a ReLU. 
Considering the difficulty of restoring images with inconsistent PSFs in the lateral and axial directions, we used nine residual blocks after the encoder module to further strengthen the feature transformation ability of the network. Each residual block consisted of two 3×3 convolutional layers, followed by a ReLU and a shortcut connection. As the core part of U-Net, a skip-connection architecture was used between the encoder module and the decoder module to fuse shallow features with deep features. For the discriminator, we adopted the standard PatchGAN model with 70×70 receptive fields from pix2pix. This discriminator architecture penalizes structure at the scale of local patches to encourage high-frequency details. For the loss function, we used the GAN loss term, the L2-norm loss term, and a perceptual loss term49. Compared to pix2pix, we used the VGG19 model to extract features to compute an additional perceptual loss at the feature level, which made the output look more realistic and accomplished better performance visually. The AdamW optimizer66 was used to optimize network training, with a learning rate of 0.0002 and exponential decay rates of 0.9 for the first moment and 0.999 for the second moment. We used a learning-rate warmup for 10 epochs and then linearly decayed the learning rate over the course of training. We used graphics processing units (GPUs) to accelerate the training and testing process. It took about 10 h to train our model for 300 epochs with a batch size of 16 on our training set (about 110 microscopy and daily-life images with the size of 2160×2560×3) with 4 GPUs. In the training phase, we randomly cropped each image into 20 small image patches with a size of 512×512×3, such that we had 2200 images for training eventually. In the testing phase, we tested 19 images with the size of 2160×2560×3 directly. 
To train a restoration neural network for evaluating optical designs in the optimization stage, it is necessary to numerically simulate the blurred captures through the DOE-combined aspherical system. However, the relatively large FOV (Φ3.6 mm) and high numerical aperture (NA 0.16) cause nonuniform point spread functions (PSFs) across the field of view (FOV), which precluded using traditional forward propagation models15. To manage this, we proposed a shift-variant forward model that accounts for the PSF change with an optimized computing burden. An optical system with a shift-variant PSF satisfies the general form of a superposition formulation. Given the difficulties of querying all PSFs corresponding to all field points, we described the PSF compactly48, that is, as the weighted sum of a set of bases67.

We used the Hierarchical Alternating Least Squares (HALS) algorithm67 to solve the above problem. To reduce the color fringing caused by nonuniform coefficient maps across channels, we modified the algorithm such that the coefficient maps are uniform across channels for each depth. After this simplification, the complete shift-variant forward propagation model can be written as a weighted superposition of the bases, which the convolution operator further reduces to a sum of convolutions.

We utilized a motorized stage and a tabletop microscope with a 5× objective to capture both microsections and tiny objects as samples. Each sample was first focused in the focal plane and then scanned axially by the motorized stage from −150 µm to +150 µm at a step of 10 µm to form focal stacks, which were further used to generate training pairs.

To restore clear images from coded captures, deconvolution algorithms are widely used68. The deconvolution problem is commonly regularized by the total-variation distance.
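In standard notation, the shift-variant forward model and the TV-regularized deconvolution objective discussed above can be sketched as follows (the symbols are assumed conventions, not necessarily the paper's exact notation):

```latex
% General shift-variant superposition (assumed symbols:
% o = object, b = blurred capture, h = field-dependent PSF)
b(x, y) = \iint h(x, y;\, x', y')\, o(x', y')\,\mathrm{d}x'\,\mathrm{d}y'

% PSF approximated as a weighted sum of K basis kernels h_k
% with coefficient maps a_k
h(x, y;\, x', y') \approx \sum_{k=1}^{K} a_k(x', y')\, h_k(x - x',\, y - y')

% which turns the forward model into a sum of shift-invariant convolutions
b \approx \sum_{k=1}^{K} h_k * \bigl( a_k \odot o \bigr)

% TV-regularized deconvolution objective for recovering o from b
\hat{o} = \arg\min_{o \geq 0} \;
  \tfrac{1}{2} \Bigl\lVert b - \sum_{k=1}^{K} h_k * ( a_k \odot o ) \Bigr\rVert_2^2
  + \lambda\, \mathrm{TV}(o)
```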
On the other hand, the above optimization requires the PSFs to be uniform across the fields, which contradicts the conditions in our mesoscopic imaging system. We thereby utilized a modified Richardson-Lucy deconvolution algorithm with TV regularization69 to solve the deconvolution problem under the aforementioned shift-variant forward propagation model48.

We fabricated a 1-µm pinhole array in a 1-mm-thick glass slide through binary lithography. The glass slide containing the pinhole array was then mounted in a customized holder that matched the cell phone. The fabricated lenses were mounted in front of a GC5035 sensor that had already been embedded inside an OPPO Find X3 cell phone for calibration. To calculate the size of the PSF, we first cropped the PSF at one site and binarized it with a threshold equal to 10% of its maximum intensity. We then picked out the largest connected component using bwconncomp in MATLAB and calculated its size in the x and y directions. We illuminated the sample with LEDs of different wavelengths while keeping the pinhole array fixed at the same position. The color PSFs thus acquired were visualized with ImageJ and evaluated for chromatic aberration measurement.

Considering the limitations of computational cost, memory usage, and real-time requirements on mobile devices, we designed a lightweight version of our network by pruning the number of channels in the generator network. In the U-Net encoder module, the original output channels of the four encoder blocks were 128, 256, 512, and 512, respectively; after pruning they become 32, 64, 128, and 256. The decoder module makes symmetrical changes to keep the U-shaped structure, and the number of output channels in the nine residual blocks correspondingly becomes 256. When migrating our model to mobile devices, we used the sigmoid activation function at the end of the generator instead of tanh because of the acceleration of mobile phone processors. All training procedures were accomplished on desktop PCs.
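The multiplicative update at the core of the Richardson-Lucy deconvolution mentioned earlier can be sketched in NumPy as follows (a plain, shift-invariant version without the TV prior; the circular boundary handling and all names are illustrative assumptions):

```python
import numpy as np

def conv2_circ(x, h):
    # 2-D circular convolution via FFT (kernel and image share the same shape).
    return np.real(np.fft.ifft2(np.fft.fft2(x) * np.fft.fft2(h)))

def richardson_lucy(b, h, n_iter=30, eps=1e-12):
    """Plain Richardson-Lucy deconvolution of capture b with PSF h."""
    o = np.full_like(b, b.mean())                 # flat initialization
    # Adjoint of circular convolution: convolve with the index-reversed kernel.
    h_adj = np.roll(np.flip(h), shift=(1, 1), axis=(0, 1))
    for _ in range(n_iter):
        est = conv2_circ(o, h) + eps              # current forward prediction
        o = o * conv2_circ(b / est, h_adj)        # multiplicative update
    return o

# Sanity demo: with an ideal delta PSF the update leaves the capture unchanged.
delta = np.zeros((8, 8))
delta[0, 0] = 1.0
capture = 1.0 + np.arange(64.0).reshape(8, 8)
recovered = richardson_lucy(capture, delta, n_iter=5)
```

The paper's modified version additionally folds in the TV prior and the shift-variant forward model; this sketch only shows the shift-invariant multiplicative core.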
The restoration of the captured image was carried out through the app using the computational resources available on the mobile phone. After capturing an image with the proposed integrated microscope, a low-resolution reconstruction is produced through deconvolution for preview purposes. In the background, the network restores the high-resolution image, which is then stored in the phone gallery. The typical computation time on the app side for the pruned network was 1729.3 ms.

One 35-year-old male volunteered to be tested with informed consent. Initially, we employed our proposed integrated microscope to capture images across multiple regions of the volunteer's hands and lower arms, followed by measurement of skin moisture levels at identical locations using a portable skin tester (Pocreation). The paired data of integrated-microscope captures and skin moisture values were utilized to construct a customized skin moisture detection application (details outlined in subsequent sections). Measurements were repeated after the application of skincare. Our research complies with all relevant ethical regulations overseen by the Committee on Ethics of Tsinghua University.

We employed MobileNet-V2 to perform the skin moisture detection task on the cell phone. It is a lightweight convolutional neural network for classification and segmentation tasks. The model uses depth-wise separable convolutions to reduce computation and the number of parameters, making it possible to deploy the model directly on mobile devices. Compared to MobileNet-V1, it uses inverted residuals and linear bottlenecks to achieve better performance.

For the loss function, we used the Cross-Entropy (CE) loss. We resized the input images to 512 and trained the model for 240 epochs with a batch size of 128 on our training set (about 9000 skin images).
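Per sample, the Cross-Entropy loss above reduces to the negative log-softmax probability assigned to the true class; a minimal NumPy sketch (illustrative only; the logits below are hypothetical, not outputs of the actual classifier):

```python
import numpy as np

def cross_entropy(logits, label):
    """CE loss for one sample: -log softmax(logits)[label]."""
    z = logits - logits.max()                   # stabilize the exponentials
    log_softmax = z - np.log(np.exp(z).sum())
    return -log_softmax[label]

logits = np.array([2.0, 0.5, -1.0])             # hypothetical class scores
loss = cross_entropy(logits, label=0)
```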
The Adam optimizer was used to optimize network training, with a learning rate of 0.0002 and exponential decay rates of 0.9 for the first moment and 0.999 for the second moment. The network was trained on a server and then migrated to the cell phone for portable diagnosis.

The structural similarity index (SSIM) is a widely used full-reference metric for assessing the visual quality of images and remote sensing data. We called the ssim function in MATLAB to calculate the similarity between PSFs from different spectra and the similarity between reconstructed images and ground-truth images.

The peak signal-to-noise ratio (PSNR) is an engineering term for the ratio between the maximum possible power of a signal and the power of the corrupting noise that affects the fidelity of its representation. We called the psnr function in MATLAB to calculate the similarity between reconstructed images and ground-truth images.

Learned Perceptual Image Patch Similarity (LPIPS)70 is a perceptual similarity metric based on deep features extracted from a neural network. It computes a "perceptual distance," which measures how similar two images are in a way that coincides with human judgment. Compared to PSNR and SSIM, the LPIPS metric is more in line with human perception. In our work, we used a pretrained AlexNet71 to extract image features and compute the perceptual distance between output images and label images. The lower the value of this metric, the higher the perceptual similarity.

Further information on research design is available in the Supplementary Information.

Description of additional supplementary files

Supplementary Software 1

Reporting Summary