FileName | Abstract | Title |
---|---|---|
S1532046413001433 | The clinical feature selection problem is the task of selecting and identifying a subset of informative clinical features that are useful for promoting accurate clinical diagnosis. This is a significant task of pragmatic value in clinical settings, as each clinical test is associated with a different financial cost, diagnostic value, and risk for obtaining the measurement. Moreover, with the continual introduction of new clinical features, repeating the feature selection task can be very time consuming. Therefore, to address this issue, we propose a novel feature selection technique for diagnosis of myocardial infarction – one of the leading causes of morbidity and mortality in many high-income countries. This method adopts the conceptual framework of the biological continuum, the optimization capability of genetic algorithms for performing feature selection and the classification ability of support vector machines. Together, a network of clinical risk factors, called the biological continuum based etiological network (BCEN), was constructed. Evaluation of the proposed methods was carried out using the cardiovascular heart study (CHS) dataset. Results demonstrate that a significant speedup of 4.73-fold can be achieved for the development of the MI classification model. The key advantage of this methodology is the provision of a reusable (feature subset) paradigm for efficient development of up-to-date and efficacious clinical classification models. | A biological continuum based approach for efficient clinical classification |
S1532046413001445 | Discovery of clinical pathway (CP) patterns has experienced increased attention over the years due to its importance for revealing the structure, semantics and dynamics of CPs, and to its usefulness for providing clinicians with explicit knowledge which can be directly used to guide treatment activities of individual patients. Generally, discovery of CP patterns is a challenging task as treatment behaviors in CPs often have a large variability depending on factors such as time, location and individual patients. Based on the assumption that CP patterns can be derived from clinical event logs which usually record various treatment activities in CP executions, this study proposes a novel approach to CP pattern discovery by modeling CPs using mixtures of an extension to the Latent Dirichlet Allocation family that jointly models various treatment activities and their occurring time stamps in CPs. Clinical case studies are performed to evaluate the proposed approach via real-world data sets recording typical treatment behaviors in patient careflow. The obtained results demonstrate the suitability of the proposed approach for CP pattern discovery, and indicate the promise of research efforts related to CP analysis and optimization. | Discovery of clinical pathway patterns from event logs using probabilistic topic models |
S1532046413001469 | A large number of topics in biology, medicine, neuroscience, psychology and sociology can be generally described via complex networks in order to investigate fundamental questions of structure, connectivity, information exchange and causality. In particular, research on biological networks like functional spatiotemporal brain activations and changes, caused by neuropsychiatric pathologies, is promising. When analyzing these so-called complex networks, the calculation of meaningful measures can take a very long time depending on their size and structure. Even worse, in many labs only standard desktop computers are available to perform those calculations. Numerous investigations on complex networks concern huge but sparsely connected network structures, where most network nodes are connected to only a few others. Currently, there are several libraries available to tackle this kind of network. A problem arises when not only a few big and sparse networks have to be analyzed, but hundreds or thousands of smaller and conceivably dense networks (e.g. in measuring brain activation over time). Then every minute per network is crucial. For these cases there are several possibilities to use standard hardware more efficiently. It is not sufficient to apply just standard algorithms for dense graph characteristics. This article introduces the new library FALCON, developed especially for the exploration of dense complex networks. Currently, it offers 12 different measures (like clustering coefficients), each for undirected-unweighted, undirected-weighted and directed-unweighted networks. It uses a multi-core approach in combination with comprehensive code and hardware optimizations. There is also an alternative massively parallel GPU implementation for the most time-consuming measures. Finally, a comparative benchmark is integrated to support the choice of the most suitable library for a particular network issue. | FALCON or how to compute measures time efficiently on dynamically evolving dense complex networks? |
S1532046413001470 | Knowledge Organization Systems (KOSs) are extensively used in the biomedical domain to support information sharing between software applications. KOSs are proposed covering different but overlapping subjects, and mappings indicate the semantic relation between concepts from two KOSs. Over time, KOSs change, as do the mappings between them. This can result from a new discovery or a revision of existing knowledge which includes corrections of concepts or mappings. Indeed, changes affecting KOS entities may force the underlying mappings to be updated in order to ensure their reliability over time. To tackle this open research problem, we study how mappings are affected by KOS evolution. This article presents a detailed descriptive analysis of the impact that changes in KOSs have on mappings. As a case study, we use the official mappings established between SNOMED CT and ICD-9-CM from 2009 to 2011. Results highlight factors according to which KOS changes in varying degrees influence the evolution of mappings. | Understanding semantic mapping evolution by observing changes in biomedical ontologies |
S1532046413001482 | Patient discharge summaries provide detailed medical information about individuals who have been hospitalized. To make a precise and legitimate assessment of the abundant data, a proper time layout of the sequence of relevant events should be compiled and used to drive a patient-specific timeline, which could further assist medical personnel in making clinical decisions. The process of identifying the chronological order of entities is called temporal relation extraction. In this paper, we propose a hybrid method to identify appropriate temporal links between a pair of entities. The method combines two approaches: one is rule-based and the other is based on the maximum entropy model. We develop an integration algorithm to fuse the results of the two approaches. All rules and the integration algorithm are formally stated so that one can easily reproduce the system and results. To optimize the system’s configuration, we used the 2012 i2b2 challenge TLINK track dataset and applied threefold cross validation to the training set. Then, we evaluated its performance on the training and test datasets. The experimental results show that the proposed TEMPTING (TEMPoral relaTion extractING) system (ranked seventh) achieved an F-score of 0.563, which was at least 30% better than that of the baseline system, which randomly selects TLINK candidates from all pairs and assigns the TLINK types. The TEMPTING system using the hybrid method also outperformed the stage-based TEMPTING system. Its F-scores were 3.51% and 0.97% better than those of the stage-based system on the training set and test set, respectively. | TEMPTING system: A hybrid method of rule and machine learning for temporal relation extraction in patient discharge summaries |
S1532046413001494 | Clinical records of traditional Chinese medicine (TCM) are documented by TCM doctors during their routine diagnostic work. These records contain abundant knowledge and reflect the clinical experience of TCM doctors. In recent years, with the modernization of TCM clinical practice, these clinical records have begun to be digitized. Data mining (DM) and machine learning (ML) methods provide an opportunity for researchers to discover TCM regularities buried in the large volume of clinical records. There has been some work on this problem. Existing methods have been validated on a limited amount of manually well-structured data. However, the contents of most fields in the clinical records are unstructured. As a result, the previous methods verified on the well-structured data will not work effectively on the free-text clinical records (FCRs), and the FCRs are, consequently, required to be structured in advance. Manually structuring the large volume of TCM FCRs is time-consuming and labor-intensive, but the development of automatic methods for the structuring task is at an early stage. Therefore, in this paper, symptom name recognition (SNR) in the chief complaints, which is one of the important tasks to structure the FCRs of TCM, is carefully studied. The SNR task is reasonably treated as a sequence labeling problem, and several fundamental and practical problems in the SNR task are studied, such as how to adapt a general sequence labeling strategy for the SNR task according to the domain-specific characteristics of the chief complaints and which sequence classifier is more appropriate to solve the SNR task. To answer these questions, a series of elaborate experiments were performed, and the results are explained in detail. | Supervised methods for symptom name recognition in free-text clinical records of traditional Chinese medicine: An empirical study |
S1532046413001500 | Automatic processing of biomedical documents is made difficult by the fact that many of the terms they contain are ambiguous. Word Sense Disambiguation (WSD) systems attempt to resolve these ambiguities and identify the correct meaning. However, the published literature on WSD systems for biomedical documents reports considerable differences in performance for different terms. The development of WSD systems is often expensive with respect to acquiring the necessary training data. It would therefore be useful to be able to predict in advance which terms WSD systems are likely to perform well or badly on. This paper explores various methods for estimating the performance of WSD systems on a wide range of ambiguous biomedical terms (including ambiguous words/phrases and abbreviations). The methods include both supervised and unsupervised approaches. The supervised approaches make use of information from labeled training data while the unsupervised ones rely on the UMLS Metathesaurus. The approaches are evaluated by comparing their predictions about how difficult disambiguation will be for ambiguous terms against the output of two WSD systems. We find that the supervised methods are the best predictors of WSD difficulty, but are limited by their dependence on labeled training data. The unsupervised methods all perform well in some situations and can be applied more widely. | Determining the difficulty of Word Sense Disambiguation |
S1532046413001512 | The automatic detection of temporal relations between events in electronic medical records has the potential to greatly augment the value of such records for understanding disease progression and patients’ responses to treatments. We present a three-step methodology for labeling temporal relations using machine learning and deterministic rules over an annotated corpus provided by the 2012 i2b2 Shared Challenge. We first create an expanded training network of relations by computing the transitive closure over the annotated data; we then apply hand-written rules and machine learning with a feature set that casts a wide net across potentially relevant lexical and syntactic information; finally, we employ a voting mechanism to resolve global contradictions between the local predictions made by the learned classifier. Results over the testing data illustrate the contributions of initial prediction and conflict resolution. | Temporal relation discovery between events and temporal expressions identified in clinical narrative |
S1532046413001524 | The benefits of using ontology subsets versus full ontologies are well-documented for many applications. In this study, we propose an efficient subset extraction approach for a domain using a biomedical ontology repository with mappings, a cross-ontology, and a source subset from a related domain. As a case study, we extracted a subset of drugs from RxNorm using the UMLS Metathesaurus, the NDF-RT cross-ontology, and the CORE problem list subset of SNOMED CT. The extracted subset, which we termed RxNorm/CORE, was 4% the size of the full RxNorm (0.4% when considering ingredients only). For evaluation, we used CORE and RxNorm/CORE as thesauri for the annotation of clinical documents and compared their performance to that of their respective full ontologies (i.e., SNOMED CT and RxNorm). The wide range in recall of both CORE (29–69%) and RxNorm/CORE (21–35%) suggests that more quantitative research is needed to assess the benefits of using ontology subsets as thesauri in annotation applications. Our approach to subset extraction, however, opens a door to help create other types of clinically useful domain specific subsets and acts as an alternative in scenarios where well-established subset extraction techniques might suffer from difficulties or cannot be applied. | Cross-domain targeted ontology subsets for annotation: The case of SNOMED CORE and RxNorm |
S1532046413001536 | Bioinformatics research relies heavily on the ability to discover and correlate data from various sources. The specialization of life sciences over the past decade, coupled with an increasing number of biomedical datasets available through standardized interfaces, has created opportunities towards new methods in biomedical discovery. Despite the popularity of semantic web technologies in tackling the integrative bioinformatics challenge, there are many obstacles towards their usage by non-technical research audiences. In particular, fully exploiting integrated information requires improved interactive methods that are intuitive to biomedical experts. In this report we present ReVeaLD (a Real-time Visual Explorer and Aggregator of Linked Data), a user-centered visual analytics platform devised to increase intuitive interaction with data from distributed sources. ReVeaLD facilitates query formulation using a domain-specific language (DSL) identified by biomedical experts and mapped to a self-updated catalogue of elements from external sources. ReVeaLD was implemented in a cancer research setting; queries included retrieving data from in silico experiments, protein modeling and gene expression. ReVeaLD was developed using Scalable Vector Graphics and JavaScript and a demo with explanatory video is available at http://www.srvgal78.deri.ie:8080/explorer. A set of user-defined graphic rules controls the display of information through media-rich user interfaces. Evaluation of ReVeaLD was carried out as a game: biomedical researchers were asked to assemble a set of 5 challenge questions, and time and interactions with the platform were recorded. Preliminary results indicate that complex queries could be formulated in less than two minutes by unskilled researchers. The results also indicate that supporting the identification of the elements of a DSL significantly increased the intuitiveness of the platform and the usability of semantic web technologies by domain users. | ReVeaLD: A user-driven domain-specific interactive search platform for biomedical research |
S1532046413001548 | Named entities in the biomedical domain are often written using a Noun Phrase (NP) along with a coordinating conjunction such as ‘and’ and ‘or’. In addition, repeated words among named entity mentions are frequently omitted. It is often difficult to identify named entities. Although various Named Entity Recognition (NER) methods have tried to solve this problem, these methods can only deal with relatively simple elliptical patterns in coordinated NPs. We propose a new NER method for identifying non-elliptical entity mentions with simple or complex ellipses using linguistic rules and an entity mention dictionary. The GENIA and CRAFT corpora were used to evaluate the performance of the proposed system. The GENIA corpus was used to evaluate the performance of the system according to the quality of the dictionary. The GENIA corpus comprises 3434 non-elliptical entity mentions in 1585 coordinated NPs with ellipses. The system achieves 92.11% precision, 95.20% recall, and 93.63% F-score in identification of non-elliptical entity mentions in coordinated NPs. The accuracy of the system in resolving simple and complex ellipses is 94.54% and 91.95%, respectively. The CRAFT corpus was used to evaluate the performance of the system under realistic conditions. The system achieved 78.47% precision, 67.10% recall, and 72.34% F-score in coordinated NPs. The performance evaluations of the system show that it efficiently solves the problem caused by ellipses, and improves NER performance. The algorithm is implemented in PHP and the code can be downloaded from https://code.google.com/p/medtextmining/. | Identifying non-elliptical entity mentions in a coordinated NP with ellipses |
S153204641300155X | Recognizing and normalizing protein name mentions in biomedical literature is a challenging and important task for text mining applications such as protein–protein interactions, pathway reconstruction and many more. In this paper, we present ProNormz, an integrated approach for human proteins (HPs) tagging and normalization. In Homo sapiens, a large number of biological processes are regulated via post-translational phosphorylation by a large human gene family called protein kinases. Recognition and normalization of human protein kinases (HPKs) is considered to be important for the extraction of the underlying information on their regulatory mechanism from biomedical literature. ProNormz distinguishes HPKs from other HPs besides tagging and normalization. To our knowledge, ProNormz is the first normalization system available to distinguish HPKs from other HPs in addition to the gene normalization task. ProNormz incorporates a specialized synonyms dictionary for human proteins and protein kinases, a set of 15 string matching rules and a disambiguation module to achieve the normalization. Experimental results on benchmark BioCreative II training and test datasets show that our integrated approach achieves fairly good performance and outperforms more sophisticated semantic similarity and disambiguation systems presented in the BioCreative II GN task. As a freely available web tool, ProNormz is useful to developers as an extensible gene normalization implementation, to researchers as a standard for comparing their innovative techniques, and to biologists for normalization and categorization of HP and HPK mentions in biomedical literature. URL: http://www.biominingbu.org/pronormz. | ProNormz – An integrated approach for human proteins and protein kinases normalization |
S1532046413001561 | Background Despite the apparent potential of online health-promoting communities (OHPC), there is limited guidance available for developers on the basic design features that contribute to successful applications. The aim of this study was to develop a checklist for a pre-launch evaluation of OHPCs incorporating the perspectives of both the user and the health services communities. Methods The study was based on an action research design. Constructs previously applied to evaluate information system success were used as the basis for checklist development. The constructs were adapted for the OHPC context and formatively evaluated in a case study project. Evaluation data were collected from participatory observations and analyzed using qualitative methods. Results The initial OHPC checklist included the constructs information quality, service quality, and subjective norms. The contextual adaptation of the information quality construct resulted in items for content area, trust, and format; the adaptation of the service quality construct in items for staff competence, prompt service and empathy; and the adaptation of the subjective norms construct in items for social facilitation, interconnectivity and communication. The formative evaluation demonstrated the critical need to balance the autonomy of the online community with the professional control of health services quality expressed in the information and service quality constructs. Conclusions A pre-launch OHPC evaluation checklist has been designed for use in practical development of health promotion web resources. Research on instruments for OHPC evaluations is warranted. | Pre-launch evaluation checklist for online health-promoting communities |
S1532046413001573 | Objective To determine whether SVM-based classifiers, which are trained on a combination of included and commonly excluded articles, are useful to experts reviewing journal articles for inclusion during new systematic reviews. Methods Test collections were built using the annotated reference files from 19 procedure and 4 drug systematic reviews. The classifiers were trained on balanced data sets sampled at random. This approach compared two balanced data sets, one with a combination of included and commonly excluded articles and one with a combination of included and excluded articles. AUCs were used as evaluation metrics. Results The AUCs of the classifiers trained on the balanced data set with included and commonly excluded articles were significantly higher than those of the classifiers trained on the balanced data set with included and excluded articles. Conclusion Automatic, high-quality article classifiers using machine learning could reduce the workload of experts performing systematic reviews when topic-specific data are scarce. In particular, when used as training data, a combination of included and commonly excluded articles is more helpful than a combination of included and excluded articles. | An SVM-based high-quality article classifier for systematic reviews |
S1532046413001597 | We describe a new method for identification of confident associations within large clinical data sets. The method is a hybrid of two existing methods: Self-Organizing Maps and Association Mining. We utilize Self-Organizing Maps as the initial step to reduce the search space, and then apply Association Mining in order to find association rules. We demonstrate that this procedure has a number of advantages compared to traditional Association Mining; it allows for handling numerical variables without a priori binning and is able to generate variable groups which act as “hotspots” for statistically significant associations. We showcase the method on infertility-related data from Danish military conscripts. The clinical data we analyzed contained both categorical questionnaire data and continuous variables generated from biological measurements, including missing values. From this data set, we successfully generated a number of interesting association rules, which relate an observation to a specific consequence together with the p-value for that finding. Additionally, we demonstrate that the method can be used on non-clinical data containing chemical–disease associations in order to find associations between different phenotypes, such as prostate cancer and breast cancer. | Compass: A hybrid method for clinical and biobank data mining |
S1532046413001603 | Objective Targeted drugs dramatically improve the treatment outcomes in cancer patients; however, these innovative drugs are often associated with unexpectedly high cardiovascular toxicity. Currently, cardiovascular safety represents both a challenging issue for drug developers, regulators, researchers, and clinicians and a concern for patients. While FDA drug labels have captured many of these events, spontaneous reporting systems are a main source for post-marketing drug safety surveillance in ‘real-world’ (outside of clinical trials) cancer patients. In this study, we present approaches to extracting, prioritizing, filtering, and confirming cardiovascular events associated with targeted cancer drugs from the FDA Adverse Event Reporting System (FAERS). Data and methods The dataset includes records of 4,285,097 patients from FAERS. We first extracted drug–cardiovascular event (drug–CV) pairs from FAERS through named entity recognition and mapping processes. We then compared six ranking algorithms in prioritizing true positive signals among extracted pairs using known drug–CV pairs derived from FDA drug labels. We also developed three filtering algorithms to further improve precision. Finally, we manually validated extracted drug–CV pairs using 21 million published MEDLINE records. Results We extracted a total of 11,173 drug–CV pairs from FAERS. We showed that ranking by frequency is significantly more effective than by the five standard signal detection methods (246% improvement in precision for top-ranked pairs). The filtering algorithm we developed further improved overall precision by 91.3%. By manual curation using literature evidence, we show that about 51.9% of the 617 drug–CV pairs that appeared in both FAERS and MEDLINE sentences are true positives. In addition, 80.6% of these positive pairs have not been captured by FDA drug labeling. Conclusions The unique drug–CV association dataset that we created based on FAERS could facilitate our understanding and prediction of cardiotoxic events associated with targeted cancer drugs. | Automatic signal extraction, prioritizing and filtering approaches in detecting post-marketing cardiovascular events associated with targeted cancer drugs from the FDA Adverse Event Reporting System (FAERS) |
S1532046413001615 | Objective To demonstrate the use of aggregated and de-identified electronic health record (EHR) data for multivariate post-marketing pharmacosurveillance in a case study of azathioprine (AZA). Methods Using aggregated, standardized, normalized, and de-identified, population-level data from the Explore platform (Explorys, Inc.), we searched over 10 million individuals, of whom 14,580 were prescribed AZA based on RxNorm drug orders. Based on Logical Observation Identifiers Names and Codes (LOINC) and vital sign data, we examined the following side effects: anemia, cell lysis, fever, hepatotoxicity, hypertension, nephrotoxicity, neutropenia, and neutrophilia. Patients prescribed AZA were compared to patients prescribed one of 11 other anti-rheumatologic drugs to determine the relative risk of side effect pairs. Results Compared to AZA case report trends, hepatotoxicity (marked by elevated transaminases or elevated bilirubin) did not occur as an isolated event more frequently in patients prescribed AZA than in patients prescribed other anti-rheumatic agents. While neutropenia occurred in 24% of patients (RR 1.15, 95% CI 1.07–1.23), neutrophilia was also frequent (45%) and increased in patients prescribed AZA (RR 1.28, 95% CI 1.22–1.34). After constructing a pairwise side effect network, we found that neutropenia had no dependencies. A reduced risk of neutropenia was found in patients with co-existing elevations in total bilirubin or liver transaminases, supporting classic clinical knowledge that agranulocytosis is a largely unpredictable phenomenon. Rounding errors propagated in the statistically de-identified datasets for cohorts as small as 40 patients only contributed marginally to the calculated risk. Conclusion Our work demonstrates that aggregated, standardized, normalized and de-identified population-level EHR data can provide both sufficient insight and statistical power to detect potential patterns of medication side effect associations, serving as a multivariate and generalizable approach to post-marketing drug surveillance. | Using aggregated, de-identified electronic health record data for multivariate pharmacosurveillance: A case study of azathioprine |
S1532046413001627 | Objective Several countries are in the process of implementing an Electronic Health Record (EHR), but limited physicians’ acceptance of this technology presents a serious threat to its successful implementation. The aim of this study was to identify the main determinants of physician acceptance of EHR in a sample of general practitioners and specialists of the Province of Quebec (Canada). Methods We sent an electronic questionnaire to physician members of the Quebec Medical Association. We tested four theoretical models (Technology acceptance model (TAM), Extended TAM, Psychosocial Model, and Integrated Model) using path analysis and multiple linear regression analysis in order to identify the main determinants of physicians’ intention to use the EHR. We evaluated the modifying effect of sociodemographic characteristics using multi-group analysis of structural weights invariance. Results A total of 157 questionnaires were returned. The four models performed well and explained between 44% and 55% of the variance in physicians’ intention to use the EHR. The Integrated model performed the best and showed that perceived ease of use, professional norm, social norm, and demonstrability of the results are the strongest predictors of physicians’ intention to use the EHR. Age, gender, previous experience and specialty modified the association between those determinants and intention. Conclusions The proposed integrated theoretical model is useful in identifying which factors could motivate physicians from different backgrounds to use the EHR. Physicians who perceive the EHR to be easy to use, coherent with their professional norms, supported by their peers and patients, and able to demonstrate tangible results are more likely to accept this technology. Age, gender, specialty and experience should also be taken into account when developing EHR implementation strategies targeting physicians. | Electronic health record acceptance by physicians: Testing an integrated theoretical model |
S1532046413001639 | Model organisms provide opportunities to design research experiments focused on disease-related processes (e.g., using genetically engineered populations that produce phenotypes of interest). For some diseases, there may be non-obvious model organisms that can help in the study of underlying disease factors. In this study, an approach is presented that leverages knowledge about human diseases and associated biological interactions networks to identify potential model organisms for a given disease category. The approach starts with the identification of functional and interaction patterns of diseases within genetic pathways. Next, these characteristic patterns are matched to interaction networks of candidate model organisms to identify similar subsystems that have characteristic patterns for diseases of interest. The quality of a candidate model organism is then determined by the degree to which the identified subsystems match genetic pathways from validated knowledge. The results of this study suggest that non-obvious model organisms may be identified through the proposed approach. | Structural network analysis of biological networks for assessment of potential disease model organisms |
S1532046413001640 | Many medical conditions are only indirectly observed through symptoms and tests. Developing predictive models for such conditions is challenging since they can be thought of as ‘latent’ variables. They are not present in the data and often get confused with measurements. As a result, building a model that fits data well is not the same as making a prediction that is useful for decision makers. In this paper, we present a methodology for developing Bayesian network (BN) models that predict and reason with latent variables, using a combination of expert knowledge and available data. The method is illustrated by a case study into the prediction of acute traumatic coagulopathy (ATC), a disorder of blood clotting that significantly increases the risk of death following traumatic injuries. There are several measurements for ATC and previous models have predicted one of these measurements instead of the state of ATC itself. Our case study illustrates the advantages of models that distinguish between an underlying latent condition and its measurements, and of a continuing dialogue between the modeller and the domain experts as the model is developed using knowledge as well as data. | Not just data: A method for improving prediction with knowledge |
S1532046413001779 | Clinical records include both coded and free-text fields that interact to reflect complicated patient stories. The information often covers not only the present medical condition and events experienced by the patient, but also refers to relevant events in the past (such as signs, symptoms, tests or treatments). In order to automatically construct a timeline of these events, we first need to extract the temporal relations between pairs of events or time expressions presented in the clinical notes. We designed separate extraction components for different types of temporal relations, utilizing a novel hybrid system that combines machine learning with a graph-based inference mechanism to extract the temporal links. The temporal graph is a directed graph based on parse tree dependencies of the simplified sentences and frequent pattern clues. We generalized the sentences in order to discover patterns that, given the complexities of natural language, might not be directly discoverable in the original sentences. The proposed hybrid system performance reached an F-measure of 0.63, with precision at 0.76 and recall at 0.54 on the 2012 i2b2 Natural Language Processing corpus for the temporal relation (TLink) extraction task, achieving the highest precision and third highest f-measure among participating teams in the TLink track. | Towards generating a patient’s timeline: Extracting temporal relationships from clinical notes |
S1532046413001792 | To date, the scientific process for generating, interpreting, and applying knowledge has received less informatics attention than operational processes for conducting clinical studies. The activities of these scientific processes – the science of clinical research – are centered on the study protocol, which is the abstract representation of the scientific design of a clinical study. The Ontology of Clinical Research (OCRe) is an OWL 2 model of the entities and relationships of study design protocols for the purpose of computationally supporting the design and analysis of human studies. OCRe’s modeling is independent of any specific study design or clinical domain. It includes a study design typology and a specialized module called ERGO Annotation for capturing the meaning of eligibility criteria. In this paper, we describe the key informatics use cases of each phase of a study’s scientific lifecycle, present OCRe and the principles behind its modeling, and describe applications of OCRe and associated technologies to a range of clinical research use cases. OCRe captures the central semantics that underlies the scientific processes of clinical research and can serve as an informatics foundation for supporting the entire range of knowledge activities that constitute the science of clinical research. | The Ontology of Clinical Research (OCRe): An informatics foundation for the science of clinical research |
S1532046413001809 | Objective To quantify the presence of and evaluate an approach for detection of inconsistencies in the formal definitions of SNOMED CT (SCT) concepts utilizing a lexical method. Material and method Utilizing SCT’s Procedure hierarchy, we algorithmically formulated similarity sets: groups of concepts with similar lexical structure of their fully specified name. We formulated five random samples, each with 50 similarity sets: four based on the sameness of a parameter (number of parents, attributes, groups, or all of the former) and a randomly selected control sample. All samples’ sets were reviewed for types of formal definition inconsistencies: hierarchical, attribute assignment, attribute target values, groups, and definitional. Results For the Procedure hierarchy, 2111 similarity sets were formulated, covering 18.1% of eligible concepts. The evaluation revealed that 38% (Control) to 70% (Different relationships) of similarity sets within the samples exhibited significant inconsistencies. The rate of inconsistencies for the sample with different relationships was highly significant compared to Control, as well as the number of attribute assignment and hierarchical inconsistencies within their respective samples. Discussion and conclusion While, at this time of the HITECH initiative, the formal definitions of SCT are only a minor consideration, in the grand scheme of sophisticated, meaningful use of captured clinical data, they are essential. However, a significant portion of the concepts in the most semantically complex hierarchy of SCT, the Procedure hierarchy, are modeled inconsistently in a manner that affects their computability. Lexical methods can efficiently identify such inconsistencies and possibly allow for their algorithmic resolution. | Contrasting lexical similarity and formal definitions in SNOMED CT: Consistency and implications |
S1532046413001810 | The US National Institutes of Health (NIH) has developed the Biomedical Translational Research Information System (BTRIS) to support researchers’ access to translational and clinical data. BTRIS includes a data repository, a set of programs for loading data from NIH electronic health records and research data management systems, an ontology for coding the disparate data with a single terminology, and a set of user interface tools that provide access to identified data from individual research studies and data across all studies from which individually identifiable data have been removed. This paper reports on unique design elements of the system, progress to date and user experience after five years of development and operation. | The National Institutes of Health’s Biomedical Translational Research Information System (BTRIS): Design, contents, functionality and experience to date |
S1532046413001834 | Ontologies are widely adopted in the biomedical domain to characterize various resources (e.g. diseases, drugs, scientific publications) with non-ambiguous meanings. By exploiting the structured knowledge that ontologies provide, a plethora of ad hoc and domain-specific semantic similarity measures have been defined over the last years. Nevertheless, some critical questions remain: which measure should be defined/chosen for a concrete application? Are some of the, a priori different, measures indeed equivalent? In order to bring some light to these questions, we perform an in-depth analysis of existing ontology-based measures to identify the core elements of semantic similarity assessment. As a result, this paper presents a unifying framework that aims to improve the understanding of semantic measures, to highlight their equivalences and to propose bridges between their theoretical bases. By demonstrating that groups of measures are just particular instantiations of parameterized functions, we unify a large number of state-of-the-art semantic similarity measures through common expressions. The application of the proposed framework and its practical usefulness is underlined by an empirical analysis of hundreds of semantic measures in a biomedical context. | A framework for unifying ontology-based semantic similarity measures: A study in the biomedical domain |
S1532046413001883 | Background Managing information access in collaborative processes is a critical requirement for team-based biomedical research, clinical education, and patient care. We have previously developed a computational model, Enhanced Role-Based Access Control (EnhancedRBAC), and applied it to coordinate information access in the combined context of team collaboration and workflow for the New York State HIV Clinical Education Initiative (CEI) program. We report in this paper an evaluation study to assess the effectiveness of the EnhancedRBAC model for information access management in collaborative processes when applied to CEI. Methods We designed a cross-sectional study and performed two sets of measurements: (1) degree of agreement between EnhancedRBAC and a control system, CEIAdmin, based on 9152 study cases, and (2) effectiveness of EnhancedRBAC in terms of sensitivity, specificity, and accuracy based on a gold standard with 512 sample cases developed by a human expert panel. We applied stratified random sampling, partial factorial design, and blocked randomization to ensure a representative case sample and a high-quality gold standard. Results With the kappa statistics of four comparisons in the range of 0.80–0.89, EnhancedRBAC has demonstrated a high level of agreement with CEIAdmin. When evaluated against the gold standard, EnhancedRBAC has achieved sensitivities in the range of 97–100%, specificities at the level of 100%, and accuracies in the range of 98–100%. Conclusions The initial results have shown that the EnhancedRBAC model can be effectively used to manage information access in the combined context of team collaboration and workflow for coordination of clinical education programs. Future research is required to perform longitudinal evaluation studies and to assess the effectiveness of EnhancedRBAC in other applications. | Evaluation of an Enhanced Role-Based Access Control model to manage information access in collaborative processes for a statewide clinical education program |
S1532046413001895 | Rapid, automated determination of the mapping of free text phrases to pre-defined concepts could assist in the annotation of clinical notes and increase the speed of natural language processing systems. The aim of this study was to design and evaluate a token-order-specific naïve Bayes-based machine learning system (RapTAT) to predict associations between phrases and concepts. Performance was assessed using a reference standard generated from 2860 VA discharge summaries containing 567,520 phrases that had been mapped to 12,056 distinct Systematized Nomenclature of Medicine – Clinical Terms (SNOMED CT) concepts by the MCVS natural language processing system. It was also assessed on the manually annotated, 2010 i2b2 challenge data. Performance was established with regard to precision, recall, and F-measure for each of the concepts within the VA documents using bootstrapping. Within that corpus, concepts identified by MCVS were broadly distributed throughout SNOMED CT, and the token-order-specific language model achieved better performance based on precision, recall, and F-measure (0.95±0.15, 0.96±0.16, and 0.95±0.16, respectively; mean±SD) than the bag-of-words based, naïve Bayes model (0.64±0.45, 0.61±0.46, and 0.60±0.45, respectively) that has previously been used for concept mapping. Precision, recall, and F-measure on the i2b2 test set were 92.9%, 85.9%, and 89.2% respectively, using the token-order-specific model. RapTAT required just 7.2ms to map all phrases within a single discharge summary, and mapping rate did not decrease as the number of processed documents increased. The high performance attained by the tool in terms of both accuracy and speed was encouraging, and the mapping rate should be sufficient to support near-real-time, interactive annotation of medical narratives. These results demonstrate the feasibility of rapidly and accurately mapping phrases to a wide range of medical concepts based on a token-order-specific naïve Bayes model and machine learning. | Development and evaluation of RapTAT: A machine learning system for concept mapping of phrases from medical narratives |
S1532046413001901 | Cross-institutional data sharing for cohort discovery is critical to enabling future research. While particularly useful in rare diseases, the ability to target enrollment and to determine if an institution has a sufficient number of patients is valuable in all research, particularly in the initiation of projects and collaborations. An optimal technology solution would work with any source database with minimal resource investment for deployment and would meet all necessary security and confidentiality requirements of participating organizations. We describe a platform-neutral reference implementation to meet these requirements: the Federated Aggregate Cohort Estimator (FACE). FACE was developed and implemented through a collaboration of The University of Alabama at Birmingham (UAB), The Ohio State University (OSU), the University of Massachusetts Medical School (UMMS), and the Denver Health and Hospital Authority (DHHA), a clinical affiliate of the Colorado Clinical and Translational Sciences Institute. The reference implementation of FACE federated diverse SQL data sources and an i2b2 instance to estimate combined research subject availability from three institutions. It used easily deployed virtual machines and addressed privacy and security concerns for data sharing. | Federated Aggregate Cohort Estimator (FACE): An easy to deploy, vendor neutral, multi-institutional cohort query architecture |
S1532046413001913 | Background Correlation of data within electronic health records is necessary for implementation of various clinical decision support functions, including patient summarization. A key type of correlation is linking medications to clinical problems; while some databases of problem-medication links are available, they are not robust and depend on problems and medications being encoded in particular terminologies. Crowdsourcing represents one approach to generating robust knowledge bases across a variety of terminologies, but more sophisticated approaches are necessary to improve accuracy and reduce manual data review requirements. Objective We sought to develop and evaluate a clinician reputation metric to facilitate the identification of appropriate problem-medication pairs through crowdsourcing without requiring extensive manual review. Approach We retrieved medications from our clinical data warehouse that had been prescribed and manually linked to one or more problems by clinicians during e-prescribing between June 1, 2010 and May 31, 2011. We identified measures likely to be associated with the percentage of accurate problem-medication links made by clinicians. Using logistic regression, we created a metric for identifying clinicians who had made greater than or equal to 95% appropriate links. We evaluated the accuracy of the approach by comparing links made by those physicians identified as having appropriate links to a previously manually validated subset of problem-medication pairs. Results Of 867 clinicians who asserted a total of 237,748 problem-medication links during the study period, 125 had a reputation metric that predicted the percentage of appropriate links greater than or equal to 95%. These clinicians asserted a total of 2464 linked problem-medication pairs (983 distinct pairs). Compared to a previously validated set of problem-medication pairs, the reputation metric achieved a specificity of 99.5% and marginally improved the sensitivity of previously described knowledge bases. Conclusion A reputation metric may be a valuable measure for identifying high quality clinician-entered, crowdsourced data. | Development of a clinician reputation metric to identify appropriate problem-medication pairs in a crowdsourced knowledge base |
S1532046413001925 | Finding, capturing and describing characteristic features represents a key aspect in disorder definition, diagnosis and management. This process is particularly challenging in the case of rare disorders, due to the sparse nature of data and expertise. From a computational perspective, finding characteristic features is associated with some additional major challenges, such as formulating a computationally tractable definition, devising appropriate inference algorithms or defining sound validation mechanisms. In this paper we aim to deal with each of these problems in the context provided by the skeletal dysplasia domain. We propose a clear definition for characteristic phenotypes, we experiment with a novel, class association rule mining algorithm and we discuss our lessons learned from both an automatic and human-based validation of our approach. | Inferring characteristic phenotypes via class association rule mining in the bone dysplasia domain |
S1532046413001937 | Sensitive biomedical data is often collected from distributed sources, involving different information systems and different organizational units. Local autonomy and legal reasons lead to the need for privacy-preserving integration concepts. In this article, we focus on anonymization, which plays an important role for the re-use of clinical data and for the sharing of research data. We present a flexible solution for anonymizing distributed data in the semi-honest model. Prior to the anonymization procedure, an encrypted global view of the dataset is constructed by means of a secure multi-party computing (SMC) protocol. This global representation can then be anonymized. Our approach is not limited to specific anonymization algorithms but provides pre- and postprocessing for a broad spectrum of algorithms and many privacy criteria. We present an extensive analytical and experimental evaluation and discuss which types of methods and criteria are supported. Our prototype demonstrates the approach by implementing k-anonymity, ℓ-diversity, t-closeness and δ-presence with a globally optimal de-identification method in horizontally and vertically distributed setups. The experiments show that our method provides highly competitive performance and offers a practical and flexible solution for anonymizing distributed biomedical datasets. | A flexible approach to distributed data anonymization |
S1532046413001949 | Record linkage typically involves the use of dedicated linkage units who are supplied with personally identifying information to determine individuals from within and across datasets. The personally identifying information supplied to linkage units is separated from clinical information prior to release by data custodians. While this substantially reduces the risk of disclosure of sensitive information, some residual risks still exist and remain a concern for some custodians. In this paper we trial a method of record linkage which reduces privacy risk still further on large real world administrative data. The method uses encrypted personal identifying information (bloom filters) in a probability-based linkage framework. The privacy preserving linkage method was tested on ten years of New South Wales (NSW) and Western Australian (WA) hospital admissions data, comprising in total over 26 million records. No difference in linkage quality was found when the results were compared to traditional probabilistic methods using full unencrypted personal identifiers. This presents as a possible means of reducing privacy risks related to record linkage in population level research studies. It is hoped that through adaptations of this method or similar privacy preserving methods, risks related to information disclosure can be reduced so that the benefits of linked research taking place can be fully realised. | Privacy-preserving record linkage on large real world datasets |
S1532046413001950 | Underspecified user needs and frequent lack of a gold standard reference are typical barriers to technology evaluation. To address this problem, this paper presents a two-phase evaluation framework involving usability experts (phase 1) and end-users (phase 2). In phase 1, a cross-system functionality alignment between expert-derived user needs and system functions was performed to inform the choice of “the best available” comparison system, enabling a cognitive walkthrough in phase 1 and a comparative effectiveness evaluation in phase 2. During phase 2, five quantitative and qualitative evaluation methods are mixed to assess usability: time-motion analysis, software logs, questionnaires (the System Usability Scale and the Unified Theory of Acceptance and Use of Technology), think-aloud protocols, and unstructured interviews. Each method contributes data for a unique measure (e.g., time-motion analysis contributes task-completion time; the software log contributes action transition frequency). The measures are triangulated to yield complementary insights regarding user-perceived ease-of-use, functionality integration, anxiety during use, and workflow impact. To illustrate its use, we applied this framework in a formative evaluation of a software application called Integrated Model for Patient Care and Clinical Trials (IMPACT). We conclude that this mixed-methods evaluation framework enables an integrated assessment of user needs satisfaction and user-perceived usefulness and usability of a novel design. This evaluation framework effectively bridges the gap between co-evolving user needs and technology designs during iterative prototyping and is particularly useful when it is difficult for users to articulate their needs for technology support due to the lack of a baseline. | From expert-derived user needs to user-perceived ease of use and usefulness: A two-phase mixed-methods evaluation framework |
S1532046413001962 | Objective Reducing care variability through guidelines has significantly benefited patients. Nonetheless, guideline-based Clinical Decision Support (CDS) systems are not widely implemented or used, are frequently out-of-date, and cannot address complex care for which guidelines do not exist. Here, we develop and evaluate a complementary approach – using Bayesian Network (BN) learning to generate adaptive, context-specific treatment menus based on local order-entry data. These menus can be used as a draft for expert review, in order to minimize development time for local decision support content. This is in keeping with the vision outlined in the US Health Information Technology Strategic Plan, which describes a healthcare system that learns from itself. Materials and methods We used the Greedy Equivalence Search algorithm to learn four 50-node domain-specific BNs from 11,344 encounters: abdominal pain in the emergency department, inpatient pregnancy, hypertension in the Urgent Visit Clinic, and altered mental state in the intensive care unit. We developed a system to produce situation-specific, rank-ordered treatment menus from these networks. We evaluated this system with a hospital-simulation methodology and computed Area Under the Receiver–Operator Curve (AUC) and average menu position at time of selection. We also compared this system with a similar association-rule-mining approach. Results A short order menu on average contained the next order (weighted average length 3.91–5.83 items). Overall predictive ability was good: average AUC above 0.9 for 25% of order types and overall average AUC .714–.844 (depending on domain). However, AUC had high variance (.50–.99). Higher AUC correlated with tighter clusters and more connections in the graphs, indicating importance of appropriate contextual data. Comparison with an Association Rule Mining approach showed similar performance for only the most common orders with dramatic divergence as orders are less frequent. Discussion and conclusion This study demonstrates that local clinical knowledge can be extracted from treatment data for decision support. This approach is appealing because: it reflects local standards; it uses data already being captured; and it produces human-readable treatment-diagnosis networks that could be curated by a human expert to reduce workload in developing localized CDS content. The BN methodology captured transitive associations and co-varying relationships, which existing approaches do not. It also performs better as orders become less frequent and require more context. This system is a step forward in harnessing local, empirical data to enhance decision support. | Decision support from local data: Creating adaptive order menus from past clinician behavior |
S1532046413001974 | Information encoded in natural language in biomedical literature publications is only useful if efficient and reliable ways of accessing and analyzing that information are available. Natural language processing and text mining tools are therefore essential for extracting valuable information, however, the development of powerful, highly effective tools to automatically detect central biomedical concepts such as diseases is conditional on the availability of annotated corpora. This paper presents the disease name and concept annotations of the NCBI disease corpus, a collection of 793 PubMed abstracts fully annotated at the mention and concept level to serve as a research resource for the biomedical natural language processing community. Each PubMed abstract was manually annotated by two annotators with disease mentions and their corresponding concepts in Medical Subject Headings (MeSH®) or Online Mendelian Inheritance in Man (OMIM®). Manual curation was performed using PubTator, which allowed the use of pre-annotations as a pre-step to manual annotations. Fourteen annotators were randomly paired and differing annotations were discussed for reaching a consensus in two annotation phases. In this setting, a high inter-annotator agreement was observed. Finally, all results were checked against annotations of the rest of the corpus to assure corpus-wide consistency. The public release of the NCBI disease corpus contains 6892 disease mentions, which are mapped to 790 unique disease concepts. Of these, 88% link to a MeSH identifier, while the rest contain an OMIM identifier. We were able to link 91% of the mentions to a single disease concept, while the rest are described as a combination of concepts. In order to help researchers use the corpus to design and test disease identification methods, we have prepared the corpus as training, testing and development sets. To demonstrate its utility, we conducted a benchmarking experiment where we compared three different knowledge-based disease normalization methods with a best performance in F-measure of 63.7%. These results show that the NCBI disease corpus has the potential to significantly improve the state-of-the-art in disease name recognition and normalization research, by providing a high-quality gold standard thus enabling the development of machine-learning based approaches for such tasks. The NCBI disease corpus, guidelines and other associated resources are available at: http://www.ncbi.nlm.nih.gov/CBBresearch/Dogan/DISEASE/. | NCBI disease corpus: A resource for disease name recognition and concept normalization |
S1532046413001986 | Introduction Autonomous chronic disease management requires models that are able to interpret time series data from patients. However, construction of such models by means of machine learning requires the availability of costly health-care data, often resulting in small samples. We analysed data from chronic obstructive pulmonary disease (COPD) patients with the goal of constructing a model to predict the occurrence of exacerbation events, i.e., episodes of decreased pulmonary health status. Methods Data from 10 COPD patients, gathered with our home monitoring system, were used for temporal Bayesian network learning, combined with bootstrapping methods for data analysis of small data samples. For comparison a temporal variant of augmented naive Bayes models and a temporal nodes Bayesian network (TNBN) were constructed. The performances of the methods were first tested with synthetic data. Subsequently, different COPD models were compared to each other using an external validation data set. Results The model learning methods are capable of finding good predictive models for our COPD data. Model averaging over models based on bootstrap replications is able to find a good balance between true and false positive rates on predicting COPD exacerbation events. Temporal naive Bayes offers an alternative that trades some performance for a reduction in computation time and easier interpretation. | Learning Bayesian networks for clinical time series analysis |
S1532046413001998 | Multi Organ Dysfunction Syndrome (MODS) represents a continuum of physiologic derangements and is the major cause of death in the Intensive Care Unit (ICU). Scoring systems for organ failure have become an integral part of critical care practice and play an important role in ICU-based research by tracking disease progression and facilitating patient stratification based on evaluation of illness severity during ICU stay. In this study a Dynamic Bayesian Network (DBN) was applied to model SOFA severity score changes in 79 adult critically ill patients consecutively admitted to the general ICU of the Sant’Andrea University hospital (Rome, Italy) from September 2010 to March 2011, with the aim to identify the most probable sequences of organs failures in the first week after the ICU admission. Approximately 56% of patients were admitted into the ICU with lung failure and about 27% of patients with heart failure. Results suggest that, given the first organ failure at the ICU admission, a sequence of organ failures can be predicted with a certain degree of probability. Sequences involving heart, lung, hematologic system and liver turned out to be the more likely to occur, with slightly different probabilities depending on the day of the week they occur. DBNs could be successfully applied for modeling temporal systems in critical care domain. Capability to predict sequences of likely organ failures makes DBNs a promising prognostic tool, intended to help physicians in undertaking therapeutic decisions in a patient-tailored approach. | Dynamic Bayesian Networks to predict sequences of organ failures in patients admitted to ICU |
S1532046413002001 | Biomedical prediction based on clinical and genome-wide data has become increasingly important in disease diagnosis and classification. To solve the prediction problem in an effective manner for the improvement of clinical care, we develop a novel Artificial Neural Network (ANN) method based on Matrix Pseudo-Inversion (MPI) for use in biomedical applications. The MPI-ANN is constructed as a three-layer (i.e., input, hidden, and output layers) feed-forward neural network, and the weights connecting the hidden and output layers are directly determined based on MPI without a lengthy learning iteration. The LASSO (Least Absolute Shrinkage and Selection Operator) method is also presented for comparative purposes. Single Nucleotide Polymorphism (SNP) simulated data and real breast cancer data are employed to validate the performance of the MPI-ANN method via 5-fold cross validation. Experimental results demonstrate the efficacy of the developed MPI-ANN for disease classification and prediction, in view of the significantly superior accuracy (i.e., the rate of correct predictions), as compared with LASSO. The results based on the real breast cancer data also show that the MPI-ANN has better performance than other machine learning methods (including support vector machine (SVM), logistic regression (LR), and an iterative ANN). In addition, experiments demonstrate that our MPI-ANN could be used for bio-marker selection as well. | A novel artificial neural network method for biomedical prediction based on matrix pseudo-inversion |
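A minimal numpy sketch of the core idea in the row above: the hidden-to-output weights are obtained directly from the Moore-Penrose pseudo-inverse of the hidden activations rather than by iterative training. The tanh hidden layer, layer size and toy data are assumptions, not the paper's exact configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_mpi_ann(X, y, n_hidden=64):
    """Random input-to-hidden weights; output weights from the pseudo-inverse."""
    W = rng.normal(size=(X.shape[1], n_hidden))   # input -> hidden weights
    b = rng.normal(size=n_hidden)                 # hidden biases
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta = np.linalg.pinv(H) @ y                  # hidden -> output weights, no iteration
    return W, b, beta

def predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# toy usage: binary labels in {0, 1}, threshold the continuous output at 0.5
X = rng.normal(size=(200, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(float)
W, b, beta = fit_mpi_ann(X, y)
accuracy = ((predict(X, W, b, beta) > 0.5) == y).mean()
```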
S1532046413002013 | Objective Bone sawing or cutting is widely used for bone removal processes in bone surgery. It is an essential skill that surgeons should execute with a high level of experience and sensitive force perception. Surgical training simulators, with virtual and haptic feedback functions, can offer a safe, repeatable and cost-effective alternative to traditional surgeries. In this research, we developed a surgical training simulator with virtual and haptic force feedback for maxillofacial surgery, and we validated the effects on the learning of bone-sawing skills through empirical evaluation. Methods Omega.6 from Force Dimension was employed as the haptic device, and Display300 from SenseGraphices was used as the 3D stereo display. The voxel-based model was constructed using computed tomography (CT) images, and the virtual tools were built through reverse engineering. The multi-point collision detection method was applied for haptic rendering to test the 3D relationship between the virtual tool and the bone voxels. Bone-sawing procedures in maxillofacial surgery were simulated with a virtual environment and real-time haptic feedback. A total of 25 participants (16 novices and 9 experienced surgeons) were included in 2 groups to perform the bone-sawing simulation for assessing the construct validity. Each of the participants completed the same bone-sawing procedure at the predefined maxillary region six times. For each trial, the sawing operative time, the maximal acceleration, and the percentage of the haptic force exceeding the threshold were recorded and analysed to evaluate the validity. After six trials, all of the participants scored the simulator in terms of safe force learning, stable hand control and overall performance to confirm the face validity. Moreover, 10 novices in 2 groups indentified the transfer validity on rapid prototype skull models by comparing the operative time and the maximal acceleration. Results The analysed results of construct validity showed that the two groups significantly reduced their sawing operative times after six trials. Regarding maximal acceleration, the curve significantly descended and reached a plateau after the fifth repetition (novices) or third repetition (surgeons). Regarding safe haptic force, the novices obviously reduced the percentage of the haptic force exceeding the threshold, with statistical significance after four trials, but the surgeons did not show a significant difference. Moreover, the subjectively scored results demonstrated that the proposed simulator was more helpful for the novices than for the experienced surgeons, with scores of 8.31 and 7.22, respectively, for their overall performance. The experimental results on skill transference showed that the experimental group performed bone-sawing operation in lower maximal acceleration than control group with a significant difference (p <0.05). These findings suggested that the simulator training had positive effects on real sawing. Conclusions The evaluation results proved the construct validity, face validity and the transfer validity of the simulator. These results indicated that this simulator was able to produce the effect of learning bone-sawing skill, and it could provide a training alternative for novices. | Development and validation of a surgical training simulator with haptic feedback for learning bone-sawing skill |
S1532046413002013 | Objective Bone sawing or cutting is widely used for bone removal processes in bone surgery. It is an essential skill that surgeons should execute with a high level of experience and sensitive force perception. Surgical training simulators, with virtual and haptic feedback functions, can offer a safe, repeatable and cost-effective alternative to traditional surgeries. In this research, we developed a surgical training simulator with virtual and haptic force feedback for maxillofacial surgery, and we validated the effects on the learning of bone-sawing skills through empirical evaluation. Methods Omega.6 from Force Dimension was employed as the haptic device, and Display300 from SenseGraphices was used as the 3D stereo display. The voxel-based model was constructed using computed tomography (CT) images, and the virtual tools were built through reverse engineering. The multi-point collision detection method was applied for haptic rendering to test the 3D relationship between the virtual tool and the bone voxels. Bone-sawing procedures in maxillofacial surgery were simulated with a virtual environment and real-time haptic feedback. A total of 25 participants (16 novices and 9 experienced surgeons) were included in 2 groups to perform the bone-sawing simulation for assessing the construct validity. Each of the participants completed the same bone-sawing procedure at the predefined maxillary region six times. For each trial, the sawing operative time, the maximal acceleration, and the percentage of the haptic force exceeding the threshold were recorded and analysed to evaluate the validity. After six trials, all of the participants scored the simulator in terms of safe force learning, stable hand control and overall performance to confirm the face validity. Moreover, 10 novices in 2 groups identified the transfer validity on rapid prototype skull models by comparing the operative time and the maximal acceleration. Results The analysed results of construct validity showed that the two groups significantly reduced their sawing operative times after six trials. Regarding maximal acceleration, the curve significantly descended and reached a plateau after the fifth repetition (novices) or third repetition (surgeons). Regarding safe haptic force, the novices obviously reduced the percentage of the haptic force exceeding the threshold, with statistical significance after four trials, but the surgeons did not show a significant difference. Moreover, the subjectively scored results demonstrated that the proposed simulator was more helpful for the novices than for the experienced surgeons, with scores of 8.31 and 7.22, respectively, for their overall performance. The experimental results on skill transference showed that the experimental group performed the bone-sawing operation with lower maximal acceleration than the control group, with a significant difference (p < 0.05). These findings suggested that the simulator training had positive effects on real sawing. Conclusions The evaluation results proved the construct validity, face validity and the transfer validity of the simulator. These results indicated that this simulator was able to produce the effect of learning the bone-sawing skill, and it could provide a training alternative for novices. | Development and validation of a surgical training simulator with haptic feedback for learning bone-sawing skill
S1532046413002025 | Shape deformation as a fundamental geometric operation underpins a wide range of applications, from geometric modelling, medical imaging to biomechanics. In medical imaging, for example, to quantify the difference between two corresponding images, 2D or 3D, one needs to find the deformation between both images. However, such deformations, particularly deforming complex volume datasets, are prone to the problem of foldover, i.e. during deformation, the required property of one-to-one mapping no longer holds for some points. Despite numerous research efforts, the construction of a mathematically robust foldover-free solution subject to positional constraints remains open. In this paper, we address this challenge by developing a radial basis function-based deformation method. In particular we formulate an effective iterative mechanism which ensures the foldover-free property is satisfied all the time. The experimental results suggest that the resulting deformations meet the internal positional constraints. In addition to radial basis functions, this iterative mechanism can also be incorporated into other deformation approaches, e.g. B-spline based FFDs, to develop different deformable approaches for various applications. | Foldover-free shape deformation for biomedicine |
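A minimal numpy sketch of RBF-based displacement interpolation under positional constraints, as described in the row above, assuming a Gaussian kernel; the paper's iterative foldover-free mechanism is not reproduced here.

```python
import numpy as np

def rbf_deform(points, src_ctrl, dst_ctrl, sigma=1.0):
    """Interpolate a displacement field from control-point correspondences with a
    Gaussian RBF and apply it to `points`. Shapes: points (m, d), src/dst_ctrl (n, d)."""
    def phi(r):
        return np.exp(-(r / sigma) ** 2)

    d = dst_ctrl - src_ctrl                                       # control displacements
    K = phi(np.linalg.norm(src_ctrl[:, None] - src_ctrl[None], axis=-1))
    w = np.linalg.solve(K + 1e-9 * np.eye(len(src_ctrl)), d)      # RBF weights (regularised)
    B = phi(np.linalg.norm(points[:, None] - src_ctrl[None], axis=-1))
    return points + B @ w                                         # deformed points
```

In the paper's scheme, a deformation like this would be wrapped in an iterative check that keeps the mapping one-to-one; the sketch only shows the interpolation step that satisfies the positional constraints at the control points.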
S1532046413002037 | Objective Healthcare analytics research increasingly involves the construction of predictive models for disease targets across varying patient cohorts using electronic health records (EHRs). To facilitate this process, it is critical to support a pipeline of tasks: (1) cohort construction, (2) feature construction, (3) cross-validation, (4) feature selection, and (5) classification. To develop an appropriate model, it is necessary to compare and refine models derived from a diversity of cohorts, patient-specific features, and statistical frameworks. The goal of this work is to develop and evaluate a predictive modeling platform that can be used to simplify and expedite this process for health data. Methods To support this goal, we developed a PARAllel predictive MOdeling (PARAMO) platform which (1) constructs a dependency graph of tasks from specifications of predictive modeling pipelines, (2) schedules the tasks in a topological ordering of the graph, and (3) executes those tasks in parallel. We implemented this platform using Map-Reduce to enable independent tasks to run in parallel in a cluster computing environment. Different task scheduling preferences are also supported. Results We assess the performance of PARAMO on various workloads using three datasets derived from the EHR systems in place at Geisinger Health System and Vanderbilt University Medical Center and an anonymous longitudinal claims database. We demonstrate significant gains in computational efficiency against a standard approach. In particular, PARAMO can build 800 different models on a 300,000 patient data set in 3 h in parallel compared to 9 days if running sequentially. Conclusion This work demonstrates that an efficient parallel predictive modeling platform can be developed for EHR data. This platform can facilitate large-scale modeling endeavors and speed-up the research workflow and reuse of health information. This platform is only a first step and provides the foundation for our ultimate goal of building analytic pipelines that are specialized for health data researchers. | PARAMO: A PARAllel predictive MOdeling platform for healthcare analytic research using electronic health records
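A single-machine analogue of the dependency-graph scheduling idea in the row above, assuming Python 3.9+ for graphlib; the task names and the placeholder run function are hypothetical, and the paper's actual implementation runs independent tasks with Map-Reduce on a cluster.

```python
from concurrent.futures import FIRST_COMPLETED, ThreadPoolExecutor, wait
from graphlib import TopologicalSorter

# hypothetical pipeline: each task maps to the set of tasks it depends on
deps = {
    "cohort": set(),
    "features": {"cohort"},
    "cv_split": {"features"},
    "feature_selection": {"cv_split"},
    "classification": {"feature_selection"},
}

def run(task):
    # placeholder for the real unit of work (data extraction, model fit, ...)
    print("running", task)

ts = TopologicalSorter(deps)
ts.prepare()
with ThreadPoolExecutor(max_workers=4) as pool:
    pending = {}
    while ts.is_active():
        for task in ts.get_ready():            # every task whose dependencies are done
            pending[pool.submit(run, task)] = task
        done, _ = wait(pending, return_when=FIRST_COMPLETED)
        for fut in done:
            ts.done(pending.pop(fut))          # unlocks downstream tasks
```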
S1532046413002049 | Ontologies are useful tools for sharing and exchanging knowledge. However ontology construction is complex and often time consuming. In this paper, we present a method for building a bilingual domain ontology from textual and termino-ontological resources intended for semantic annotation and information retrieval of textual documents. This method combines two approaches: ontology learning from texts and the reuse of existing terminological resources. It consists of four steps: (i) term extraction from domain specific corpora (in French and English) using textual analysis tools, (ii) clustering of terms into concepts organized according to the UMLS Metathesaurus, (iii) ontology enrichment through the alignment of French and English terms using parallel corpora and the integration of new concepts, (iv) refinement and validation of results by domain experts. These validated results are formalized into a domain ontology dedicated to Alzheimer’s disease and related syndromes which is available online (http://lesim.isped.u-bordeaux2.fr/SemBiP/ressources/ontoAD.owl). The latter currently includes 5765 concepts linked by 7499 taxonomic relationships and 10,889 non-taxonomic relationships. Among these results, 439 concepts absent from the UMLS were created and 608 new synonymous French terms were added. The proposed method is sufficiently flexible to be applied to other domains. | Reuse of termino-ontological resources and text corpora for building a multilingual domain ontology: An application to Alzheimer’s disease |
S1532046413002050 | Background To facilitate research applying Natural Language Processing to clinical documents, tools and resources are needed for the automatic de-identification of Electronic Health Records. Objective This study investigates methods for developing a high-quality reference corpus for the de-identification of clinical documents in French. Methods A corpus comprising a variety of clinical document types covering several medical specialties was pre-processed with two automatic de-identification systems from the MEDINA suite of tools: a rule-based system and a system using Conditional Random Fields (CRF). The pre-annotated documents were revised by two human annotators trained to mark ten categories of Protected Health Information (PHI). The human annotators worked independently and were blind to the system that produced the pre-annotations they were revising. The best pre-annotation system was applied to another random selection of 100 documents. After revision by one annotator, this set was used to train a statistical de-identification system. Results Two gold standard sets of 100 documents were created based on the consensus of two human revisions of the automatic pre-annotations. The annotation experiment showed that (i) automatic pre-annotation obtained with the rule-based system performed better (F = 0.813) than the CRF system (F = 0.519), (ii) the human annotators spent more time revising the pre-annotations obtained with the rule-based system (from 102 to 160 minutes for 50 documents), compared to the CRF system (from 93 to 142 minutes for 50 documents), (iii) the quality of human annotation is higher when pre-annotations are obtained with the rule-based system (F-measure ranging from 0.970 to 0.987), compared to the CRF system (F-measure ranging from 0.914 to 0.981). Finally, only 20 documents from the training set were needed for the statistical system to outperform the pre-annotation systems that were trained on corpora from a medical speciality and hospital different from those in the reference corpus developed herein. Conclusion We find that better pre-annotations increase the quality of the reference corpus but require more revision time. A statistical de-identification method outperforms our rule-based system when as few as 20 custom training documents are available. | De-identification of clinical notes in French: towards a protocol for reference corpus development
S1532046413002062 | Chronic patients must carry out rigorous control of diverse factors in their lives. Diet, sport activity, medical analysis or blood glucose levels are some of them. This is a hard task, because some of these controls are performed very often; for instance, some diabetics measure their glucose levels several times every day, and patients with chronic renal disease, a progressive loss in renal function, should strictly control their blood pressure and diet. To facilitate this task for both the patient and the physician, we have developed a web application for chronic disease control, which we have particularized to diabetes. This system, called glUCModel, improves the communication and interaction between patients and doctors, and ultimately the quality of life of the former. Through a web application, patients can upload their personal and medical data, which are stored in a centralized database. In this way, doctors can consult this information and have better control over patient records. glUCModel also presents three novelties in disease management: a recommender system, an e-learning course and a module for automatic generation of glucose level models. The recommender system uses Case Based Reasoning. It provides automatic recommendations to the patient, based on the recorded data and physician preferences, to improve their habits and knowledge about the disease. The e-learning course provides patients with a space to consult information about the illness, and also to assess their own knowledge about the disease. Blood glucose levels are modeled by means of evolutionary computation, allowing glucose levels to be predicted using particular features of each patient. glUCModel was developed as a system where a web layer allows users to access the system from any device connected to the Internet, such as desktop computers, tablets or mobile phones. | glUCModel: A monitoring and modeling system for chronic diseases applied to diabetes
S1532046413002074 | With the increasing availability of genomic sequence data, numerous methods have been proposed for finding DNA motifs. The discovery of DNA motifs serves as a critical step in many biological applications. However, the privacy implication of DNA analysis is normally neglected in the existing methods. In this work, we propose a private DNA motif finding algorithm in which a DNA owner’s privacy is protected by a rigorous privacy model, known as ε-differential privacy. It provides provable privacy guarantees that are independent of adversaries’ background knowledge. Our algorithm makes use of the n-gram model and is optimized for processing large-scale DNA sequences. We evaluate the performance of our algorithm over real-life genomic data and demonstrate the promise of integrating privacy into DNA motif finding. | A private DNA motif finding algorithm
S1532046414000021 | Privacy has always been a great concern of patients and medical service providers. As a result of the recent advances in information technology and the government’s push for the use of Electronic Health Record (EHR) systems, a large amount of medical data is collected and stored electronically. This data needs to be made available for analysis but at the same time patient privacy has to be protected through de-identification. Although biomedical researchers often describe their research plans when they request anonymized data, most existing anonymization methods do not use this information when de-identifying the data. As a result, the anonymized data may not be useful for the planned research project. This paper proposes a data recipient centered approach to tailor the de-identification method based on input from the recipient of the data. We demonstrate our approach through an anonymization project for biomedical researchers with specific goals to improve the utility of the anonymized data for statistical models used for their research project. The selected algorithm improves a privacy protection method called Condensation by Aggarwal et al. Our methods were tested and validated on real cancer surveillance data provided by the Kentucky Cancer Registry. | A data recipient centered de-identification method to retain statistical attributes |
S1532046414000033 | Medication exposure is an important variable in virtually all clinical research, yet there is great variation in how the data are collected, coded, and analyzed. Coding and classification systems for medication data are heterogeneous in structure, and there is little guidance for implementing them, especially in large research networks and multi-site trials. Current practices for handling medication data in clinical trials have emerged from the requirements and limitations of paper-based data collection, but there are now many electronic tools to enable the collection and analysis of medication data. This paper reviews approaches to coding medication data in multi-site research contexts, and proposes a framework for the classification, reporting, and analysis of medication data. The framework can be used to develop tools for classifying medications in coded data sets to support context appropriate, explicit, and reproducible data analyses by researchers and secondary users in virtually all clinical research domains. | An informatics framework for the standardized collection and analysis of medication data in networked research |
S1532046414000045 | With the current expansion of data linkage research, the challenge is to find the balance between preserving the privacy of person-level data whilst making these data accessible for use to their full potential. We describe a privacy-protecting safe haven and secure remote access system, referred to as the Secure Anonymised Information Linkage (SAIL) Gateway. The Gateway provides data users with a familiar Windows interface and their usual toolsets to access approved anonymously-linked datasets for research and evaluation. We outline the principles and operating model of the Gateway, the features provided to users within the secure environment, and how we are approaching the challenges of making data safely accessible to increasing numbers of research users. The Gateway represents a powerful analytical environment and has been designed to be scalable and adaptable to meet the needs of the rapidly growing data linkage community. | A case study of the Secure Anonymous Information Linkage (SAIL) Gateway: A privacy-protecting remote access system for health-related research and evaluation |
S1532046414000057 | In this study we report on potential drug–drug interactions between drugs occurring in patient clinical data. Results are based on relationships in SemMedDB, a database of structured knowledge extracted from all MEDLINE citations (titles and abstracts) using SemRep. The core of our methodology is to construct two potential drug–drug interaction schemas, based on relationships extracted from SemMedDB. In the first schema, Drug1 and Drug2 interact through Drug1’s effect on some gene, which in turn affects Drug2. In the second, Drug1 affects Gene1, while Drug2 affects Gene2. Gene1 and Gene2, together, then have an effect on some biological function. After checking each drug pair from the medication lists of each of 22 patients, we found 19 known and 62 unknown drug–drug interactions using both schemas. For example, our results suggest that the interaction of Lisinopril, an ACE inhibitor commonly prescribed for hypertension, and the antidepressant sertraline can potentially increase the likelihood and possibly the severity of psoriasis. We also assessed the relationships extracted by SemRep from a linguistic perspective and found that the precision of SemRep was 0.58 for 300 randomly selected sentences from MEDLINE. Our study demonstrates that the use of structured knowledge in the form of relationships from the biomedical literature can support the discovery of potential drug–drug interactions occurring in patient clinical data. Moreover, SemMedDB provides a good knowledge resource for expanding the range of drugs, genes, and biological functions considered as elements in various drug–drug interaction pathways. | Using semantic predications to uncover drug–drug interactions in clinical data |
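A toy sketch of the first interaction schema described in the row above (Drug1 acting on a gene that in turn affects Drug2), expressed as a join over subject-predicate-object triples. The triples, predicates and drug names are illustrative only, not actual SemMedDB content, and the second, two-gene schema is not shown.

```python
# Toy predication triples in the spirit of (subject, predicate, object) rows.
predications = [
    ("lisinopril", "INHIBITS", "ACE"),
    ("ACE", "AFFECTS", "sertraline"),      # schema 1 bridge: drug -> gene -> drug
    ("sertraline", "STIMULATES", "SLC6A4"),
]

def schema1_pairs(preds, drugs):
    """Find Drug1 -> Gene -> Drug2 chains among a patient's medications."""
    by_subject = {}
    for s, p, o in preds:
        by_subject.setdefault(s, []).append(o)
    pairs = set()
    for d1 in drugs:
        for gene in by_subject.get(d1, []):
            for d2 in by_subject.get(gene, []):
                if d2 in drugs and d2 != d1:
                    pairs.add((d1, gene, d2))
    return pairs

print(schema1_pairs(predications, {"lisinopril", "sertraline"}))
```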
S1532046414000069 | Extracting information from unstructured clinical narratives is valuable for many clinical applications. Although Natural Language Processing (NLP) methods have been studied extensively in electronic medical records (EMR), few studies have explored NLP in extracting information from Chinese clinical narratives. In this study, we report the development and evaluation of methods for extracting tumor-related information from operation notes of hepatic carcinomas written in Chinese. Using 86 operation notes manually annotated by physicians as the training set, we explored both rule-based and supervised machine-learning approaches. Evaluated on 29 unseen operation notes, our best approach yielded a precision of 69.6%, a recall of 58.3% and an F-score of 63.5%. | Extracting important information from Chinese Operation Notes with natural language processing methods
S1532046414000070 | Due to the upcoming data deluge of genome data, the need for storing and processing large-scale genome data, easy access to biomedical analyses tools, efficient data sharing and retrieval has presented significant challenges. The variability in data volume results in variable computing and storage requirements, therefore biomedical researchers are pursuing more reliable, dynamic and convenient methods for conducting sequencing analyses. This paper proposes a Cloud-based bioinformatics workflow platform for large-scale next-generation sequencing analyses, which enables reliable and highly scalable execution of sequencing analyses workflows in a fully automated manner. Our platform extends the existing Galaxy workflow system by adding data management capabilities for transferring large quantities of data efficiently and reliably (via Globus Transfer), domain-specific analyses tools preconfigured for immediate use by researchers (via user-specific tools integration), automatic deployment on Cloud for on-demand resource allocation and pay-as-you-go pricing (via Globus Provision), a Cloud provisioning tool for auto-scaling (via HTCondor scheduler), and the support for validating the correctness of workflows (via semantic verification tools). Two bioinformatics workflow use cases as well as performance evaluation are presented to validate the feasibility of the proposed approach. | Cloud-based bioinformatics workflow platform for large-scale next-generation sequencing analyses |
S1532046414000082 | Genetic algorithms are widely used in the estimation of expression profiles from microarray data. However, these techniques are unable to produce stable and robust solutions suitable for use in clinical and biomedical studies. This paper presents a novel two-stage evolutionary strategy for gene feature selection combining the genetic algorithm with biological information extracted from the KEGG database. A comparative study is carried out over public data from three different types of cancer (leukemia, lung cancer and prostate cancer). Even though the analyses only use features having KEGG information, the results demonstrate that this two-stage evolutionary strategy increased the consistency, robustness and accuracy of a blind discrimination between relapsed and healthy individuals. Therefore, this approach could facilitate the definition of gene signatures for the clinical prognosis and diagnosis of cancer in the near future. Additionally, it could also be used for biological knowledge discovery about the studied disease. | Robust gene signatures from microarray data using genetic algorithms enriched with biological pathway keywords
S1532046414000094 | Patients’ medical conditions often evolve in complex and seemingly unpredictable ways. Even within a relatively narrow and well-defined episode of care, variations between patients in both their progression and eventual outcome can be dramatic. Understanding the patterns of events observed within a population that most correlate with differences in outcome is therefore an important task in many types of studies using retrospective electronic health data. In this paper, we present a method for interactive pattern mining and analysis that supports ad hoc visual exploration of patterns mined from retrospective clinical patient data. Our approach combines (1) visual query capabilities to interactively specify episode definitions, (2) pattern mining techniques to help discover important intermediate events within an episode, and (3) interactive visualization techniques that help uncover event patterns that most impact outcome and how those associations change over time. In addition to presenting our methodology, we describe a prototype implementation and present use cases highlighting the types of insights or hypotheses that our approach can help uncover. | A methodology for interactive mining and visual analysis of clinical event patterns using electronic health record data |
S1532046414000100 | The protection of privacy of individual-level information in genome-wide association study (GWAS) databases has been a major concern of researchers following the publication of “an attack” on GWAS data by Homer et al. (2008). Traditional statistical methods for confidentiality and privacy protection of statistical databases do not scale well to deal with GWAS data, especially in terms of guarantees regarding protection from linkage to external information. The more recent concept of differential privacy, introduced by the cryptographic community, is an approach that provides a rigorous definition of privacy with meaningful privacy guarantees in the presence of arbitrary external information, although the guarantees may come at a serious price in terms of data utility. Building on such notions, Uhler et al. (2013) proposed new methods to release aggregate GWAS data without compromising an individual’s privacy. We extend the methods developed in Uhler et al. (2013) for releasing differentially-private χ²-statistics by allowing for an arbitrary number of cases and controls, and for releasing differentially-private allelic test statistics. We also provide a new interpretation by assuming the controls’ data are known, which is a realistic assumption because some GWAS use publicly available data as controls. We assess the performance of the proposed methods through a risk-utility analysis on a real data set consisting of DNA samples collected by the Wellcome Trust Case Control Consortium and compare the methods with the differentially-private release mechanism proposed by Johnson and Shmatikov (2013). | Scalable privacy-preserving data sharing methodology for genome-wide association studies
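A minimal sketch of releasing several χ²-statistics under ε-differential privacy via the Laplace mechanism and basic composition; the sensitivity bound is taken as an input (papers in this line of work derive such bounds analytically for fixed numbers of cases and controls), so this is a generic illustration rather than the authors' specific mechanism.

```python
import numpy as np

rng = np.random.default_rng(0)

def private_chi2_release(chi2_values, sensitivity, epsilon):
    """Release m chi-square statistics under epsilon-differential privacy.
    By basic composition, each statistic gets Laplace noise with
    scale m * sensitivity / epsilon, so the total budget spent is epsilon."""
    chi2_values = np.asarray(chi2_values, dtype=float)
    m = len(chi2_values)
    noise = rng.laplace(scale=m * sensitivity / epsilon, size=m)
    return chi2_values + noise

# toy usage: release 3 SNP statistics with an assumed sensitivity bound of 8.0
print(private_chi2_release([12.4, 3.1, 25.7], sensitivity=8.0, epsilon=1.0))
```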
S1532046414000112 | Objectives To automatically identify and cluster clinical trials with similar eligibility features. Methods Using the public repository ClinicalTrials.gov as the data source, we extracted semantic features from the eligibility criteria text of all clinical trials and constructed a trial-feature matrix. We calculated the pairwise similarities for all clinical trials based on their eligibility features. For all trials, by selecting one trial as the center each time, we identified trials whose similarities to the central trial were greater than or equal to a predefined threshold and constructed center-based clusters. Then we identified unique trial sets with distinctive trial membership compositions from center-based clusters by disregarding their structural information. Results From the 145,745 clinical trials on ClinicalTrials.gov, we extracted 5,508,491 semantic features. Of these, 459,936 were unique and 160,951 were shared by at least one pair of trials. Crowdsourcing the cluster evaluation using Amazon Mechanical Turk (MTurk), we identified the optimal similarity threshold, 0.9. Using this threshold, we generated 8806 center-based clusters. Evaluation of a sample of the clusters by MTurk resulted in a mean score 4.331±0.796 on a scale of 1–5 (5 indicating “strongly agree that the trials in the cluster are similar”). Conclusions We contribute an automated approach to clustering clinical trials with similar eligibility features. This approach can be potentially useful for investigating knowledge reuse patterns in clinical trial eligibility criteria designs and for improving clinical trial recruitment. We also contribute an effective crowdsourcing method for evaluating informatics interventions. | Clustering clinical trials with similar eligibility criteria features |
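A minimal numpy sketch of the center-based clustering step from the row above, assuming a dense trial-by-feature matrix, cosine similarity, and the reported 0.9 threshold; extraction of semantic features from eligibility criteria text is not shown.

```python
import numpy as np

def center_based_clusters(X, threshold=0.9):
    """X: trial-by-feature matrix (rows = trials). For each trial taken as a center,
    collect all trials whose cosine similarity to it is >= threshold, then keep only
    clusters with distinct membership (structural information is discarded)."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    Xn = X / np.clip(norms, 1e-12, None)            # unit-normalise rows
    S = Xn @ Xn.T                                   # pairwise cosine similarities
    clusters = {frozenset(np.flatnonzero(S[i] >= threshold)) for i in range(len(X))}
    return [sorted(c) for c in clusters]

# toy usage with 4 trials and 3 binary eligibility features
X = np.array([[1, 1, 0], [1, 1, 0], [0, 1, 1], [0, 0, 1]], dtype=float)
print(center_based_clusters(X, threshold=0.9))
```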
S1532046414000124 | Breast cancer is the second leading cause of cancer death in women. Accurate early detection can effectively reduce the mortality rate caused by breast cancer. Masses and microcalcification clusters are important early signs of breast cancer. However, it is often difficult to distinguish abnormalities from normal breast tissues because of their subtle appearance and ambiguous margins. Computer aided diagnosis (CAD) helps the radiologist in detecting the abnormalities in an efficient way. This paper investigates a new classification approach for detection of breast abnormalities in digital mammograms using a Particle Swarm Optimized Wavelet Neural Network (PSOWNN). The proposed abnormality detection algorithm is based on extracting Laws Texture Energy Measures from the mammograms and classifying the suspicious regions by applying a pattern classifier. The method is applied to a real clinical database of 216 mammograms collected from mammogram screening centers. The detection performance of the CAD system is analyzed using a Receiver Operating Characteristic (ROC) curve. This curve indicates the trade-offs between sensitivity and specificity that are available from a diagnostic system, and thus describes the inherent discrimination capacity of the proposed system. The result shows that the area under the ROC curve of the proposed algorithm is 0.96853, with a sensitivity of 94.167% and a specificity of 92.105%. | Computer-aided detection of breast cancer on mammograms: A swarm intelligence optimized wavelet neural network approach
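A minimal sketch of Laws texture energy feature extraction from a mammogram region, assuming the standard five 5-tap Laws kernels and a simple local-energy window; the PSO-trained wavelet neural network classifier itself is not reproduced, and the preprocessing and window size are assumptions rather than the paper's settings.

```python
import numpy as np
from scipy.ndimage import convolve, uniform_filter

# Laws' 1-D kernels; the 2-D masks are their outer products
K = {"L5": [1, 4, 6, 4, 1], "E5": [-1, -2, 0, 2, 1], "S5": [-1, 0, 2, 0, -1],
     "R5": [1, -4, 6, -4, 1], "W5": [-1, 2, 0, -2, 1]}

def laws_energy_features(img, window=15):
    """Mean Laws texture energy per 2-D mask over the supplied region of interest."""
    img = img.astype(float)
    img = img - uniform_filter(img, size=window)       # remove local mean illumination
    feats = {}
    for a, ka in K.items():
        for b, kb in K.items():
            mask = np.outer(ka, kb)
            energy = uniform_filter(np.abs(convolve(img, mask)), size=window)
            feats[a + b] = energy.mean()
    return feats

# toy usage on a random patch standing in for a suspicious region
print(len(laws_energy_features(np.random.default_rng(0).random((64, 64)))))  # 25 features
```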
S1532046414000136 | As more and more electronic clinical information is becoming easier to access for secondary uses such as clinical research, approaches that enable faster and more collaborative research while protecting patient privacy and confidentiality are becoming more important. Clinical text de-identification offers such advantages but is typically a tedious manual process. Automated Natural Language Processing (NLP) methods can alleviate this process, but their impact on subsequent uses of the automatically de-identified clinical narratives has only barely been investigated. In the context of a larger project to develop and investigate automated text de-identification for Veterans Health Administration (VHA) clinical notes, we studied the impact of automated text de-identification on clinical information in a stepwise manner. Our approach started with a high-level assessment of clinical note informativeness and formatting, and ended with a detailed study of the overlap of select clinical information types and Protected Health Information (PHI). To investigate the informativeness (i.e., document type information, select clinical data types, and interpretation or conclusion) of VHA clinical notes, we used five different existing text de-identification systems. The informativeness was only minimally altered by these systems while formatting was only modified by one system. To examine the impact of de-identification on clinical information extraction, we compared counts of SNOMED-CT concepts found by an open source information extraction application in the original (i.e., not de-identified) version of a corpus of VHA clinical notes, and in the same corpus after de-identification. Only about 1.2–3% fewer SNOMED-CT concepts were found in de-identified versions of our corpus, and many of these concepts were PHI that was erroneously identified as clinical information. To study this impact in more detail and assess how generalizable our findings were, we examined the overlap between select clinical information annotated in the 2010 i2b2 NLP challenge corpus and automatic PHI annotations from our best-of-breed VHA clinical text de-identification system (nicknamed ‘BoB’). Overall, only 0.81% of the clinical information exactly overlapped with PHI, and 1.78% partly overlapped. We conclude that automated text de-identification’s impact on clinical information is small, but not negligible, and that improved disambiguation of clinical acronyms and eponyms could significantly reduce this impact. | Text de-identification for privacy protection: A study of its impact on clinical text information content
S1532046414000148 | Automatic recognition of clinical entities in the narrative text of health records is useful for constructing applications for documentation of patient care, as well as for secondary usage in the form of medical knowledge extraction. There are a number of named entity recognition studies on English clinical text, but less work has been carried out on clinical text in other languages. This study was performed on Swedish health records, and focused on four entities that are highly relevant for constructing a patient overview and for medical hypothesis generation, namely the entities: Disorder, Finding, Pharmaceutical Drug and Body Structure. The study had two aims: to explore how well named entity recognition methods previously applied to English clinical text perform on similar texts written in Swedish; and to evaluate whether it is meaningful to divide the more general category Medical Problem, which has been used in a number of previous studies, into the two more granular entities, Disorder and Finding. Clinical notes from a Swedish internal medicine emergency unit were annotated for the four selected entity categories, and the inter-annotator agreement between two pairs of annotators was measured, resulting in an average F-score of 0.79 for Disorder, 0.66 for Finding, 0.90 for Pharmaceutical Drug and 0.80 for Body Structure. A subset of the developed corpus was thereafter used for finding suitable features for training a conditional random fields model. Finally, a new model was trained on this subset, using the best features and settings, and its ability to generalise to held-out data was evaluated. This final model obtained an F-score of 0.81 for Disorder, 0.69 for Finding, 0.88 for Pharmaceutical Drug, 0.85 for Body Structure and 0.78 for the combined category Disorder+Finding. The obtained results, which are in line with or slightly lower than those for similar studies on English clinical text, many of them conducted using a larger training data set, show that the approaches used for English are also suitable for Swedish clinical text. However, a small proportion of the errors made by the model are less likely to occur in English text, showing that results might be improved by further tailoring the system to clinical Swedish. The entity recognition results for the individual entities Disorder and Finding show that it is meaningful to separate the general category Medical Problem into these two more granular entity types, e.g. for knowledge mining of co-morbidity relations and disorder-finding relations. | Automatic recognition of disorders, findings, pharmaceuticals and body structures from clinical text: An annotation and machine learning study |
S153204641400015X | An ever-increasing amount of medical data, such as electronic health records, is being collected, stored, shared and managed in large online health information systems and electronic medical record systems (EMR) (Williams et al., 2001; Virtanen, 2009; Huang and Liou, 2007) [1–3]. From such rich collections, data is often published in the form of census and statistical data sets for the purpose of knowledge sharing and enabling medical research. This brings with it an increasing need to protect the privacy of individuals, and it becomes an issue of great importance especially when information about patients is exposed to the public. While the concept of data privacy has been comprehensively studied for relational data, models and algorithms addressing the distinct differences and complex structure of XML data are yet to be explored. Currently, the common compromise method is to convert private XML data into relational data for publication. This ad hoc approach results in significant loss of useful semantic information previously carried in the private XML data. Health data often has very complex structure, which is best expressed in XML. In fact, XML is the standard format for exchanging (e.g. HL7 version 3, http://www.hl7.org) and publishing health information. Lack of means to deal directly with data in XML format is inevitably a serious drawback. In this paper we propose a novel privacy protection model for XML, and an algorithm for implementing this model. We provide general rules, both for transforming a private XML schema into a published XML schema, and for mapping private XML data to the new privacy-protected published XML data. In addition, we propose a new privacy property, δ-dependency, which can be applied to both relational and XML data, and that takes into consideration the hierarchical nature of sensitive data (as opposed to “quasi-identifiers”). Lastly, we provide an implementation of our model, algorithm and privacy property, and perform an experimental analysis, to demonstrate the proposed privacy scheme in practical application. | δ-Dependency for privacy-preserving XML data publishing
S1532046414000161 | Objective The current study aims to fill the gap in available healthcare de-identification resources by creating a new sharable dataset with realistic Protected Health Information (PHI) without reducing the value of the data for de-identification research. By releasing the annotated gold standard corpus with Data Use Agreement we would like to encourage other Computational Linguists to experiment with our data and develop new machine learning models for de-identification. This paper describes: (1) the modifications required by the Institutional Review Board before sharing the de-identification gold standard corpus; (2) our efforts to keep the PHI as realistic as possible; (3) and the tests to show the effectiveness of these efforts in preserving the value of the modified data set for machine learning model development. Materials and methods In a previous study we built an original de-identification gold standard corpus annotated with true Protected Health Information (PHI) from 3503 randomly selected clinical notes for the 22 most frequent clinical note types of our institution. In the current study we modified the original gold standard corpus to make it suitable for external sharing by replacing HIPAA-specified PHI with newly generated realistic PHI. Finally, we evaluated the research value of this new dataset by comparing the performance of an existing published in-house de-identification system, when trained on the new de-identification gold standard corpus, with the performance of the same system, when trained on the original corpus. We assessed the potential benefits of using the new de-identification gold standard corpus to identify PHI in the i2b2 and PhysioNet datasets that were released by other groups for de-identification research. We also measured the effectiveness of the i2b2 and PhysioNet de-identification gold standard corpora in identifying PHI in our original clinical notes. Results Performance of the de-identification system using the new gold standard corpus as a training set was very close to training on the original corpus (92.56 vs. 93.48 overall F-measures). Best i2b2/PhysioNet/CCHMC cross-training performances were obtained when training on the new shared CCHMC gold standard corpus, although performances were still lower than corpus-specific trainings. Discussion and conclusion We successfully modified a de-identification dataset for external sharing while preserving the de-identification research value of the modified gold standard corpus with limited drop in machine learning de-identification performance. | Preparing an annotated gold standard corpus to share with extramural investigators for de-identification research |
S1532046414000173 | The purpose of this paper is twofold: first, to propose an evolutionary-based method for building a decision model and, second, to assess and validate the model’s performance using five different real-world medical datasets (breast cancer and liver fibrosis) by comparing it with state-of-the-art machine learning techniques. The evolutionary-inspired approach has been used to develop the learning-based decision model in the following manner: the hybridization of algorithms has been considered as “crossover”, while the development of new variants can be thought of as “mutation”. An appropriate hierarchy of the component algorithms was established based on a statistically built fitness measure. A synergetic decision-making process, based on a weighted voting system, involved the collaboration between the selected algorithms in making the final decision. Well-established statistical performance measures and comparison tests have been extensively used to design and implement the model. Finally, the proposed method has been tested on five medical datasets, four of which are publicly available, and contrasted with state-of-the-art techniques, showing its efficiency in supporting the medical decision-making process. | Evolutionary strategy to develop learning-based decision systems. Application to breast cancer and liver fibrosis stadialization
S1532046414000185 | Background The popularity of social networks has triggered a number of research efforts on network analyses of research collaborations in the Clinical and Translational Science Award (CTSA) community. Those studies mainly focus on the general understanding of collaboration networks by measuring common network metrics. More fundamental questions about collaborations still remain unanswered, such as recognizing “influential” nodes and identifying potential new collaborations that are most rewarding. Methods We analyzed biomedical research collaboration networks (RCNs) constructed from a dataset of research grants collected at a CTSA institution (i.e., University of Arkansas for Medical Sciences (UAMS)) in a comprehensive and systematic manner. First, our analysis covers the full spectrum of an RCN study: from network modeling to network characteristics measurement, from key nodes recognition to potential links (collaborations) suggestion. Second, our analysis employs non-conventional models and techniques including a weighted network model for representing collaboration strength, rank aggregation for detecting important nodes, and Random Walk with Restart (RWR) for suggesting new research collaborations. Results By applying our models and techniques to RCNs at UAMS prior to and after the CTSA, we have gained valuable insights that not only reveal the temporal evolution of the network dynamics but also assess the effectiveness of the CTSA and its impact on a research institution. We find that collaboration networks at UAMS are not scale-free but small-world. Quantitative measures provide evidence that the RCNs at UAMS are moving towards favoring multidisciplinary research. Moreover, our link prediction model creates the basis for collaboration recommendations with impressive accuracy (AUC: 0.990, MAP@3: 1.48 and MAP@5: 1.522). Last but not least, an open-source visual analytical tool for RCNs is being developed and released through Github. Conclusions Through this study, we have developed a set of techniques and tools for analyzing research collaboration networks and conducted a comprehensive case study focusing on a CTSA institution. Our findings demonstrate the promising future of these techniques and tools in understanding the generative mechanisms of research collaborations and helping identify beneficial collaborations for members of the research community. | Social network analysis of biomedical research collaboration networks in a CTSA institution
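A minimal numpy sketch of Random Walk with Restart over a weighted collaboration adjacency matrix, as one way to score candidate new collaborations for a given researcher; the restart probability and convergence settings are assumptions, not the paper's configuration.

```python
import numpy as np

def rwr(A, seed, restart=0.15, tol=1e-9, max_iter=1000):
    """Random Walk with Restart on a (possibly weighted) adjacency matrix A.
    Returns stationary proximity scores to the seed node; high-scoring
    non-neighbors are candidate new collaborations."""
    col_sums = np.clip(A.sum(axis=0, keepdims=True), 1e-12, None)
    P = A / col_sums                       # column-normalised transition matrix
    r = np.zeros(len(A)); r[seed] = 1.0    # start all probability mass at the seed
    e = r.copy()
    for _ in range(max_iter):
        r_new = (1 - restart) * P @ r + restart * e
        if np.abs(r_new - r).sum() < tol:
            break
        r = r_new
    return r

# toy usage: 4 researchers, edge weights = number of shared grants
A = np.array([[0, 3, 1, 0], [3, 0, 0, 2], [1, 0, 0, 0], [0, 2, 0, 0]], dtype=float)
print(rwr(A, seed=0))
```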
S1532046414000197 | Introduction Existing record linkage methods do not handle missing linking field values in an efficient and effective manner. The objective of this study is to investigate three novel methods for improving the accuracy and efficiency of record linkage when record linkage fields have missing values. Methods By extending the Fellegi–Sunter scoring implementations available in the open-source Fine-grained Record Linkage (FRIL) software system we developed three novel methods to solve the missing data problem in record linkage, which we refer to as: Weight Redistribution, Distance Imputation, and Linkage Expansion. Weight Redistribution removes fields with missing data from the set of quasi-identifiers and redistributes the weight from the missing attribute based on relative proportions across the remaining available linkage fields. Distance Imputation imputes the distance between the missing data fields rather than imputing the missing data value. Linkage Expansion adds previously considered non-linkage fields to the linkage field set to compensate for the missing information in a linkage field. We tested the linkage methods using simulated data sets with varying field value corruption rates. Results The methods developed had sensitivity ranging from .895 to .992 and positive predictive values (PPV) ranging from .865 to 1 in data sets with low corruption rates. Increased corruption rates lead to decreased sensitivity for all methods. Conclusions These new record linkage algorithms show promise in terms of accuracy and efficiency and may be valuable for combining large data sets at the patient level to support biomedical and clinical research. | Improving record linkage performance in the presence of missing linkage data |
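A simplified sketch of the Weight Redistribution idea from the row above, on top of Fellegi–Sunter-style field weights: fields missing in either record are dropped and their weight is spread proportionally over the fields that are present. Exact-match agreement, the symmetric +/- scoring and the example weights are illustrative simplifications, not the FRIL implementation.

```python
def fs_score(rec_a, rec_b, weights):
    """Score a candidate record pair; missing fields trigger weight redistribution."""
    present = [f for f in weights
               if rec_a.get(f) is not None and rec_b.get(f) is not None]
    if not present:
        return 0.0
    # redistribute the weight of missing fields over the available ones
    scale = sum(weights.values()) / sum(weights[f] for f in present)
    return sum(weights[f] * scale * (1.0 if rec_a[f] == rec_b[f] else -1.0)
               for f in present)

w = {"last_name": 4.0, "dob": 5.0, "zip": 2.0}
print(fs_score({"last_name": "Smith", "dob": "1980-01-01", "zip": None},
               {"last_name": "Smith", "dob": "1980-01-01", "zip": "40202"}, w))
```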
S1532046414000203 | Predicting the clinical outcomes of cancer patients is a challenging task in biomedicine. A personalized and refined therapy based on predicting prognostic outcomes of cancer patients has been actively sought in the past decade. Accurate prognostic prediction requires higher-order representations of complex dependencies among genetic factors. However, identifying the co-regulatory roles and functional effects of genetic interactions on cancer prognosis is hindered by the complexity of the interactions. Here we propose a prognostic prediction model based on evolutionary learning that identifies higher-order prognostic biomarkers of cancer clinical outcomes. The proposed model represents the interactions of prognostic genes as a combinatorial space. It adopts a flexible hypergraph structure composed of a large population of hyperedges that encode higher-order relationships among many genetic factors. The hyperedge population is optimized by an evolutionary learning method based on sequential Bayesian sampling. The proposed learning approach effectively balances performance and parsimony of the model using information-theoretic dependency and complexity-theoretic regularization priors. Using MAQC-II project data, we demonstrate that our model can handle high-dimensional data more effectively than state-of-the-art classification models. We also identify potential gene interactions characterizing prognosis and recurrence risk in cancer. | Bayesian evolutionary hypergraph learning for predicting cancer clinical outcomes |
S1532046414000392 | The last decade has seen an exponential growth in the quantity of clinical data collected nationwide, triggering an increase in opportunities to reuse the data for biomedical research. The Vanderbilt research data warehouse framework consists of identified and de-identified clinical data repositories, fee-for-service custom services, and tools built atop the data layer to assist researchers across the enterprise. Providing resources dedicated to research initiatives benefits not only the research community, but also clinicians, patients and institutional leadership. This work provides a summary of our approach in the secondary use of clinical data for research domain, including a description of key components and a list of lessons learned, designed to assist others assembling similar services and infrastructure. | Secondary use of clinical data: The Vanderbilt approach |
S1532046414000409 | The use of genome-wide, sample-matched microRNA (miRNA)-mRNA expression data provides a powerful tool for the investigation of miRNAs and genes involved in diseases. The identification of miRNA-regulated pathways has been crucial for analysis of the role of miRNAs. However, the classical identification method fails to consider the structural information of pathways and the regulation of miRNAs simultaneously. We proposed a method that simultaneously integrated the change in gene expression and structural information in order to identify pathways. Our method used fold changes in miRNAs and gene products, along with the quantification of the regulatory effect on target genes, to measure the change in gene expression. Topological characteristics were investigated to measure the influence of gene products on entire pathways. Through the analysis of multiple myeloma and prostate cancer expression data, our method was proven to be effective and reliable in identifying disease risk pathways that are regulated by miRNAs. Further analysis showed that the structure of a pathway plays a crucial role in the recognition of the pathway as a factor in disease risk. | The detection of risk pathways, regulated by miRNAs, via the integration of sample-matched miRNA-mRNA profiles and pathway structure
S1532046414000410 | This paper presents a novel systematic approach for patient-specific classification of long-term Electroencephalography (EEG). The goal is to extract the seizure sections with a high accuracy to ease the neurologist’s burden of inspecting such long-term EEG data. We aim to achieve this using minimal feedback from the neurologist. To accomplish this, we use the majority of the state-of-the-art features proposed in this domain for evolving a collective network of binary classifiers (CNBC) using multi-dimensional particle swarm optimization (MD PSO). Multiple CNBCs are then used to form a CNBC ensemble (CNBC-E), which aggregates epileptic seizure frames from the classification map of each CNBC in order to maximize the sensitivity rate. Finally, a morphological filter forms the final epileptic segments while filtering out the outliers in the form of classification noise. The proposed system is fully generic and does not require any a priori information about the patient such as the list of relevant EEG channels. The results of the classification experiments, which are performed over the benchmark CHB-MIT scalp long-term EEG database, show that the proposed system can achieve all the aforementioned objectives and exhibits a significantly superior performance compared to several other state-of-the-art methods. Using a limited training dataset formed from less than 2 min of seizure and 24 min of non-seizure data on average, taken from the early 25% section of the EEG record of each patient, the proposed system establishes an average sensitivity rate above 89% along with an average specificity rate above 93% over the test set. | Automated patient-specific classification of long-term Electroencephalography
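A minimal sketch of the final morphological filtering step on a per-frame binary classification map, using 1-D closing and opening from scipy.ndimage; the structuring-element lengths are assumptions, not the paper's settings.

```python
import numpy as np
from scipy.ndimage import binary_closing, binary_opening

def smooth_seizure_map(frame_predictions, min_gap=5, min_run=5):
    """Morphological filtering of a per-frame binary classification map:
    closing fills short gaps inside a detected seizure segment, and opening
    removes isolated positive runs shorter than `min_run` frames."""
    x = np.asarray(frame_predictions, dtype=bool)
    x = binary_closing(x, structure=np.ones(min_gap, dtype=bool))
    x = binary_opening(x, structure=np.ones(min_run, dtype=bool))
    return x.astype(int)

# toy usage: a noisy frame-level prediction sequence
raw = [0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0]
print(smooth_seizure_map(raw))
```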
S1532046414000422 | Evidence Based Medicine (EBM) provides a framework that makes use of the current best evidence in the domain to support clinicians in the decision making process. In most cases, the underlying foundational knowledge is captured in scientific publications that detail specific clinical studies or randomised controlled trials. Over the course of the last two decades, research has been performed on modelling key aspects described within publications (e.g., aims, methods, results), to enable the successful realisation of the goals of EBM. A significant outcome of this research has been the PICO (Population/Problem–Intervention–Comparison–Outcome) structure, and its refined version PIBOSO (Population–Intervention–Background–Outcome–Study Design–Other), both of which provide a formalisation of these scientific artefacts. Subsequently, using these schemes, diverse automatic extraction techniques have been proposed to streamline the knowledge discovery and exploration process in EBM. In this paper, we present a Machine Learning approach that aims to classify sentences according to the PIBOSO scheme. We use a discriminative set of features that do not rely on any external resources to achieve results comparable to the state of the art. A corpus of 1000 structured and unstructured abstracts – i.e., the NICTA-PIBOSO corpus – is used for training and testing. Our best CRF classifier achieves a micro-average F-score of 90.74% and 87.21%, respectively, over structured and unstructured abstracts, which represents an increase of 25.48 percentage points and 26.6 percentage points in F-score when compared to the best existing approaches. | Identifying scientific artefacts in biomedical literature: The Evidence Based Medicine use case |
S1532046414000434 | Objective The aim of this study is to conceptualize a novel approach, which facilitates us to design prototype interfaces for healthcare software. Methods Concepts and techniques from various disciplines were used to conceptualize an interface design approach named MORTARS (Map Original Rhetorical To Adapted Rhetorical Situation). The concepts and techniques included in this approach are (1) rhetorical situation – a concept of philosophy provided by Bitzer (1968); (2) move analysis – an applied linguistic technique provided by Swales (1990) and Bhatia (1993); (3) interface design guidelines – a cognitive and computer science concept provided by Johnson (2010); (4) usability evaluation instrument – an interface evaluation questionnaire provided by Lund (2001); (5) user modeling via stereotyping – a cognitive and computer science concept provided by Rich (1979). A prototype interface for outpatient clinic software was designed to introduce the underlying concepts of MORTARS. The prototype interface was evaluated by thirty-two medical informaticians. Results The medical informaticians found the designed prototype interface to be useful (73.3%), easy to use (71.9%), easy to learn (93.1%), and satisfactory (53.2%). Conclusions MORTARS approach was found to be effective in designing the prototype user interface for the outpatient clinic software. This approach might be further used to design interfaces for various software pertaining to healthcare and other domains. | Conceptualization and application of an approach for designing healthcare software interfaces |
S1532046414000458 | Cloud computing is emerging as the next-generation IT architecture. However, cloud computing also raises security and privacy concerns since the users have no physical control over the outsourced data. This paper focuses on fairly retrieving encrypted private medical records outsourced to remote untrusted cloud servers in the case of medical accidents and disputes. Our goal is to enable an independent committee to fairly recover the original private medical records so that medical investigation can be carried out in a convincing way. We achieve this goal with a fair remote retrieval (FRR) model in which either t investigation committee members cooperatively retrieve the original medical data or none of them can get any information on the medical records. We realize the first FRR scheme by exploiting fair multi-member key exchange and homomorphic privately verifiable tags. Based on the standard computational Diffie–Hellman (CDH) assumption, our scheme is provably secure in the random oracle model (ROM). A detailed performance analysis and experimental results show that our scheme is efficient in terms of communication and computation. | FRR: Fair remote retrieval of outsourced private medical records in electronic health networks |
S1532046414000471 | Patient lists are project-specific sets of patients that can be queried in integrated data repositories (IDR’s). By allowing a set of patients to be an addition to the qualifying conditions of a query, returned results will refer to, and only to, that set of patients. We report a variety of use cases for such lists, including: restricting retrospective chart review to a defined set of patients; following a set of patients for practice management purposes; distributing “honest-brokered” (deidentified) data; adding phenotypes to biosamples; and enhancing the content of study or registry data. Among the capabilities needed to implement patient lists in an IDR are: capture of patient identifiers from a query and feedback of these into the IDR; the existence of a permanent internal identifier in the IDR that is mappable to external identifiers; the ability to add queryable attributes to the IDR; the ability to merge data from multiple queries; and suitable control over user access and de-identification of results. We implemented patient lists in a custom IDR of our own design. We reviewed capabilities of other published IDRs for focusing on sets of patients. The widely used i2b2 IDR platform has various ways to address patient sets, and it could be modified to add the low-overhead version of patient lists that we describe. | Using patient lists to add value to integrated data repositories |
S1532046414000483 | To address the need for greater evidence-based evaluation of Health Information Technology (HIT) systems we introduce a method of usability testing termed tree testing. In a tree test, participants are presented with an abstract hierarchical tree of the system taxonomy and asked to navigate through the tree in completing representative tasks. We apply tree testing to a commercially available health application, demonstrating a use case and providing a comparison with more traditional in-person usability testing methods. Online tree tests (N =54) and in-person usability tests (N =15) were conducted from August to September 2013. Tree testing provided a method to quantitatively evaluate the information structure of a system using various navigational metrics including completion time, task accuracy, and path length. The results of the analyses compared favorably to the results seen from the traditional usability test. Tree testing provides a flexible, evidence-based approach for researchers to evaluate the information structure of HITs. In addition, remote tree testing provides a quick, flexible, and high volume method of acquiring feedback in a structured format that allows for quantitative comparisons. With the diverse nature and often large quantities of health information available, addressing issues of terminology and concept classifications during the early development process of a health information system will improve navigation through the system and save future resources. Tree testing is a usability method that can be used to quickly and easily assess information hierarchy of health information systems. | Tree testing of hierarchical menu structures for health applications |
S1532046414000501 | Objectives This review provided a conceptual framework of sample size calculations in the studies of diagnostic test accuracy in various conditions and test outcomes. Methods The formulae of sample size calculations for estimation of adequate sensitivity/specificity, likelihood ratio and AUC as an overall index of accuracy and also for testing in single modality and comparing two diagnostic tasks have been presented for desired confidence interval. Results The required sample sizes were calculated and tabulated with different levels of accuracies and marginal errors with 95% confidence level for estimating and for various effect sizes with 80% power for purpose of testing as well. The results show how sample size is varied with accuracy index and effect size of interest. Conclusion This would help the clinicians when designing diagnostic test studies that an adequate sample size is chosen based on statistical principles in order to guarantee the reliability of study. | Sample size estimation in diagnostic test studies of biomedical informatics |
S1532046414000537 | Objective We aim to quantify HMG-CoA reductase inhibitor (statin) prescriber-intended exposure-time using a generalizable algorithm that interrogates data stored in the electronic health record (EHR). Materials and methods This study was conducted using the Marshfield Clinic (MC) Personalized Medicine Research Project (PMRP), a central Wisconsin-based population and biobank with, on average, 30 years of electronic health data available in the independently-developed MC Cattails MD EHR. Individuals with evidence of statin exposure were identified from the electronic records, and manual chart abstraction of all mentions of prescribed statins was completed. We then performed electronic chart abstraction of prescriber-intended exposure time for statins, using previously identified logic to capture pill-splitting events, normalizing dosages to atorvastatin-equivalent dose. Four models using iterative training sets were tested to capture statin end-dates. Calculated cumulative provider-intended exposures were compared to manually abstracted gold-standard measures of ordered statin prescriptions, and aggregate model results (totals) for training and validation populations were compared. The most successful model was the one with the smallest discordance between modeled and manually abstracted Atorvastatin 10 mg/year Equivalents (AEs). Results Of the approximately 20,000 patients enrolled in the PMRP, 6243 were identified with statin exposure during the study period (1997–2011), 59.8% of whom had been prescribed multiple statins over an average of approximately 11 years. When the best-fit algorithm was implemented and validated by manual chart review for the statin-ordered population, it was found to capture 95.9% of the correlation between calculated and expected statin provider-intended exposure time for a random validation set, and the best-fit model was able to predict intended statin exposure to within a standard deviation of 2.6 AEs, with a standard error of +0.23 AEs. Conclusion We demonstrate that normalized provider-intended statin exposure time can be estimated using a combination of structured clinical data sources, including a medications ordering system and a clinical appointment coordination system, supplemented with text data from clinical notes. | Development of reusable logic for determination of statin exposure-time from electronic health records
S1532046414000549 | Real-time Obstructive Sleep Apnea (OSA) episode detection and monitoring are important for society in terms of an improvement in the health of the general population and of a reduction in mortality and healthcare costs. Currently, to diagnose OSA patients undergo PolySomnoGraphy (PSG), a complicated and invasive test to be performed in a specialized center involving many sensors and wires. Accordingly, each patient is required to stay in the same position throughout the duration of one night, thus restricting their movements. This paper proposes an easy, cheap, and portable approach for the monitoring of patients with OSA, which collects single-channel ElectroCardioGram (ECG) data only. It is easy to perform from the patient’s point of view because only one wearable sensor is required, so the patient is not restricted to keeping the same position all night long, and the detection and monitoring can be carried out in any place through the use of a mobile device. Our approach is based on the automatic extraction, from a database containing information about the monitored patient, of explicit knowledge in the form of a set of IF…THEN rules containing typical parameters derived from Heart Rate Variability (HRV) analysis. The extraction is carried out off-line by means of a Differential Evolution algorithm. This set of rules can then be exploited in the real-time mobile monitoring system developed at our Laboratory: the ECG data is gathered by a wearable sensor and sent to a mobile device, where it is processed in real time. Subsequently, HRV-related parameters are computed from this data, and, if their values activate some of the rules describing the occurrence of OSA, an alarm is automatically produced. This approach has been tested on a well-known literature database of OSA patients. The numerical results show its effectiveness in terms of accuracy, sensitivity, and specificity, and the achieved sets of rules evidence the user-friendliness of the approach. Furthermore, the method is compared against other well known classifiers, and its discrimination ability is shown to be higher. | Monitoring Obstructive Sleep Apnea by means of a real-time mobile system based on the automatic extraction of sets of rules through Differential Evolution |
S1532046414000562 | Time motion studies were first described in the early 20th century in industrial engineering, referring to a quantitative data collection method where an external observer captured detailed data on the duration and movements required to accomplish a specific task, coupled with an analysis focused on improving efficiency. Since then, they have been broadly adopted by biomedical researchers and have become a focus of attention due to the current interest in clinical workflow related factors. However, attempts to aggregate results from these studies have been difficult, resulting from a significant variability in the implementation and reporting of methods. While efforts have been made to standardize the reporting of such data and findings, a lack of common understanding on what “time motion studies” are remains, which not only hinders reviews, but could also partially explain the methodological variability in the domain literature (duration of the observations, number of tasks, multitasking, training rigor and reliability assessments) caused by an attempt to cluster dissimilar sub-techniques. A crucial milestone towards the standardization and validation of time motion studies corresponds to a common understanding, accompanied by a proper recognition of the distinct techniques it encompasses. Towards this goal, we conducted a review of the literature aiming at identifying what is being referred to as “time motion studies”. We provide a detailed description of the distinct methods used in articles referenced or classified as “time motion studies”, and conclude that currently it is used not only to define the original technique, but also to describe a broad spectrum of studies whose only common factor is the capture and/or analysis of the duration of one or more events. To maintain alignment with the existing broad scope of the term, we propose a disambiguation approach by preserving the expanded conception, while recommending the use of a specific qualifier “continuous observation time motion studies” to refer to variations of the original method (the use of an external observer recording data continuously). In addition, we present a more granular naming for sub-techniques within continuous observation time motion studies, expecting to reduce the methodological variability within each sub-technique and facilitate future results aggregation. | Time motion studies in healthcare: What are we talking about? |
S1532046414000574 | Computer-assisted image retrieval applications could assist radiologist interpretations by identifying similar images in large archives as a means to providing decision support. However, the semantic gap between low-level image features and their high level semantics may impair the system performances. Indeed, it can be challenging to comprehensively characterize the images using low-level imaging features to fully capture the visual appearance of diseases on images, and recently the use of semantic terms has been advocated to provide semantic descriptions of the visual contents of images. However, most of the existing image retrieval strategies do not consider the intrinsic properties of these terms during the comparison of the images beyond treating them as simple binary (presence/absence) features. We propose a new framework that includes semantic features in images and that enables retrieval of similar images in large databases based on their semantic relations. It is based on two main steps: (1) annotation of the images with semantic terms extracted from an ontology, and (2) evaluation of the similarity of image pairs by computing the similarity between the terms using the Hierarchical Semantic-Based Distance (HSBD) coupled to an ontological measure. The combination of these two steps provides a means of capturing the semantic correlations among the terms used to characterize the images that can be considered as a potential solution to deal with the semantic gap problem. We validate this approach in the context of the retrieval and the classification of 2D regions of interest (ROIs) extracted from computed tomographic (CT) images of the liver. Under this framework, retrieval accuracy of more than 0.96 was obtained on a 30-images dataset using the Normalized Discounted Cumulative Gain (NDCG) index that is a standard technique used to measure the effectiveness of information retrieval algorithms when a separate reference standard is available. Classification results of more than 95% were obtained on a 77-images dataset. For comparison purpose, the use of the Earth Mover’s Distance (EMD), which is an alternative distance metric that considers all the existing relations among the terms, led to results retrieval accuracy of 0.95 and classification results of 93% with a higher computational cost. The results provided by the presented framework are competitive with the state-of-the-art and emphasize the usefulness of the proposed methodology for radiology image retrieval and classification. | A hierarchical knowledge-based approach for retrieving similar medical images described with semantic annotations |
S1532046414000586 | Purpose The purpose of this study was to determine the degree of overlap between the International Classification for Nursing Practice (ICNP®) and the Systematized Nomenclature of Medicine–Clinical Terms (SNOMED–CT), with a specific focus on nursing problems, as a first step towards harmonization of content between the two terminologies. Methods Work within this study was divided across two ICNP subsets. The first subset (n =238) was made up of ICNP diagnosis/outcome concepts that had been included in previous experimental mapping activities with Clinical Care Classification (CCC) and NANDA-International (NANDA-I). These ICNP concepts and their equivalent concepts within CCC and NANDA-I were used within the Unified Medical Language System (UMLS) framework to derive automatically candidate mappings to SNOMED–CT for validation by two reviewers. The second subset (n =565) included all other ICNP diagnosis/outcome concepts plus those concepts from the first subset where the candidate mappings were rejected. Mappings from the second subset to SNOMED–CT were manually identified independently by the same two reviewers. Differences between the reviewers were resolved through discussion. The observed agreement between the two reviewers was calculated along with the inter-rater reliability using Cohen’s Kappa (κ). Results For the first semi-automated mapping, according to the two reviewers the great majority of ICNP concepts (91.6%) correctly mapped to SNOMED–CT in UMLS. There was a good level of agreement between the reviewers in this part of the exercise (κ =0.7). For the second manual mapping, nearly two-thirds of ICNP concepts (61.4%) could not be mapped to any SNOMED–CT concept. There was only a moderate level of agreement between the reviewers (κ =0.45). While most of the mappings were one-to-one mappings, there were ambiguities in both terminologies which led to difficulties. The absence of mappings was due to a large extent to differences in content coverage, although lexical variations and semantic differences also played a part. Conclusions This study demonstrated a degree of overlap between ICNP and SNOMED–CT; it also identified significant differences in content coverage. The results from the semi-automated mapping were encouraging, particularly for ‘older’ ICNP content. The results from the manual mapping were less favorable suggesting a need for further enhancement of both terminologies, content development within SNOMED–CT and further research on mechanisms for harmonization. | Inter-terminology mapping of nursing problems |
S1532046414000598 | The integration of medical data coming from multiple sources is important in clinical research. Amongst others, it enables the discovery of appropriate subjects in patient-oriented research and the identification of innovative results in epidemiological studies. At the same time, the integration of medical data faces significant ethical and legal challenges that impose access constraints. Some of these issues can be addressed by making available aggregated instead of raw record-level data. In many cases however, there is still a need for controlling access even to the resulting aggregated data, e.g., due to data provider’s policies. In this paper we present the Linked Medical Data Access Control (LiMDAC) framework that capitalizes on Linked Data technologies to enable controlling access to medical data across distributed sources with diverse access constraints. The LiMDAC framework consists of three Linked Data models, namely the LiMDAC metadata model, the LiMDAC user profile model, and the LiMDAC access policy model. It also includes an architecture that exploits these models. Based on the framework, a proof-of-concept platform is developed and its performance and functionality are evaluated by employing two usage scenarios. | The linked medical data access control framework |
S1532046414000604 | Purpose Effective communication between patients and health services providers is a key aspect for optimizing and maintaining these services. This work describes a system for the automatic evaluation of users’ perception of the quality of SmsCup, a reminder system for outpatient visits based on short message service (SMS). The final purpose is the creation of a closed-loop control system for the outpatient service, where patients’ complaints and comments represent feedback that can be used for a better implementation of the service itself. Methods SmsCup was adopted about eight years ago by an Italian healthcare organization, with very good results in reducing the no-show (missing visits) phenomenon. During these years, a number of citizens, even though not required to, sent a message back with comments about the service. The automatic interpretation of the content of those SMS may be useful for monitoring and improving service performance. Yet, due to the complex nature of SMS language, their interpretation represents an ongoing challenge. The proposed system uses conditional random fields as the information extraction method for classifying messages into several semantic categories. The categories refer to appreciation of the service or complaints of various types. Then, the system analyzes the extracted content and provides feedback to the service providers, enabling them to learn and act on this basis. Results At each step, the content of the messages reveals the actual state of the service as well as the efficacy of corrective actions previously undertaken. Our evaluations showed that: (i) the SMS classification system has achieved good overall performance with an average F1-measure and an overall accuracy of about 92%; (ii) the notification of the patients’ feedback to service providers showed a positive impact on service functioning. Conclusions Our study proposed an interactive patient-centered system for continuous monitoring of service quality. It has demonstrated the feasibility of a tool for the analysis and notification of patients’ feedback on their service experiences, which would support more regular access to the service. | Patients’ involvement in e-health services quality assessment: A system for the automatic interpretation of SMS-based patients’ feedback
S1532046414000616 | In this work, we present a novel application of time series discretization using evolutionary programming for the classification of precancerous cervical lesions. The approach optimizes the number of intervals in which the length and amplitude of the time series should be compressed, preserving the important information for classification purposes. Using evolutionary programming, the search for a good discretization scheme is guided by a cost function which considers three criteria: the entropy regarding the classification, the complexity measured as the number of different strings needed to represent the complete data set, and the compression rate assessed as the length of the discrete representation. This discretization approach is evaluated using time series data based on temporal patterns observed during a classical test used in cervical cancer detection; the classification accuracy reached by our method is compared with the well-known time series discretization algorithm SAX and the dimensionality reduction method PCA. Statistical analysis of the classification accuracy shows that the discrete representation is as efficient as the complete raw representation for the present application, reducing the dimensionality of the time series length by 97%. This representation is also very competitive in terms of classification accuracy when compared with similar approaches. | Application of time series discretization using evolutionary programming for classification of precancerous cervical lesions
S1532046414000628 | Objectives The role of social media as a source of timely and massive information has become more apparent since the era of Web 2.0. Multiple studies illustrated the use of information in social media to discover biomedical and health-related knowledge. Most methods proposed in the literature employ traditional document classification techniques that represent a document as a bag of words. These techniques work well when documents are rich in text and conform to standard English; however, they are not optimal for social media data where sparsity and noise are norms. This paper aims to address the limitations posed by the traditional bag-of-words based methods and proposes to use heterogeneous features in combination with ensemble machine learning techniques to discover health-related information, which could prove to be useful to multiple biomedical applications, especially those needing to discover health-related knowledge in large scale social media data. Furthermore, the proposed methodology could be generalized to discover different types of information in various kinds of textual data. Methodology Social media data is characterized by an abundance of short social-oriented messages that do not conform to standard languages, both grammatically and syntactically. The problem of discovering health-related knowledge in social media data streams is then transformed into a text classification problem, where a text is identified as positive if it is health-related and negative otherwise. We first identify the limitations of the traditional methods which train machines with N-gram word features, then propose to overcome such limitations by utilizing the collaboration of machine learning based classifiers, each of which is trained to learn a semantically different aspect of the data. The parameter analysis for tuning each classifier is also reported. Data sets Three data sets are used in this research. The first data set comprises approximately 5000 hand-labeled tweets, and is used for cross validation of the classification models in the small scale experiment, and for training the classifiers in the real-world large scale experiment. The second data set is a random sample of real-world Twitter data in the US. The third data set is a random sample of real-world Facebook Timeline posts. Evaluations Two sets of evaluations are conducted to investigate the proposed model’s ability to discover health-related information in the social media domain: small scale and large scale evaluations. The small scale evaluation employs 10-fold cross validation on the labeled data, and aims to tune parameters of the proposed models, and to compare with the state-of-the-art method. The large scale evaluation tests the trained classification models on the native, real-world data sets, and is needed to verify the ability of the proposed model to handle the massive heterogeneity in real-world social media. Findings The small scale experiment reveals that the proposed method is able to mitigate the limitations in the well established techniques existing in the literature, resulting in a performance improvement of 18.61% (F-measure). The large scale experiment further reveals that the baseline fails to perform well on larger data with higher degrees of heterogeneity, while the proposed method is able to yield reasonably good performance and outperform the baseline by 46.62% (F-measure) on average. | An ensemble heterogeneous classification methodology for discovering health-related knowledge in social media messages
S153204641400063X | The ubiquity of Online Social Networks (OSNs) is creating new sources for healthcare information, particularly in the context of pharmaceutical drugs. We aimed to examine the impact of a given OSN’s characteristics on the content of pharmaceutical drug discussions from that OSN. We compared the effect of four distinguishing characteristics from ten different OSNs on the content of their pharmaceutical drug discussions: (1) General versus Health OSN; (2) OSN moderation; (3) OSN registration requirements; and (4) OSNs with a question and answer format. The effects of these characteristics were measured both quantitatively and qualitatively. Our results show that an OSN’s characteristics indeed affect the content of its discussions. Based on their information needs, healthcare providers may use our findings to pick the right OSNs or to advise patients regarding their needs. Our results may also guide the creation of new and more effective domain-specific health OSNs. Further, future researchers of online healthcare content in OSNs may find our results informative while choosing OSNs as data sources. We reported several findings about the impact of OSN characteristics on the content of pharmaceutical drug discussion, and synthesized these findings into actionable items for both healthcare providers and future researchers of healthcare discussions on OSNs. Future research on the impact of OSN characteristics could include user demographics, quality and safety of information, and efficacy of OSN usage. | Pharmaceutical drugs chatter on Online Social Networks |
S1532046414000641 | Motivation Gene set enrichment analysis (GSEA) annotates gene microarray data with functional information from the biomedical literature to improve gene–disease association prediction. We hypothesize that supplementing GSEA with comprehensive gene function catalogs built automatically using information extracted from the scientific literature will significantly enhance GSEA prediction quality. Methods Gold standard gene sets for breast cancer (BrCa) and colorectal cancer (CRC) were derived from the literature. Two gene function catalogs (CMeSH and CUMLS) were automatically generated: (1) by using Entrez Gene to associate all recorded human genes with PubMed article IDs, and (2) by using the genes mentioned in each PubMed article and associating each with the article’s MeSH terms (in CMeSH) and extracted UMLS concepts (in CUMLS). Microarray data from the Gene Expression Omnibus for BrCa and CRC was then annotated using CMeSH and CUMLS and, for comparison, also with several pre-existing catalogs (C2, C4 and C5 from the Molecular Signatures Database). Ranking was done using a standard GSEA implementation (GSEA-p). Gene function predictions for enriched array data were evaluated against the gold standard by measuring area under the receiver operating characteristic curve (AUC). Results Comparison of ranking using the literature enrichment catalogs, the pre-existing catalogs as well as five randomly generated catalogs shows that the literature derived enrichment catalogs are more effective. The AUC for BrCa using the unenriched gene expression dataset was 0.43, increasing to 0.89 after gene set enrichment with CUMLS. The AUC for CRC using the unenriched gene expression dataset was 0.54, increasing to 0.9 after enrichment with CMeSH. C2 increased AUC (BrCa 0.76, CRC 0.71) but C4 and C5 performed poorly (between 0.35 and 0.5). The randomly generated catalogs also performed poorly, equivalent to random guessing. Discussion Gene set enrichment significantly improved prediction of gene–disease association. Selection of enrichment catalog had a substantial effect on prediction accuracy. The literature based catalogs performed better than the MSigDB catalogs, possibly because they are more recent. Catalogs generated automatically from the literature can be kept up to date. Conclusion Prediction of gene–disease association is a fundamental task in biomedical research. GSEA provides a promising method when using literature-based enrichment catalogs. Availability The literature based catalogs generated and used in this study are available from http://www2.chi.unsw.edu.au/literature-enrichment. | Gene–disease association with literature based enrichment
S1532046414000653 | Objective The profusion of data accumulating in the form of medical records could be of great help for developing medical decision support systems. The objective of this paper is to present a methodology for designing data-driven medical diagnostic tools, based on neural network classifiers. Methods The proposed approach adopts the radial basis function (RBF) neural network architecture and the non-symmetric fuzzy means (NSFM) training algorithm, which presents certain advantages including better approximation capabilities and shorter computational times. The novelty in this work consists of adapting the NSFM algorithm to train RBF classifiers, and suitably tailoring the evolutionary simulated annealing (ESA) technique to optimize the produced RBF models. The integration of ESA is critical as it helps the optimization procedure to escape from local minima, which could arise from the application of the traditional simulated annealing algorithm, and thus discover improved solutions. The resulting method is evaluated in nine different medical benchmark datasets, where the common objective is to train a suitable classifier. The evaluation includes a comparison with two different schemes for training classifiers, including a standard RBF training technique and support vector machines (SVMs). Accuracy% and the Matthews Correlation Coefficient (MCC) are used for comparing the performance of the three classifiers. Results Results show that the use of ESA helps to greatly improve the performance of the NSFM algorithm and provide satisfactory classification accuracy. In almost all benchmark datasets, the best solution found by the ESA-NSFM algorithm outperforms the results produced by the SFM algorithm and SVMs, considering either the accuracy% or the MCC criterion. Furthermore, in the majority of datasets, the average solution of the ESA-NSFM population is statistically significantly higher in terms of accuracy% and MCC at the 95% confidence level, compared to the global optimum solution that its rivals could achieve. As far as computational times are concerned, the proposed approach was found to be faster compared to SVMs. Conclusions The results of this study suggest that the ESA-NSFM algorithm can form the basis of a generic method for knowledge extraction from data originating from different kinds of medical records. Testing the proposed approach on a number of benchmark datasets, indicates that it provides increased diagnostic accuracy in comparison with two different classifier training methods. | A medical diagnostic tool based on radial basis function classifiers and evolutionary simulated annealing |
S1532046414000665 | Identifying new drug target (DT) proteins is important in pharmaceutical and biomedical research. General machine learning method (GMLM) classifiers perform fairly well at prediction if the training dataset is well prepared. However, a common problem in preparing the training dataset is the lack of a negative dataset. To address this problem, we proposed two methods that can help GMLM better select the negative training dataset from the test dataset. The prediction accuracy was improved with the training dataset from the proposed strategies. The classifier identified 1797 and 227 potential DT proteins, some of which were mentioned in previous research, which added correlative weight to the new method. Practically, these two sets of potential DT proteins or their homologues are worth considering. | Screening drug target proteins based on sequence information |
S1532046414000677 | In light of the heightened problems of polysemy, synonymy, and hyponymy in clinical text, we hypothesize that patient cohort identification can be improved by using a large, in-domain clinical corpus for query expansion. We evaluate the utility of four auxiliary collections for the Text REtrieval Conference task of IR-based cohort retrieval, considering the effects of collection size, the inherent difficulty of a query, and the interaction between the collections. Each collection was applied to aid in cohort retrieval from the Pittsburgh NLP Repository by using a mixture of relevance models. Measured by mean average precision, performance using any auxiliary resource (MAP=0.386 and above) is shown to improve over the baseline query likelihood model (MAP=0.373). Considering subsets of the Mayo Clinic collection, we found that after including 2.5 billion term instances, retrieval is not improved by adding more instances. However, adding the Mayo Clinic collection did improve performance significantly over any existing setup, with a system using all four auxiliary collections obtaining the best results (MAP=0.4223). Because optimal results in the mixture of relevance models would require selective sampling of the collections, the common sense approach of “use all available data” is inappropriate. However, we found that it was still beneficial to add the Mayo corpus to any mixture of relevance models. On the task of IR-based cohort identification, query expansion with the Mayo Clinic corpus resulted in consistent and significant improvements. As such, any IR query expansion with access to a large clinical corpus could benefit from the additional resource. Additionally, we have shown that more data is not necessarily better, implying that there is value in collection curation. | Using large clinical corpora for query expansion in text-based cohort identification |
S1532046414000781 | Proliferation of health information technologies creates opportunities to improve clinical and public health, including high quality, safer care and lower costs. To maximize such potential benefits, health information technologies must readily and reliably exchange information with other systems. However, evidence from public health surveillance programs in two states suggests that operational clinical information systems often fail to use available standards, a barrier to semantic interoperability. Furthermore, analysis of existing policies incentivizing semantic interoperability suggests they have limited impact and are fragmented. In this essay, we discuss three approaches for increasing semantic interoperability to support national goals for using health information technologies. A clear, comprehensive strategy requiring collaborative efforts by clinical and public health stakeholders is suggested as a guide for the long road towards better population health data and outcomes. | The long road to semantic interoperability in support of public health: Experiences from two states |
S1532046414000793 | Although MedDRA has obvious advantages over previous terminologies for coding adverse drug reactions and discovering potential signals using data mining techniques, its terminological organization constrains users to search terms according to predefined categories. Adding formal definitions to MedDRA would allow retrieval of terms according to a case definition that may correspond to novel categories that are not currently available in the terminology. To achieve semantic reasoning with MedDRA, we have associated formal definitions with MedDRA terms in an OWL file named OntoADR, which is the result of our first step towards providing an “ontologized” version of MedDRA. MedDRA’s original five-level hierarchy was converted into a subsumption tree, and formal definitions of MedDRA terms were designed using several methods: mappings to SNOMED-CT, semi-automatic definition algorithms, or fully manual definition. This article presents the main steps of the OntoADR conception process, its structure and content, and discusses problems and limits raised by this attempt to “ontologize” MedDRA. | Formalizing MedDRA to support semantic reasoning on adverse drug reaction terms
S153204641400080X | Objective Publications are a key data source for investigator profiles and research networking systems. We developed ReCiter, an algorithm that automatically extracts bibliographies from PubMed using institutional information about the target investigators. Methods ReCiter executes a broad query against PubMed, groups the results into clusters that appear to constitute distinct author identities and selects the cluster that best matches the target investigator. Using information about investigators from one of our institutions, we compared ReCiter results to queries based on author name and institution and to citations extracted manually from the Scopus database. Five judges created a gold standard using citations of a random sample of 200 investigators. Results About half of the 10,471 potential investigators had no matching citations in PubMed, and about 45% had fewer than 70 citations. Interrater agreement (Fleiss’ kappa) for the gold standard was 0.81. Scopus achieved the best recall (sensitivity) of 0.81, while name-based queries had 0.78 and ReCiter had 0.69. ReCiter attained the best precision (positive predictive value) of 0.93 while Scopus had 0.85 and name-based queries had 0.31. Discussion ReCiter accesses the most current citation data, uses limited computational resources and minimizes manual entry by investigators. Generation of bibliographies using name-based queries will not yield high accuracy. Proprietary databases can perform well but require manual effort. Automated generation with higher recall is possible but requires additional knowledge about investigators. | Automatic generation of investigator bibliographies for institutional research networking systems
S1532046414000811 | Drug therapeutic indications and side-effects are both measurable patient phenotype changes in response to the treatment. Inferring potential drug therapeutic indications and identifying clinically interesting drug side-effects are both important and challenging tasks. Previous studies have utilized either chemical structures or protein targets to predict indications and side-effects. In this study, we compared drug therapeutic indication prediction using various information including chemical structures, protein targets and side-effects. We also compared drug side-effect prediction with various information sources including chemical structures, protein targets and therapeutic indication. Prediction performance based on 10-fold cross-validation demonstrates that drug side-effects and therapeutic indications are the most predictive information source for each other. In addition, we extracted 6706 statistically significant indication-side-effect associations from all known drug-disease and drug-side-effect relationships. We further developed a novel user interface that allows the user to interactively explore these associations in the form of a dynamic bipartite graph. Many relationship pairs provide explicit repositioning hypotheses (e.g., drugs causing postural hypotension are potential candidates for hypertension) and clear adverse-reaction watch lists (e.g., drugs for heart failure possibly cause impotence). All data sets and highly correlated disease-side-effect relationships are available at http://astro.temple.edu/~tua87106/druganalysis.html. | Exploring the associations between drug side-effects and therapeutic indications
S1532046414000823 | The anonymization of health data streams is important to protect these data against potential privacy breaches. A large number of research studies aimed at offering privacy in the context of data streams have recently been conducted. However, the techniques that have been proposed in these studies generate a significant delay during the anonymization process, since they concentrate on applying existing privacy models (e.g., k-anonymity and l-diversity) to batches of data extracted from data streams in a period of time. In this paper, we present delay-free anonymization, a framework for preserving the privacy of electronic health data streams. Unlike existing works, our method does not generate an accumulation delay, since input streams are anonymized immediately with counterfeit values. We further devise late validation for increasing the data utility of the anonymization results and managing the counterfeit values. Through experiments, we show the efficiency and effectiveness of the proposed method for the real-time release of data streams. | A framework to preserve the privacy of electronic health data streams
S1532046414000847 | Electronic health record (EHR) data show promise for deriving new ways of modeling human disease states. Although EHR researchers often use numerical values of laboratory tests as features in disease models, a great deal of information is contained in the context within which a laboratory test is taken. For example, the same numerical value of a creatinine test has different interpretation for a chronic kidney disease patient and a patient with acute kidney injury. We study whether EHR research studies are subject to biased results and interpretations if laboratory measurements taken in different contexts are not explicitly separated. We show that the context of a laboratory test measurement can often be captured by the way the test is measured through time. We perform three tasks to study the properties of these temporal measurement patterns. In the first task, we confirm that laboratory test measurement patterns provide additional information to the stand-alone numerical value. The second task identifies three measurement pattern motifs across a set of 70 laboratory tests performed for over 14,000 patients. Of these, one motif exhibits properties that can lead to biased research results. In the third task, we demonstrate the potential for biased results on a specific example. We conduct an association study of lipase test values to acute pancreatitis. We observe a diluted signal when using only a lipase value threshold, whereas the full association is recovered when properly accounting for lipase measurements in different contexts (leveraging the lipase measurement patterns to separate the contexts). Aggregating EHR data without separating distinct laboratory test measurement patterns can intermix patients with different diseases, leading to the confounding of signals in large-scale EHR analyses. This paper presents a methodology for leveraging measurement frequency to identify and reduce laboratory test biases. | Identifying and mitigating biases in EHR laboratory tests |
S1532046414000859 | Background The ability to predict acuity (patients’ care needs) would provide a powerful tool for health care managers to allocate resources. Such estimations and predictions for the care process can be produced from the vast amounts of healthcare data using information technology and computational intelligence techniques. Tactical decision-making and resource allocation may also be supported with different mathematical optimization models. Methods This study was conducted with a data set comprising electronic nursing narratives and the associated Oulu Patient Classification (OPCq) acuity. A mathematical model for the automated assignment of patient acuity scores was utilized and evaluated with the pre-processed data from 23,528 electronic patient records. The methods to predict patient acuity were based on linguistic pre-processing, vector-space text modeling, and regularized least-squares regression. Results The experimental results show that it is possible to obtain accurate predictions about patient acuity scores for the coming day based on the assigned scores and nursing notes from the previous day. Making same-day predictions leads to even better results, as access to the nursing notes for the same day boosts the predictive performance. Furthermore, textual nursing notes allow for more accurate predictions than previous acuity scores. The best results are achieved by combining both of these information sources. The developed model achieves a concordance index of 0.821 when predicting the patient acuity scores for the following day, given the scores and text recorded on the previous day. Conclusions By applying language technology to electronic patient documents it is possible to accurately predict the value of the acuity scores of the coming day based on the previous day’s assigned scores and nursing notes. | Predicting patient acuity from electronic patient records
S1532046414000860 | While the study of privacy preserving data publishing has drawn a lot of interest, some recent work has shown that existing mechanisms do not limit all inferences about individuals. This paper is a positive note in response to this finding. We point out that not all inference attacks should be countered, in contrast to all existing works known to us, and based on this we propose a model called SPLU. This model protects sensitive information, by which we refer to answers for aggregate queries with small sums, while queries with large sums are answered with higher accuracy. Using SPLU, we introduce a sanitization algorithm to protect data while maintaining high data utility for queries with large sums. Empirical results show that our method behaves as desired. | Small sum privacy and large sum utility in data publishing |
S1532046414000884 | Increasingly, confidential medical records are being stored in data centers hosted by hospitals or large companies. As sophisticated algorithms for predictive analysis on medical data continue to be developed, it is likely that, in the future, more and more computation will be done on private patient data. While encryption provides a tool for assuring the privacy of medical information, it limits the functionality for operating on such data. Conventional encryption methods used today provide only very restricted possibilities or none at all to operate on encrypted data without decrypting it first. Homomorphic encryption provides a tool for handling such computations on encrypted data, without decrypting the data, and without even needing the decryption key. In this paper, we discuss possible application scenarios for homomorphic encryption in order to ensure privacy of sensitive medical data. We describe how to privately conduct predictive analysis tasks on encrypted data using homomorphic encryption. As a proof of concept, we present a working implementation of a prediction service running in the cloud (hosted on Microsoft’s Windows Azure), which takes as input private encrypted health data, and returns the probability for suffering cardiovascular disease in encrypted form. Since the cloud service uses homomorphic encryption, it makes this prediction while handling only encrypted data, learning nothing about the submitted confidential medical data. | Private predictive analysis on encrypted medical data |
S1532046414000896 | Objective Create an automated algorithm for predicting elderly patients’ medication-related risks for readmission and validate it by comparing results with a manual analysis of the same patient population. Materials and methods Outcome and Assessment Information Set (OASIS) and medication data were reused from a previous, manual study of 911 patients from 15 Medicare-certified home health care agencies. The medication data was converted into standardized drug codes using APIs managed by the National Library of Medicine (NLM), and then integrated in an automated algorithm that calculates patients’ high risk medication regime scores (HRMRs). A comparison of the results between algorithm and manual process was conducted to determine how frequently the HRMR scores were derived which are predictive of readmission. Results HRMR scores are composed of polypharmacy (number of drugs), Potentially Inappropriate Medications (PIM) (drugs risky to the elderly), and Medication Regimen Complexity Index (MRCI) (complex dose forms, instructions or administration). The algorithm produced polypharmacy, PIM, and MRCI scores that matched with 99%, 87% and 99% of the scores, respectively, from the manual analysis. Discussion Imperfect match rates resulted from discrepancies in how drugs were classified and coded by the manual analysis vs. the automated algorithm. HRMR rules lack clarity, resulting in clinical judgments for manual coding that were difficult to replicate in the automated analysis. Conclusion The high comparison rates for the three measures suggest that an automated clinical tool could use patients’ medication records to predict their risks of avoidable readmissions. | Automation of a high risk medication regime algorithm in a home health care population |
S1532046414000902 | Background Advanced Cardiac Life Support (ACLS) is a series of team-based, sequential and time constrained interventions, requiring effective communication and coordination of activities that are performed by the care provider team on a patient undergoing cardiac arrest or respiratory failure. The state-of-the-art ACLS training is conducted in a face-to-face environment under expert supervision and suffers from several drawbacks including conflicting care provider schedules and high cost of training equipment. Objective The major objective of the study is to describe the design, implementation, and evaluation of a novel approach for delivering ACLS training to care providers using the proposed virtual reality simulator that can overcome the challenges and drawbacks imposed by the traditional face-to-face training method. Methods We compare the efficacy and performance outcomes associated with traditional ACLS training with the proposed novel approach of using a virtual reality (VR) based ACLS training simulator. One hundred and forty-eight (148) ACLS certified clinicians, translating into 26 care provider teams, were enrolled for this study. Each team was randomly assigned to one of the three treatment groups: control (traditional ACLS training), persuasive (VR ACLS training with comprehensive feedback components), or minimally persuasive (VR ACLS training with limited feedback components). The teams were tested across two different ACLS procedures that vary in the degree of task complexity: ventricular fibrillation or tachycardia (VFib/VTach) and pulseless electric activity (PEA). Results The difference in performance between control and persuasive groups was not statistically significant (P = .37 for PEA and P = .1 for VFib/VTach). However, the difference in performance between control and minimally persuasive groups was significant (P = .05 for PEA and P = .02 for VFib/VTach). The pre-post comparison of performances of the groups showed that control (P = .017 for PEA, P = .01 for VFib/VTach) and persuasive (P = .02 for PEA, P = .048 for VFib/VTach) groups improved their performances significantly, whereas the minimally persuasive group did not (P = .45 for PEA, P = .46 for VFib/VTach). Results also suggest that the benefit of persuasiveness is constrained by the potentially interruptive nature of these features. Conclusions Our results indicate that the VR-based ACLS training with proper feedback components can provide a learning experience similar to face-to-face training, and therefore could serve as a more easily accessed supplementary training tool to the traditional ACLS training. Our findings also suggest that the degree of persuasive features in VR environments has to be designed considering the interruptive nature of the feedback elements. | Collaborative virtual reality based advanced cardiac life support training simulator using virtual reality principles
S1532046414000914 | Background A myriad of new tools and algorithms have been developed to help public health professionals analyze and visualize the complex data used in infectious disease control. To better understand approaches to meet these users’ information needs, we conducted a systematic literature review focused on the landscape of infectious disease visualization tools for public health professionals, with a special emphasis on geographic information systems (GIS), molecular epidemiology, and social network analysis. The objectives of this review are to: (1) identify public health user needs and preferences for infectious disease information visualization tools; (2) identify existing infectious disease information visualization tools and characterize their architecture and features; (3) identify commonalities among approaches applied to different data types; and (4) describe tool usability evaluation efforts and barriers to the adoption of such tools. Methods We identified articles published in English from January 1, 1980 to June 30, 2013 from five bibliographic databases. Articles with a primary focus on infectious disease visualization tools, needs of public health users, or usability of information visualizations were included in the review. Results A total of 88 articles met our inclusion criteria. Users were found to have diverse needs, preferences and uses for infectious disease visualization tools, and the existing tools are correspondingly diverse. The architecture of the tools was inconsistently described, and few tools in the review discussed the incorporation of usability studies or plans for dissemination. Many studies identified concerns regarding data sharing, confidentiality and quality. Existing tools offer a range of features and functions that allow users to explore, analyze, and visualize their data, but the tools are often for siloed applications. Commonly cited barriers to widespread adoption included lack of organizational support, access issues, and misconceptions about tool use. Discussion and conclusion As the volume and complexity of infectious disease data increases, public health professionals must synthesize highly disparate data to facilitate communication with the public and inform decisions regarding measures to protect the public’s health. Our review identified several themes: consideration of users’ needs, preferences, and computer literacy; integration of tools into routine workflow; complications associated with understanding and use of visualizations; and the role of user trust and organizational support in the adoption of these tools. Interoperability also emerged as a prominent theme, highlighting challenges associated with the increasingly collaborative and interdisciplinary nature of infectious disease control and prevention. Future work should address methods for representing uncertainty and missing data to avoid misleading users as well as strategies to minimize cognitive overload. | Visualization and analytics tools for infectious disease epidemiology: A systematic review |
S1532046414000938 | Introduction Medical documentation is a time-consuming task and there is a growing number of documentation requirements. In order to improve documentation, harmonization and standardization based on existing forms and medical concepts are needed. Systematic analysis of forms can contribute to standardization building upon new methods for automated comparison of forms. Objectives of this research are quantification and comparison of data elements for breast and prostate cancer to discover similarities, differences and reuse potential between documentation sets. In addition, common data elements for each entity should be identified by automated comparison of forms. Materials and methods A collection of 57 forms regarding prostate and breast cancer from quality management, registries, clinical documentation of two university hospitals (Erlangen, Münster), research datasets, certification requirements and trial documentation were transformed into the Operational Data Model (ODM). These ODM-files were semantically enriched with concept codes and analyzed with the compareODM algorithm. Comparison results were aggregated and lists of common concepts were generated. Grid images, dendrograms and spider charts were used for illustration. Results Overall, 1008 data elements for prostate cancer and 1232 data elements for breast cancer were analyzed. Average routine documentation consists of 390 data elements per disease entity and site. Comparisons of forms identified up to 20 comparable data elements in cancer conference forms from both hospitals. Urology forms contain up to 53 comparable data elements with quality management and up to 21 with registry forms. Urology documentation of both hospitals contains up to 34 comparable items with international common data elements. Clinical documentation sets share up to 24 comparable data elements with trial documentation. Within clinical documentation administrative items are most common comparable items. Selected common medical concepts are contained in up to 16 forms. Discussion The amount of documentation for cancer patients is enormous. There is an urgent need for standardized structured single source documentation. Semantic annotation is time-consuming, but enables automated comparison between different form types, hospital sites and even languages. This approach can help to identify common data elements in medical documentation. Standardization of forms and building up forms on the basis of coding systems is desirable. Several comparable data elements within the analyzed forms demonstrate the harmonization potential, which would enable better data reuse. Conclusion Identifying common data elements in medical forms from different settings with systematic and automated form comparison is feasible. | The need for harmonized structured documentation and chances of secondary use – Results of a systematic analysis with automated form comparison for prostate and breast cancer |