FileName (stringlengths 17–17) | Abstract (stringlengths 163–6.01k) | Title (stringlengths 12–421) |
---|---|---|
S153204641400094X | Background and purpose Poor device design that fails to adequately account for user needs, cognition, and behavior is often responsible for use errors resulting in adverse events. This poor device design is also often latent, and could be responsible for “No Fault Found” (NFF) reporting, in which medical devices sent for repair by clinical users are found to be operating as intended. Unresolved NFF reports may contribute to incident under reporting, clinical user frustration, and biomedical engineering technologist inefficacy. This study uses human factors engineering methods to investigate the relationship between NFF reporting frequency and device usability. Material and methods An analysis of medical equipment maintenance data was conducted to identify devices with a high NFF reporting frequency. Subsequently, semi-structured interviews and heuristic evaluations were performed in order to identify potential usability issues. Finally, usability testing was conducted in order to validate that latent usability related design faults result in a higher frequency of NFF reporting. Results The analysis of medical equipment maintenance data identified six devices with a high NFF reporting frequency. Semi-structured interviews, heuristic evaluations and usability testing revealed that usability issues caused a significant portion of the NFF reports. Other factors suspected to contribute to increased NFF reporting include accessory issues, intermittent faults and environmental issues. Usability testing conducted on three of the devices revealed 23 latent usability related design faults. Conclusions These findings demonstrate that latent usability related design faults manifest themselves as an increase in NFF reporting and that devices containing usability related design faults can be identified through an analysis of medical equipment maintenance data. | The use of fault reporting of medical equipment to identify latent design flaws |
S1532046414000951 | Interpretation of the cardiotocogram (CTG) is a difficult task, since its evaluation is complicated by great inter- and intra-individual variability. Previous studies have predominantly analyzed clinicians’ agreement on CTG evaluation based on quantitative measures (e.g. the kappa coefficient) that do not offer any insight into clinical decision making. In this paper we aim to examine the agreement on evaluation in detail and provide a data-driven analysis of clinical evaluation. For this study, nine obstetricians provided clinical evaluations of 634 CTG recordings (each ca. 60 min long). We studied the agreement on evaluation and its dependence on the increasing number of clinicians involved in the final decision. We showed that, despite the large number of clinicians, agreement on CTG evaluations is difficult to reach, the main reason being the inherent inter- and intra-observer variability of CTG evaluation. A latent class model provides a better and more natural way to aggregate CTG evaluations than majority voting, especially for a larger number of clinicians. A significant improvement was reached in particular for the pathological evaluation, giving new insight into the process of CTG evaluation. Further, analysis of the latent class model revealed that clinicians unconsciously use four classes when evaluating CTG recordings, despite the fact that the clinical evaluation was based on the FIGO guidelines, in which three classes are defined. | Analysis of obstetricians’ decision making on CTG recordings |
S1532046414000975 | Cost-benefit analysis is a prerequisite for making good business decisions. In the business environment, companies intend to make profit from maximizing information utility of published data while having an obligation to protect individual privacy. In this paper, we quantify the trade-off between privacy and data utility in health data publishing in terms of monetary value. We propose an analytical cost model that can help health information custodians (HICs) make better decisions about sharing person-specific health data with other parties. We examine relevant cost factors associated with the value of anonymized data and the possible damage cost due to potential privacy breaches. Our model guides an HIC to find the optimal value of publishing health data and could be utilized for both perturbative and non-perturbative anonymization techniques. We show that our approach can identify the optimal value for different privacy models, including K-anonymity, LKC-privacy, and ε-differential privacy, under various anonymization algorithms and privacy parameters through extensive experiments on real-life data. | Quantifying the costs and benefits of privacy-preserving health data publishing |
S1532046414000987 | Objective Most of the information in Electronic Health Records (EHRs) is represented in free textual form. Practitioners searching EHRs need to phrase their queries carefully, as the record might use synonyms or other related words. In this paper we show that an automatic query expansion method based on the Unified Medicine Language System (UMLS) Metathesaurus improves the results of a robust baseline when searching EHRs. Materials and methods The method uses a graph representation of the lexical units, concepts and relations in the UMLS Metathesaurus. It is based on random walks over the graph, which start on the query terms. Random walks are a well-studied discipline in both Web and Knowledge Base datasets. Results Our experiments over the TREC Medical Record track show improvements in both the 2011 and 2012 datasets over a strong baseline. Discussion Our analysis shows that the success of our method is due to the automatic expansion of the query with extra terms, even when they are not directly related in the UMLS Metathesaurus. The terms added in the expansion go beyond simple synonyms, and also add other kinds of topically related terms. Conclusions Expansion of queries using related terms in the UMLS Metathesaurus beyond synonymy is an effective way to overcome the gap between query and document vocabularies when searching for patient cohorts. | Improving search over Electronic Health Records using UMLS-based query expansion through random walks |
S1532046414000999 | Continuous data collection and analysis have been shown essential to achieving improvement in healthcare. However, the data required for local improvement initiatives are often not readily available from hospital Electronic Health Record (EHR) systems or not routinely collected. Furthermore, improvement teams are often restricted in time and funding thus requiring inexpensive and rapid tools to support their work. Hence, the informatics challenge in healthcare local improvement initiatives consists of providing a mechanism for rapid modelling of the local domain by non-informatics experts, including performance metric definitions, and grounded in established improvement techniques. We investigate the feasibility of a model-driven software approach to address this challenge, whereby an improvement model designed by a team is used to automatically generate required electronic data collection instruments and reporting tools. To that goal, we have designed a generic Improvement Data Model (IDM) to capture the data items and quality measures relevant to the project, and constructed Web Improvement Support in Healthcare (WISH), a prototype tool that takes user-generated IDM models and creates a data schema, data collection web interfaces, and a set of live reports, based on Statistical Process Control (SPC) for use by improvement teams. The software has been successfully used in over 50 improvement projects, with more than 700 users. We present in detail the experiences of one of those initiatives, Chronic Obstructive Pulmonary Disease project in Northwest London hospitals. The specific challenges of improvement in healthcare are analysed and the benefits and limitations of the approach are discussed. | Model-driven approach to data collection and reporting for quality improvement |
S1532046414001002 | Devising an accurate prediction algorithm that can predict the difficulty level of cases for individuals and then select suitable cases for them is essential to the development of a personalized training system. In this paper, we propose a novel approach, called Performance Weighted Collaborative Filtering (PWCF), to predict the difficulty level of each case for individuals. The main idea of PWCF is to assign an optimal weight to each rating used for predicting the difficulty level of a target case for a trainee, rather than using an equal weight for all ratings as in traditional collaborative filtering methods. The assigned weight is a function of the performance level of the trainee at which the rating was made. The PWCF method and the traditional method are compared using two datasets. The experimental data are then evaluated by means of the MAE metric. Our experimental results show that PWCF outperforms the traditional methods by 8.12% and 17.05%, respectively, over the two datasets, in terms of prediction precision. This suggests that PWCF is a viable method for the development of personalized training systems in radiology education. | A Performance Weighted Collaborative Filtering algorithm for personalized radiology education |
S1532046414001026 | One of the major bottlenecks in applying conventional neural networks to the medical field is that they are very difficult to interpret in a physically meaningful way, because the learned knowledge is numerically encoded in the trained synaptic weights. In one of our previous works, we proposed a class of Hyper-Rectangular Composite Neural Networks (HRCNNs) whose synaptic weights can be interpreted as a set of crisp If-Then rules; however, a trained HRCNN may result in some ineffective If-Then rules which can only justify very few positive examples (i.e., poor generalization). This motivated us to propose a PSO-based Fuzzy Hyper-Rectangular Composite Neural Network (PFHRCNN) which applies particle swarm optimization (PSO) to trim the rules generated by a trained HRCNN while the recognition performance is not degraded or may even be improved. The performance of the proposed PFHRCNN is demonstrated on three benchmark medical databases including the liver disorders data set, the breast cancer data set and the Parkinson’s disease data set. | A PSO-based rule extractor for medical diagnosis |
S1532046414001191 | The Health Insurance Portability and Accountability Act (HIPAA) Safe Harbor method requires removal of 18 types of protected health information (PHI) from clinical documents to be considered “de-identified” prior to use for research purposes. Human review of PHI elements from a large corpus of clinical documents can be tedious and error-prone. Indeed, multiple annotators may be required to consistently redact information that represents each PHI class. Automated de-identification has the potential to improve annotation quality and reduce annotation time, for instance through machine-assisted annotation, which combines de-identification system outputs, used as pre-annotations, with an interactive annotation interface so that annotators “curate” PHI annotations rather than annotating raw clinical documents from “scratch”. In order to assess whether machine-assisted annotation improves the reliability and accuracy of the reference standard and reduces annotation effort, we conducted an annotation experiment. In this annotation study, we assessed the generalizability of the VA Consortium for Healthcare Informatics Research (CHIR) annotation schema and guidelines applied to a corpus of publicly available clinical documents called MTSamples. Specifically, our goals were to (1) characterize a heterogeneous corpus of clinical documents manually annotated for risk-ranked PHI and other annotation types (clinical eponyms and person relations), (2) evaluate how well annotators apply the CHIR schema to the heterogeneous corpus, (3) compare whether machine-assisted annotation (experiment) improves annotation quality and reduces annotation time compared to manual annotation (control), and (4) assess the change in quality of reference standard coverage with each added annotator’s annotations. | Evaluating the effects of machine pre-annotation and an interactive annotation interface on manual de-identification of clinical text |
S1532046414001208 | Institutional Review Boards (IRBs) are a critical component of clinical research and can become a significant bottleneck due to the dramatic increase in both the volume and complexity of clinical research. Despite the interest in developing clinical research informatics (CRI) systems and supporting data standards to increase clinical research efficiency and interoperability, informatics research in the IRB domain has not attracted much attention in the scientific community. The lack of standardized and structured application forms across different IRBs causes inefficient and inconsistent proposal reviews and cumbersome workflows. These issues are even more prominent in multi-institutional clinical research, which is rapidly becoming the norm. This paper proposes and evaluates a domain analysis model for electronic IRB (eIRB) systems, paving the way for streamlined clinical research workflow via integration with other CRI systems and improved IRB application throughput via computer-assisted decision support. | A domain analysis model for eIRB systems: Addressing the weak link in clinical research informatics |
S153204641400121X | Objectives New DNA sequencing technologies have revolutionized the search for genetic disruptions. Targeted sequencing of all protein coding regions of the genome, called exome analysis, is actively used in research-oriented genetics clinics, with the transition to exomes as a standard procedure underway. This transition is challenging; identification of potentially causal mutation(s) amongst ∼10⁶ variants requires specialized computation in combination with expert assessment. This study analyzes the usability of user interfaces for clinical exome analysis software. There are two study objectives: (1) To ascertain the key features of successful user interfaces for clinical exome analysis software based on the perspective of expert clinical geneticists, (2) To assess user-system interactions in order to reveal strengths and weaknesses of existing software, inform future design, and accelerate the clinical uptake of exome analysis. Methods Surveys, interviews, and cognitive task analysis were performed for the assessment of two next-generation exome sequence analysis software packages. The subjects included ten clinical geneticists who interacted with the software packages using the “think aloud” method. Subjects’ interactions with the software were recorded in their clinical office within an urban research and teaching hospital. All major user interface events (from the user interactions with the packages) were time-stamped and annotated with coding categories to identify usability issues in order to characterize desired features and deficiencies in the user experience. Results We detected 193 usability issues, the majority of which concern interface layout and navigation, and the resolution of reports. Our study highlights gaps in specific software features typical within exome analysis. The clinicians perform best when the flow of the system is structured into well-defined yet customizable layers for incorporation within the clinical workflow. The results highlight opportunities to dramatically accelerate clinician analysis and interpretation of patient genomic data. Conclusion We present the first application of usability methods to evaluate software interfaces in the context of exome analysis. Our results highlight how the study of user responses can lead to identification of usability issues and challenges and reveal software reengineering opportunities for improving clinical next-generation sequencing analysis. While the evaluation focused on two distinctive software tools, the results are general and should inform active and future software development for genome analysis software. As large-scale genome analysis becomes increasingly common in healthcare, it is critical that efficient and effective software interfaces are provided to accelerate clinical adoption of the technology. Implications for improved design of such applications are discussed. | Usability study of clinical exome analysis software: Top lessons learned and recommendations |
S1532046414001221 | Mobile applications have proven to be promising tools for supporting people in adhering to their health goals. Although coaching and reminder apps abound, few of them are based on established theories of behavior change. In the present work, a behavior change support system is presented that uses a computational model based on multiple psychological theories of behavior change. The system determines the user’s reason for non-adherence using a mobile phone app and an online lifestyle diary. The user automatically receives generated messages with persuasive, tailored content. The system was designed to support chronic patients with type 2 diabetes, HIV, and cardiovascular disease, but can be applied to many health and lifestyle domains. The main focus of this work is the development of the model and the underlying reasoning method. Furthermore, the implementation of the system and some preliminary results of its functioning will be discussed. | Intelligent mobile support for therapy adherence and behavior change |
S1532046414001233 | Machine learning methods usually assume that training data and test data are drawn from the same distribution. However, this assumption often cannot be satisfied in the task of clinical concept extraction. The main aim of this paper was to use training data from one institution to build a concept extraction model for data from another institution with a different distribution. An instance-based transfer learning method, TrAdaBoost, was applied in this work. To prevent the occurrence of a negative transfer phenomenon with TrAdaBoost, we integrated it with Bagging, which provides a “softer” weights update mechanism with only a tiny amount of training data from the target domain. Two data sets named BETH and PARTNERS from the 2010 i2b2/VA challenge as well as BETHBIO, a data set we constructed ourselves, were employed to show the effectiveness of our work’s transfer ability. Our method outperforms the baseline model by 2.3% and 4.4% when the baseline model is trained by training data that are combined from the source domain and the target domain in two experiments of BETH vs. PARTNERS and BETHBIO vs. PARTNERS, respectively. Additionally, confidence intervals for the performance metrics suggest that our method’s results have statistical significance. Moreover, we explore the applicability of our method for further experiments. With our method, only a tiny amount of labeled data from the target domain is required to build a concept extraction model that produces better performance. | Transfer learning based clinical concept extraction on data from multiple sources |
S1532046414001245 | The DDIExtraction Shared Task 2013 is the second edition of the DDIExtraction Shared Task series, a community-wide effort to promote the implementation and comparative assessment of natural language processing (NLP) techniques in the pharmacovigilance domain, in particular to address the extraction of drug–drug interactions (DDI) from biomedical texts. This edition has been the first attempt to compare the performance of Information Extraction (IE) techniques specific to each of the basic steps of the DDI extraction pipeline. To attain this aim, two main tasks were proposed: the recognition and classification of pharmacological substances and the detection and classification of drug–drug interactions. DDIExtraction 2013 was held from January to June 2013 and attracted wide attention, with a total of 14 teams from 7 different countries (6 of the teams participated in the drug name recognition task, while 8 participated in the DDI extraction task). For the task of the recognition and classification of pharmacological names, the best system achieved an F1 of 71.5%, while, for the detection and classification of DDIs, the best result was an F1 of 65.1%. The results show advances in the state of the art and demonstrate that significant challenges remain to be resolved. This paper focuses on the second task (extraction of DDIs) and examines its main challenges, which have yet to be resolved. | Lessons learnt from the DDIExtraction-2013 Shared Task |
S1532046414001269 | The dissemination of Electronic Health Record (EHR) data, beyond the originating healthcare institutions, can enable large-scale, low-cost medical studies that have the potential to improve public health. Thus, funding bodies, such as the National Institutes of Health (NIH) in the U.S., encourage or require the dissemination of EHR data, and a growing number of innovative medical investigations are being performed using such data. However, simply disseminating EHR data, after removing identifying information, may risk privacy, as patients can still be linked with their record, based on diagnosis codes. This paper proposes the first approach that prevents this type of data linkage using disassociation, an operation that transforms records by splitting them into carefully selected subsets. Our approach preserves privacy with significantly lower data utility loss than existing methods and does not require data owners to specify diagnosis codes that may lead to identity disclosure, as these methods do. Consequently, it can be employed when data need to be shared broadly and be used in studies, beyond the intended ones. Through extensive experiments using EHR data, we demonstrate that our method can construct data that are highly useful for supporting various types of clinical case count studies and general medical analysis tasks. | Disassociation for electronic health record privacy |
S1532046414001270 | Objective This research is motivated by the issue of classifying illnesses of chronically ill patients for decision support in clinical settings. Our main objective is to propose multi-label classification of multivariate time series contained in medical records of chronically ill patients, by means of quantization methods, such as bag of words (BoW), and multi-label classification algorithms. Our second objective is to compare supervised dimensionality reduction techniques to state-of-the-art multi-label classification algorithms. The hypothesis is that kernel methods and locality preserving projections make such algorithms good candidates to study multi-label medical time series. Methods We combine BoW and supervised dimensionality reduction algorithms to perform multi-label classification on health records of chronically ill patients. The considered algorithms are compared with state-of-the-art multi-label classifiers in two real world datasets. Portavita dataset contains 525 diabetes type 2 (DT2) patients, with co-morbidities of DT2 such as hypertension, dyslipidemia, and microvascular or macrovascular issues. MIMIC II dataset contains 2635 patients affected by thyroid disease, diabetes mellitus, lipoid metabolism disease, fluid electrolyte disease, hypertensive disease, thrombosis, hypotension, chronic obstructive pulmonary disease (COPD), liver disease and kidney disease. The algorithms are evaluated using multi-label evaluation metrics such as hamming loss, one error, coverage, ranking loss, and average precision. Results Non-linear dimensionality reduction approaches behave well on medical time series quantized using the BoW algorithm, with results comparable to state-of-the-art multi-label classification algorithms. Chaining the projected features has a positive impact on the performance of the algorithm with respect to pure binary relevance approaches. Conclusions The evaluation highlights the feasibility of representing medical health records using the BoW for multi-label classification tasks. The study also highlights that dimensionality reduction algorithms based on kernel methods, locality preserving projections or both are good candidates to deal with multi-label classification tasks in medical time series with many missing values and high label density. | Multi-label classification of chronically ill patients with bag of words and supervised dimensionality reduction algorithms |
S1532046414001282 | This study presents a Web platform (http://3dfd.ujaen.es) for computing and analyzing the 3D fractal dimension (3DFD) from volumetric data in an efficient, visual and interactive way. The Web platform is specially designed for working with magnetic resonance images (MRIs) of the brain. The program estimates the 3DFD by calculating the 3D box-counting of the entire volume of the brain, and also of its 3D skeleton. All of this is done in a graphical, fast and optimized way by using novel technologies like CUDA and WebGL. The usefulness of the Web platform presented is demonstrated by its application in a case study where an analysis and characterization of groups of 3D MR images is performed for three neurodegenerative diseases: Multiple Sclerosis, Intrauterine Growth Restriction and Alzheimer’s disease. To the best of our knowledge, this is the first Web platform that allows the users to calculate, visualize, analyze and compare the 3DFD from MRI images in the cloud. | A Web platform for the interactive visualization and analysis of the 3D fractal dimension of MRI data |
S1532046414001361 | Our conceptual model demonstrates our goal to investigate the impact of clinical decision support (CDS) utilization on cancer screening improvement strategies in the community health care (CHC) setting. We employed a dual modeling technique using both statistical and computational modeling to evaluate impact. Our statistical model used the Spearman’s Rho test to evaluate the strength of the relationship between our proximal outcome measures (CDS utilization) and our distal outcome measure (provider self-reported cancer screening improvement). Our computational model relied on network evolution theory and made use of a tool called Construct-TM to model the use of CDS measured by the rate of organizational learning. We used previously collected survey data from the community health centers Cancer Health Disparities Collaborative (HDCC). Our intent is to demonstrate the added value gained by using a computational modeling tool in conjunction with a statistical analysis when evaluating the impact of a health information technology, in the form of CDS, on health care quality process outcomes such as facility-level screening improvement. Significant simulated disparities in organizational learning over time were observed between community health centers beginning the simulation with high and low clinical decision support capability. | Using computational modeling to assess the impact of clinical decision support on cancer screening improvement strategies within the community health centers |
S1532046414001373 | Motivation One of the important aspects of the data classification problem lies in making the most appropriate selection of features. The set of variables should be small and, at the same time, should provide reliable discrimination of the classes. The method for the discriminating power evaluation that enables a comparison between different sets of variables will be useful in the search for the set of variables. Results A new approach to feature selection is presented. Two methods of evaluation of the data discriminating power of a feature set are suggested. Both of the methods implement self-organizing maps (SOMs) and the newly introduced exponents of the degree of data clusterization on the SOM. The first method is based on the comparison of intraclass and interclass distances on the map. Another method concerns the evaluation of the relative number of best matching unit’s (BMUs) nearest neighbors of the same class. Both methods make it possible to evaluate the discriminating power of a feature set in cases when this set provides nonlinear discrimination of the classes. Availability Current algorithms in program code can be downloaded for free at http://mekler.narod.ru/Science/Articles_support.html, as well as the supporting data files. | Quality assessment of data discrimination using self-organizing maps |
S1532046414001385 | Systems approaches to studying drug-side-effect (drug-SE) associations are emerging as an active research area for drug target discovery, drug repositioning, and drug toxicity prediction. However, currently available drug-SE association databases are far from complete. Herein, in an effort to increase the data completeness of current drug-SE relationship resources, we present an automatic learning approach to accurately extract drug-SE pairs from the vast amount of published biomedical literature, a rich knowledge source of side effect information for commercial, experimental, and even failed drugs. For the text corpus, we used 119,085,682 MEDLINE sentences and their parse trees. We used known drug-SE associations derived from US Food and Drug Administration (FDA) drug labels as prior knowledge to find relevant sentences and parse trees. We extracted syntactic patterns associated with drug-SE pairs from the resulting set of parse trees. We developed pattern-ranking algorithms to prioritize drug-SE-specific patterns. We then selected a set of patterns with both high precision and high recall in order to extract drug-SE pairs from the entire MEDLINE. In total, we extracted 38,871 drug-SE pairs from MEDLINE using the learned patterns, the majority of which have not been captured in FDA drug labels to date. On average, our knowledge-driven pattern-learning approach to extracting drug-SE pairs from MEDLINE achieved a precision of 0.833, a recall of 0.407, and an F1 of 0.545. We compared our approach to a support vector machine (SVM)-based machine learning approach and a co-occurrence statistics-based approach. We show that the pattern-learning approach is largely complementary to the SVM- and co-occurrence-based approaches, with significantly higher precision and F1 but lower recall. We demonstrated by correlation analysis that the extracted drug side effects correlate positively with drug targets, metabolism, and indications. | Automatic construction of a large-scale and accurate drug-side-effect association knowledge base from biomedical literature |
S1532046414001397 | The ease with which whole genome sequence (WGS) information can be obtained is rapidly approaching the point where it can become useful for routine clinical care. However, significant barriers will inhibit widespread adoption unless clinicians are able to effectively integrate this information into patient care and decision-making. Electronic health records (EHR) and clinical decision support (CDS) systems may play a critical role in this integration. A previously published technical desiderata focused primarily on the integration of genomic data into the EHR. This manuscript extends the previous desiderata by specifically addressing needs related to the integration of genomic information with CDS. The objective of this study is to develop and validate a guiding set of technical desiderata for supporting the clinical use of WGS through CDS. A panel of domain experts in genomics and CDS developed a proposed set of seven additional requirements. These desiderata were reviewed by 63 experts in genomics and CDS through an online survey and refined based on the experts’ comments. These additional desiderata provide important guiding principles for the technical development of CDS capabilities for the clinical use of WGS information. | Technical desiderata for the integration of genomic data with clinical decision support |
S1532046414001403 | The dissemination of Electronic Health Records (EHRs) can be highly beneficial for a range of medical studies, spanning from clinical trials to epidemic control studies, but it must be performed in a way that preserves patients’ privacy. This is not straightforward, because the disseminated data need to be protected against several privacy threats, while remaining useful for subsequent analysis tasks. In this work, we present a survey of algorithms that have been proposed for publishing structured patient data, in a privacy-preserving way. We review more than 45 algorithms, derive insights on their operation, and highlight their advantages and disadvantages. We also provide a discussion of some promising directions for future research in this area. | Publishing data from electronic health records while preserving privacy: A survey of algorithms |
S1532046414001415 | There are now many domains where information is recorded over a period of time, leading to sequences of data known as time series. In many domains, such as medicine, time series analysis requires focusing on certain regions of interest, known as events, rather than analyzing the whole time series. In this paper, we propose a framework for knowledge discovery in both one-dimensional and multidimensional time series containing events. We show how our approach can be used to classify medical time series by means of a process that identifies events in time series, generates time series reference models of representative events and compares two time series by analyzing the events they have in common. We have applied our framework to time series generated in the areas of electroencephalography (EEG) and stabilometry. Framework performance was evaluated in terms of classification accuracy, and the results confirmed that the proposed schema has potential for classifying EEG and stabilometric signals. The proposed framework is useful for discovering knowledge from medical time series containing events, such as stabilometric and electroencephalographic time series. These results would be equally applicable to other medical domains generating iconographic time series, such as, for example, electrocardiography (ECG). | A general framework for time series data mining based on event analysis: Application to the medical domains of electroencephalography and stabilometry |
S1532046414001427 | Biomedical taxonomies, thesauri and ontologies, such as the International Classification of Diseases (a taxonomy) or the National Cancer Institute Thesaurus (an OWL-based ontology), play a critical role in acquiring, representing and processing information about human health. With increasing adoption and relevance, biomedical ontologies have also significantly increased in size. For example, the 11th revision of the International Classification of Diseases, which is currently under active development by the World Health Organization, contains nearly 50,000 classes representing a vast variety of different diseases and causes of death. This evolution in terms of size was accompanied by an evolution in the way ontologies are engineered. Because no single individual has the expertise to develop such large-scale ontologies, ontology-engineering projects have evolved from small-scale efforts involving just a few domain experts to large-scale projects that require effective collaboration between dozens or even hundreds of experts, practitioners and other stakeholders. Understanding the way these different stakeholders collaborate will enable us to improve editing environments that support such collaborations. In this paper, we uncover how large ontology-engineering projects, such as the International Classification of Diseases in its 11th revision, unfold by analyzing usage logs of five different biomedical ontology-engineering projects of varying sizes and scopes using Markov chains. We discover intriguing interaction patterns (e.g., which properties users frequently change after specific given ones) that suggest that large collaborative ontology-engineering projects are governed by a few general principles that determine and drive development. From our analysis, we identify commonalities and differences between different projects that have implications for project managers, ontology editors, developers and contributors working on collaborative ontology-engineering projects and tools in the biomedical domain. | Discovering Beaten Paths in Collaborative Ontology-Engineering Projects using Markov Chains |
S1532046414001439 | In systematic reviews, the growing number of published studies imposes a significant screening workload on reviewers. Active learning is a promising approach to reduce the workload by automating some of the screening decisions, but it has been evaluated for only a limited number of disciplines. The suitability of applying active learning to complex topics in disciplines such as social science has not been studied, and the selection of useful criteria and enhancements to address the data imbalance problem in systematic reviews remains an open problem. We applied active learning with two criteria (certainty and uncertainty) and several enhancements in both clinical medicine and social science (specifically, public health) areas, and compared the results in both. The results show that the certainty criterion is useful for finding relevant documents, and weighting positive instances is promising for overcoming the data imbalance problem in both data sets. Latent Dirichlet allocation (LDA) is also shown to be promising when little manually-assigned information is available. Active learning is effective for complex topics, although its efficiency is limited due to the difficulties of text classification. The most promising criterion and weighting method are the same regardless of the review topic, and unsupervised techniques like LDA have the potential to boost the performance of active learning without manual annotation. | Reducing systematic review workload through certainty-based screening |
S1532046414001440 | Epilepsy is a common serious neurological disorder with a complex set of possible phenotypes ranging from pathologic abnormalities to variations in the electroencephalogram. This paper presents a system called Phenotype Extraction in Epilepsy (PEEP) for extracting complex epilepsy phenotypes and their correlated anatomical locations from clinical discharge summaries, a primary data source for this purpose. PEEP generates candidate phenotype and anatomical location pairs by embedding a named entity recognition method, based on the Epilepsy and Seizure Ontology, into the National Library of Medicine’s MetaMap program. Such candidate pairs are further processed using a correlation algorithm. The derived phenotypes and correlated locations have been used for cohort identification with an integrated ontology-driven visual query interface. To evaluate the performance of PEEP, 400 de-identified discharge summaries were used for development and an additional 262 were used as test data. PEEP achieved a micro-averaged precision of 0.924, recall of 0.931, and F1-measure of 0.927 for extracting epilepsy phenotypes. The performance on the extraction of correlated phenotypes and anatomical locations shows a micro-averaged F1-measure of 0.856 (Precision: 0.852, Recall: 0.859). The evaluation demonstrates that PEEP is an effective approach to extracting complex epilepsy phenotypes for cohort identification. | Complex epilepsy phenotype extraction from narrative clinical discharge summaries |
S1532046414001452 | Background Design patterns, in the context of software development and ontologies, provide generalized approaches and guidance to solving commonly occurring problems, or addressing common situations typically informed by intuition, heuristics and experience. While the biomedical literature contains broad coverage of specific phenotype algorithm implementations, no work to date has attempted to generalize common approaches into design patterns, which may then be distributed to the informatics community to efficiently develop more accurate phenotype algorithms. Methods Using phenotyping algorithms stored in the Phenotype KnowledgeBase (PheKB), we conducted an independent iterative review to identify recurrent elements within the algorithm definitions. We extracted and generalized recurrent elements in these algorithms into candidate patterns. The authors then assessed the candidate patterns for validity by group consensus, and annotated them with attributes. Results A total of 24 electronic Medical Records and Genomics (eMERGE) phenotypes available in PheKB as of 1/25/2013 were downloaded and reviewed. From these, a total of 21 phenotyping patterns were identified, which are available as an online data supplement. Conclusions Repeatable patterns within phenotyping algorithms exist, and when codified and cataloged may help to educate both experienced and novice algorithm developers. The dissemination and application of these patterns has the potential to decrease the time to develop algorithms, while improving portability and accuracy. | Design patterns for the development of electronic health record-driven phenotype extraction algorithms |
S1532046414001464 | The adoption of ITs by medical organisations makes possible the compilation of large amounts of healthcare data, which quite often need to be released to third parties for research or business purposes. Much of these data are of a sensitive nature, because they may include patient-related documents such as electronic healthcare records. In order to protect the privacy of individuals, several pieces of legislation on healthcare data management, which state the kind of information that should be protected, have been defined. Traditionally, to comply with current legislation, a manual redaction process is applied to patient-related documents in order to remove or black-out sensitive terms. This process is costly and time-consuming and has the undesired side effect of severely reducing the utility of the released content. Automatic methods available in the literature usually propose ad-hoc solutions that are limited to protecting specific types of structured information (e.g. e-mail addresses, social security numbers, etc.); as a result, they are hardly applicable to the sensitive entities stated in current regulations that do not present those structural regularities (e.g. diseases, symptoms, treatments, etc.). To tackle these limitations, in this paper we propose an automatic sanitisation method for textual medical documents (e.g. electronic healthcare records) that is able to protect, regardless of their structure, sensitive entities (e.g. diseases) and also those semantically related terms (e.g. symptoms) that may disclose the former ones. Contrary to redaction schemes based on term removal, our approach improves the utility of the protected output by replacing sensitive terms with appropriate generalisations retrieved from several medical and general-purpose knowledge bases. Experiments conducted on highly sensitive documents and in coherency with current regulations on healthcare data privacy show promising results in terms of the practical privacy and utility of the protected output. | Utility-preserving privacy protection of textual healthcare documents |
S1532046414001476 | Objective The amount of information available to clinicians and clinical researchers is growing exponentially. Text summarization reduces information in an attempt to enable users to find and understand relevant source texts more quickly and effortlessly. In recent years, substantial research has been conducted to develop and evaluate various summarization techniques in the biomedical domain. The goal of this study was to systematically review recently published research on summarization of textual documents in the biomedical domain. Materials and methods MEDLINE (2000 to October 2013), the IEEE Digital Library, and the ACM Digital Library were searched. Investigators independently screened and abstracted studies that examined text summarization techniques in the biomedical domain. Information was derived from the selected articles on five dimensions: input, purpose, output, method and evaluation. Results Of 10,786 studies retrieved, 34 (0.3%) met the inclusion criteria. Natural language processing (17; 50%) and a hybrid technique comprising statistical, natural language processing and machine learning methods (15; 44%) were the most common summarization approaches. Most studies (28; 82%) conducted an intrinsic evaluation. Discussion This is the first systematic review of text summarization in the biomedical domain. The study identified research gaps and provides recommendations for guiding future research on biomedical text summarization. Conclusion Recent research has focused on hybrid techniques comprising statistical, language processing and machine learning techniques. Further research is needed on the application and evaluation of text summarization in real research or patient care settings. | Text summarization in the biomedical domain: A systematic review of recent research |
S1532046414001488 | The rapidly increasing availability of electronic health records (EHRs) from multiple heterogeneous sources has spearheaded the adoption of data-driven approaches for improved clinical research, decision making, prognosis, and patient management. Unfortunately, EHR data do not always directly and reliably map to the medical concepts that clinical researchers need or use. Some recent studies have focused on EHR-derived phenotyping, which aims at mapping the EHR data to specific medical concepts; however, most of these approaches require labor-intensive supervision from experienced clinical professionals. Furthermore, existing approaches are often disease-centric and specialized to the idiosyncrasies of the information technology and/or business practices of a single healthcare organization. In this paper, we propose Limestone, a nonnegative tensor factorization method to derive phenotype candidates with virtually no human supervision. Limestone represents the data source interactions naturally using tensors (a generalization of matrices). In particular, we investigate the interaction of diagnoses and medications among patients. The resulting tensor factors are reported as phenotype candidates that automatically reveal patient clusters on specific diagnoses and medications. Using the proposed method, multiple phenotypes can be identified simultaneously from data. We demonstrate the capability of Limestone on a cohort of 31,815 patient records from the Geisinger Health System. The dataset spans 7 years of longitudinal patient records and was initially constructed for a heart failure onset prediction study. Our experiments demonstrate the robustness, stability, and conciseness of Limestone-derived phenotypes. Our results show that using only 40 phenotypes, we can outperform the original 640 features (169 diagnosis categories and 471 medication types) to achieve an area under the receiver operating characteristic curve (AUC) of 0.720 (95% CI 0.715 to 0.725). Moreover, in consultation with a medical expert, we confirmed that 82% of the top 50 candidates automatically extracted by Limestone are clinically meaningful. | Limestone: High-throughput candidate phenotype generation via tensor factorization |
S153204641400149X | One of the main reasons for the low adoption rate of telemedicine systems is poor usability. An aspect that influences usability during the reporting of findings is the input mode, e.g., whether a free-text (FT) or a structured report (SR) interface is employed. The objective of our study is to compare the usability of FT and SR telemedicine systems, specifically in terms of user satisfaction, efficiency and general usability. We comparatively evaluate the usability of these two input modes in a telecardiology system for issuing electrocardiography reports in the context of a statewide telemedicine system in Brazil with more than 350,000 performed tele-electrocardiography examinations. We adopted a multiple-method research strategy, applying three different kinds of usability evaluations: user satisfaction was evaluated through interviews with seven medical professionals using the System Usability Scale (SUS) questionnaire and specific questions related to adequacy and user experience. Efficiency was evaluated by estimating execution time using the Keystroke-Level Model (KLM). General usability was assessed based on the conformity of the systems to a set of e-health-specific usability heuristics. The results of this comparison provide a first indication that a structured report (SR) input mode for such a system is more satisfactory and efficient, with larger conformity to usability heuristics, than free-text (FT) input. User satisfaction using the SUS questionnaire was scored on average at 58.8 and 77.5 points for the FT and SR systems, respectively, which means that the SR system was rated 18.65 points higher than the FT system. In terms of efficiency, the completion of a findings report using the SR mode is estimated to take 8.5 s, 3.74 times faster than using the FT system (31.8 s). The SR system also demonstrated fewer violations of usability heuristics (8 points) in comparison to the 14 points observed for the FT system. These results provide a first indication that the usage of structured reporting as an input mode in telecardiology systems may enhance usability. This also seems to confirm the advantages of structured reporting already described in the literature for other areas such as teleradiology. | Does the use of structured reporting improve usability? A comparative evaluation of the usability of two approaches for findings reporting in a large-scale telecardiology context |
S1532046414001506 | Insights about patterns of system use are often gained through the analysis of system log files, which record the actual behavior of users. In a clinical context, however, few attempts have been made to typify system use through log file analysis. The present study offers a framework for identifying, describing, and discerning among patterns of use of a clinical information retrieval system. We use the session attributes of volume, diversity, granularity, duration, and content to define a multidimensional space in which each specific session can be positioned. We also describe an analytical method for identifying the common archetypes of system use in this multidimensional space. We demonstrate the value of the proposed framework with a log file of the use of a health information exchange (HIE) system by physicians in an emergency department (ED) of a large Israeli hospital. The analysis reveals five distinct patterns of system use, which have yet to be described in the relevant literature. The results of this study have the potential to inform the design of HIE systems for efficient and effective use, thus increasing their contribution to the clinical decision-making process. | Use patterns of health information exchange through a multidimensional lens: Conceptual framework and empirical validation |
S1532046414001518 | Obstructive sleep apnea (OSA) is a serious sleep disorder which is characterized by frequent obstruction of the upper airway, often resulting in oxygen desaturation. The serious negative impact of OSA on human health makes monitoring and diagnosing it a necessity. Currently, polysomnography is considered the gold standard for diagnosing OSA; it requires an expensive attended overnight stay at a hospital with considerable wiring between the human body and the system. In this paper, we implement a reliable, comfortable, inexpensive, and easily available portable device that allows users to apply the OSA test at home without the need for attended overnight tests. The design takes advantage of a smartphone’s built-in sensors, pervasiveness, computational capabilities, and user-friendly interface to screen for OSA. We use three main sensors to extract physiological signals from patients: (1) an oximeter to measure the oxygen level, (2) a microphone to record the respiratory effort, and (3) an accelerometer to detect the body’s movement. Finally, we examine our system’s ability to screen for the disease, as compared to the gold standard, by testing it on 15 samples. The results showed that 100% of patients were correctly identified as having the disease, and 85.7% of patients were correctly identified as not having the disease. These preliminary results demonstrate the effectiveness of the developed system when compared to the gold standard and emphasize the important role of smartphones in healthcare. | Classifying obstructive sleep apnea using smartphones |
S153204641400152X | Objective Electronic medical record (EMR) data are increasingly incorporated into genome–phenome association studies. Investigators hope to share data, but there are concerns it may be “re-identified” through the exploitation of various features, such as combinations of standardized clinical codes. Formal anonymization algorithms (e.g., k-anonymization) can prevent such violations, but prior studies suggest that the size of the population available for anonymization may influence the utility of the resulting data. We systematically investigate this issue using a large-scale biorepository and EMR system through which we evaluate the ability of researchers to learn from anonymized data for genome–phenome association studies under various conditions. Methods We use a k-anonymization strategy to simulate a data protection process (on data sets containing clinical codes) for resources of similar size to those found at nine academic medical institutions within the United States. Following the protection process, we replicate an existing genome–phenome association study and compare the discoveries made using the protected data and the original data through the correlation (r²) of the p-values of association significance. Results Our investigation shows that anonymizing an entire dataset with respect to the population from which it is derived yields significantly more utility than anonymizing small study-specific datasets unto themselves. When evaluated using the correlation of genome–phenome association strengths on anonymized versus original data across all nine simulated sites, results from the largest-scale anonymizations (population ∼100,000) retained better utility than those from smaller sizes (population ∼6,000–75,000). We observed a general trend of increasing r² for larger data set sizes: r² = 0.9481 for small-sized datasets, r² = 0.9493 for moderately-sized datasets, and r² = 0.9934 for large-sized datasets. Conclusions This research implies that, regardless of the overall size of an institution’s data, there may be significant benefits to anonymizing the entire EMR, even if the institution is planning on releasing only data about a specific cohort of patients. | Size matters: How population size influences genotype–phenotype association studies in anonymized data |
S1532046414001531 | Purpose Data generated in the care of patients are widely used to support clinical research and quality improvement, which has hastened the development of self-service query tools. User interface design for such tools, execution of query activity, and underlying application architecture have not been widely reported, and existing tools reflect a wide heterogeneity of methods and technical frameworks. We describe the design, application architecture, and use of a self-service model for enterprise data delivery within Duke Medicine. Methods Our query platform, the Duke Enterprise Data Unified Content Explorer (DEDUCE), supports enhanced data exploration, cohort identification, and data extraction from our enterprise data warehouse (EDW) using a series of modular environments that interact with a central keystone module, Cohort Manager (CM). A data-driven application architecture is implemented through three components: an application data dictionary, the concept of “smart dimensions”, and dynamically-generated user interfaces. Results DEDUCE CM allows flexible hierarchies of EDW queries within a grid-like workspace. A cohort “join” functionality allows switching between filters based on criteria occurring within or across patient encounters. To date, 674 users have been trained and activated in DEDUCE, and logon activity shows a steady increase, with variability between months. A comparison of filter conditions and export criteria shows that these activities have different patterns of usage across subject areas. Conclusions Organizations with sophisticated EDWs may find that users benefit from the development of advanced query functionality, complementary to the user interfaces and infrastructure used in other well-published models. Driven by its EDW context, the DEDUCE application architecture was also designed to be responsive to source data and to allow modification through alterations in metadata rather than programming, allowing an agile response to source system changes. | Modular design, application architecture, and usage of a self-service model for enterprise data delivery: The Duke Enterprise Data Unified Content Explorer (DEDUCE) |
S1532046414001543 | Objective Electronic health records (EHR) offer medical and pharmacogenomics research unprecedented opportunities to identify and classify patients at risk. EHRs are collections of highly inter-dependent records that include biological, anatomical, physiological, and behavioral observations. They comprise a patient’s clinical phenome, where each patient has thousands of date-stamped records distributed across many relational tables. Development of EHR computer-based phenotyping algorithms requires time and medical insight from clinical experts, who most often can only review a small patient subset representative of the total EHR records, to identify phenotype features. In this research we evaluate whether relational machine learning (ML) using inductive logic programming (ILP) can contribute to addressing these issues as a viable approach for EHR-based phenotyping. Methods Two relational learning ILP approaches and three well-known WEKA (Waikato Environment for Knowledge Analysis) implementations of non-relational approaches (PART, J48, and JRIP) were used to develop models for nine phenotypes. International Classification of Diseases, Ninth Revision (ICD-9) coded EHR data were used to select training cohorts for the development of each phenotypic model. Accuracy, precision, recall, F-Measure, and Area Under the Receiver Operating Characteristic (AUROC) curve statistics were measured for each phenotypic model based on independent manually verified test cohorts. A two-sided binomial distribution test (sign test) compared the five ML approaches across phenotypes for statistical significance. Results We developed an approach to automatically label training examples using ICD-9 diagnosis codes for the ML approaches being evaluated. Nine phenotypic models for each ML approach were evaluated, resulting in better overall model performance in AUROC using ILP when compared to PART (p = 0.039), J48 (p = 0.003) and JRIP (p = 0.003). Discussion ILP has the potential to improve phenotyping by independently delivering clinically expert interpretable rules for phenotype definitions, or intuitive phenotypes to assist experts. Conclusion Relational learning using ILP offers a viable approach to EHR-driven phenotyping. | Relational machine learning for electronic health record-driven phenotyping
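The paired comparison reported in this abstract rests on a two-sided sign test across the nine phenotypes. The sketch below, with invented per-phenotype AUROC values, shows one standard way to compute such an exact sign test; it is an illustration, not the study's code.

```python
# Exact two-sided sign test on paired per-phenotype scores (ties dropped).
from scipy.stats import binom

def sign_test(a, b):
    """Two-sided sign test comparing paired scores a vs. b."""
    wins = sum(x > y for x, y in zip(a, b))
    losses = sum(x < y for x, y in zip(a, b))
    n = wins + losses
    k = min(wins, losses)
    # Two-sided p-value under Binomial(n, 0.5), capped at 1.
    return min(1.0, 2 * binom.cdf(k, n, 0.5))

# Invented AUROCs for nine phenotypes; ILP wins all nine here,
# which yields p = 2 * 0.5**9 ~= 0.0039.
ilp_auroc = [0.91, 0.88, 0.93, 0.85, 0.90, 0.87, 0.92, 0.89, 0.86]
j48_auroc = [0.84, 0.86, 0.88, 0.80, 0.85, 0.83, 0.87, 0.82, 0.81]
print(sign_test(ilp_auroc, j48_auroc))
```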
S1532046414001555 | This paper describes the creation of a comprehensive conceptualization of object models used in medical image simulation, suitable for major imaging modalities and simulators. The goal is to create an application ontology that can be used to annotate the models in a repository integrated in the Virtual Imaging Platform (VIP), to facilitate their sharing and reuse. Annotations make the anatomical, physiological and pathophysiological content of the object models explicit. In such an interdisciplinary context we chose to rely on a common integration framework provided by a foundational ontology, that facilitates the consistent integration of the various modules extracted from several existing ontologies, i.e. FMA, PATO, MPATH, RadLex and ChEBI. Emphasis is put on methodology for achieving this extraction and integration. The most salient aspects of the ontology are presented, especially the organization in model layers, as well as its use to browse and query the model repository. | OntoVIP: An ontology for the annotation of object models used for medical image simulation |
S1532046414001567 | This paper introduces a function dedicated to the estimation of total energy expenditure (TEE) of daily activities based on data from accelerometers integrated into smartphones. The use of mass-market sensors such as accelerometers offers a promising solution for the general public due to the growing smartphone market over the last decade. The TEE estimation function quality was evaluated using data from intensive numerical experiments based, first, on 12 volunteers equipped with a smartphone and two research sensors (Armband and Actiheart) in controlled conditions (CC) and, then, on 30 other volunteers in free-living conditions (FLC). The TEE given by these two sensors in both conditions and estimated from the metabolic equivalent tasks (MET) in CC served as references during the creation and evaluation of the function. The TEE mean gap in absolute value between the function and the three references was 7.0%, 16.4% and 2.7% in CC, and 17.0% and 23.7% according to Armband and Actiheart, respectively, in FLC. This is the first step in the definition of a new feedback mechanism that promotes self-management and daily-efficiency evaluation of physical activity as part of an information system dedicated to the prevention of chronic diseases. | A smartphone-driven methodology for estimating physical activities and energy expenditure in free living conditions |
S1532046414001580 | Pharmacovigilance involves continually monitoring drug safety after drugs are put to market. To aid this process, algorithms for the identification of strongly correlated drug/adverse drug reaction (ADR) pairs from data sources such as adverse event reporting systems or Electronic Health Records have been developed. These methods are generally statistical in nature, and do not draw upon the large volumes of knowledge embedded in the biomedical literature. In this paper, we investigate the ability of scalable Literature Based Discovery (LBD) methods to identify side effects of pharmaceutical agents. The advantage of LBD methods is that they can provide evidence from the literature to support the plausibility of a drug/ADR association, thereby assisting human review to validate the signal, which is an essential component of pharmacovigilance. To do so, we draw upon vast repositories of knowledge that has been extracted from the biomedical literature by two Natural Language Processing tools, MetaMap and SemRep. We evaluate two LBD methods that scale comfortably to the volume of knowledge available in these repositories. Specifically, we evaluate Reflective Random Indexing (RRI), a model based on concept-level co-occurrence, and Predication-based Semantic Indexing (PSI), a model that encodes the nature of the relationship between concepts to support reasoning analogically about drug-effect relationships. An evaluation set was constructed from the Side Effect Resource 2 (SIDER2), which contains known drug/ADR relations, and models were evaluated for their ability to “rediscover” these relations. In this paper, we demonstrate that both RRI and PSI can recover known drug-adverse event associations. However, PSI performed better overall, and has the additional advantage of being able to recover the literature underlying the reasoning pathways it used to make its predictions. | Identifying plausible adverse drug reactions using knowledge extracted from the literature
S1532046414001592 | Purpose To elevate the level of care to the community it is essential to provide usable tools for healthcare professionals to extract knowledge from clinical data. In this paper a generic translation algorithm is proposed to translate a restricted natural language query (RNLQ) to a standard query language like SQL (Structured Query Language). Methods A special purpose clinical data analytics language (CliniDAL) has been introduced which provides a scheme of six classes of clinical questioning templates. A translation algorithm is proposed to translate the RNLQ of users to SQL queries based on a similarity-based Top-k algorithm which is used in the mapping process of CliniDAL. Also a two layer rule-based method is used to interpret the temporal expressions of the query, based on the proposed temporal model. The mapping and translation algorithms are generic and thus able to work with clinical databases in three data design models, including Entity-Relationship (ER), Entity–Attribute–Value (EAV) and XML; however, it is only implemented for ER and EAV design models in the current work. Results It is easy to compose an RNLQ via CliniDAL’s interface in which query terms are automatically mapped to the underlying data models of a Clinical Information System (CIS) with an accuracy of more than 84% and the temporal expressions of the query comprising absolute times, relative times or relative events can be automatically mapped to time entities of the underlying CIS and to normalized temporal comparative values. Conclusion The proposed solution of CliniDAL, using the generic mapping and translation algorithms enhanced by a temporal analyzer component, provides a simple mechanism for composing RNLQs for extracting knowledge from CISs with different data design models for analytics purposes. | Restricted natural language based querying of clinical databases
S1532046414001609 | Automated medical diagnosis models are now ubiquitous, and research for developing new ones is constantly growing. They play an important role in medical decision-making, helping physicians to provide a fast and accurate diagnosis. Due to their adaptive learning and nonlinear mapping properties, the artificial neural networks are widely used to support the human decision capabilities, avoiding variability in practice and errors based on lack of experience. Among the most common learning approaches, one can mention either the classical back-propagation algorithm based on the partial derivatives of the error function with respect to the weights, or the Bayesian learning method based on posterior probability distribution of weights, given training data. This paper proposes a novel training technique gathering together the error-correction learning, the posterior probability distribution of weights given the error function, and the Goodman–Kruskal Gamma rank correlation to assemble them in a Bayesian learning strategy. This study had two main purposes: firstly, to develop a novel learning technique based on both the Bayesian paradigm and the error back-propagation, and secondly, to assess its effectiveness. The proposed model performance is compared with those obtained by traditional machine learning algorithms using real-life breast and lung cancer, diabetes, and heart attack medical databases. Overall, the statistical comparison results indicate that the novel learning approach outperforms the conventional techniques in almost all respects. | Error-correction learning for artificial neural networks using the Bayesian paradigm. Application to automated medical diagnosis
S1532046414001610 | Objective: Automatic summarization of biomedical literature usually relies on domain knowledge from external sources to build rich semantic representations of the documents to be summarized. In this paper, we investigate the impact of the knowledge source used on the quality of the summaries that are generated. Materials and methods: We present a method for representing a set of documents relevant to a given biological entity or topic as a semantic graph of domain concepts and relations. Different graphs are created by using different combinations of ontologies and vocabularies within the UMLS (including GO, SNOMED-CT, HUGO and all available vocabularies in the UMLS) to retrieve domain concepts, and different types of relationships (co-occurrence and semantic relations from the UMLS Metathesaurus and Semantic Network) are used to link the concepts in the graph. The different graphs are next used as input to a summarization system that produces summaries composed of the most relevant sentences from the original documents. Results and conclusions: Our experiments demonstrate that the choice of the knowledge source used to model the text has a significant impact on the quality of the automatic summaries. In particular, we find that, when summarizing gene-related literature, using GO, SNOMED-CT and HUGO to extract domain concepts results in significantly better summaries than using all available vocabularies in the UMLS. This finding suggests that successful biomedical summarization requires the selection of the appropriate knowledge source, whose coverage, specificity and relations must be in accordance to the type of the documents to summarize. | Comparing different knowledge sources for the automatic summarization of biomedical literature |
S1532046414001622 | Objectives To develop a method for investigating co-authorship patterns and author team characteristics associated with publications in high-impact journals through the integration of public MEDLINE data and institutional scientific profile data. Methods For all current researchers at Columbia University Medical Center, we extracted their publications from MEDLINE authored between years 2007 and 2011 and associated journal impact factors, along with author academic ranks and departmental affiliations obtained from Columbia University Scientific Profiles (CUSP). Chi-square tests were performed on co-authorship patterns, with Bonferroni correction for multiple comparisons, to identify team composition characteristics associated with publication impact factors. We also developed co-authorship networks for the 25 most prolific departments between years 2002 and 2011 and counted the internal and external authors, inter-connectivity, and centrality of each department. Results Papers with at least one author from a basic science department are significantly more likely to appear in high-impact journals than papers authored by those from clinical departments alone. Inclusion of at least one professor on the author list is strongly associated with publication in high-impact journals, as is inclusion of at least one research scientist. Departmental and disciplinary differences in the ratios of within- to outside-department collaboration and overall network cohesion are also observed. Conclusions Enrichment of co-authorship patterns with author scientific profiles helps uncover associations between author team characteristics and appearance in high-impact journals. These results may offer implications for mentoring junior biomedical researchers to publish in high-impact journals, as well as for evaluating academic progress across disciplines in modern academic medical centers. | Associating co-authorship patterns with publications in high-impact journals
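For readers unfamiliar with the network measures this abstract mentions (inter-connectivity, centrality, cohesion), the following sketch computes simple co-authorship statistics with networkx on an invented author list; the specific measures shown are generic examples, not necessarily those used in the study.

```python
# Toy co-authorship network: nodes are authors, edges are co-authored papers.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("author_A", "author_B"), ("author_A", "author_C"),
    ("author_B", "author_C"), ("author_C", "author_D"),
])

print(nx.degree_centrality(G))  # per-author centrality
print(nx.density(G))            # one simple cohesion measure for the network
```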
S1532046414001798 | Background Information in Electronic Health Records (EHRs) is being promoted for use in clinical decision support, patient registers, measurement and improvement of integration and quality of care, and translational research. To do this, EHR-derived data product creators need to logically integrate patient data with information and knowledge from diverse sources and contexts. Objective To examine the accuracy of an ontological multi-attribute approach to create a Type 2 Diabetes Mellitus (T2DM) register to support integrated care. Methods Guided by Australian best practice guidelines, the T2DM diagnosis and management ontology was conceptualized, contextualized and validated by clinicians; it was then specified, formalized and implemented. The algorithm was standardized against the domain ontology in SNOMED CT-AU. Accuracy of the implementation was measured in 4 datasets of varying sizes (927–12,057 patients) and an integrated dataset (23,793 patients). Results were cross-checked with sensitivity and specificity calculated with 95% confidence intervals. Results Incrementally integrating Reason for Visit (RFV), medication (Rx), and pathology in the algorithm identified nearly 100% of T2DM cases. Incrementally integrating the four datasets improved accuracy, controlling for sample size, data incompleteness and duplicates. Manual validation confirmed the accuracy of the algorithm. Conclusion Integrating multiple data elements within an EHR using ontology-based case-finding algorithms can improve the accuracy of the diagnosis and compensate for suboptimal data quality, and hence creating a dataset that is more fit-for-purpose. This clinical and pragmatic application of ontologies to EHR data improves the integration of data and the potential for better use of data to improve the quality of care. | Integrating electronic health record information to support integrated care: Practical application of ontologies to improve the accuracy of diabetes disease registers
S1532046414001804 | Objective Clinical pathways (CPs) are widely studied methods to standardize clinical intervention and improve medical quality. However, standard care plans defined in current CPs are too general to execute in a practical healthcare environment. The purpose of this study was to create hospital-specific personalized CPs by explicitly expressing and replenishing the general knowledge of CPs by applying semantic analysis and reasoning to historical clinical data. Methods A semantic data model was constructed to semantically store clinical data. After querying semantic clinical data, treatment procedures were extracted. Four properties were self-defined for local ontology construction and semantic transformation, and three Jena rules were proposed to achieve error correction and pathway order recognition. Semantic reasoning was utilized to establish the relationship between data orders and pathway orders. Results A clinical pathway for deviated nasal septum was used as an example to illustrate how to combine standard care plans and practical treatment procedures. A group of 224 patients with 11,473 orders was transformed to a semantic data model, which was stored in RDF format. Long term order processing and error correction made the treatment procedures more consistent with clinical practice. The percentage of each pathway order with different probabilities was calculated to declare the commonality between the standard care plans and practical treatment procedures. Detailed treatment procedures with pathway orders, deduced pathway orders, and orders with probability greater than 80% were provided to efficiently customize the CPs. Conclusions This study contributes to the practical application of pathway specifications recommended by the Ministry of Health of China and provides a generic framework for the hospital-specific customization of standard care plans defined by CPs or clinical guidelines. | Creating hospital-specific customized clinical pathways by applying semantic reasoning to clinical data |
S1532046414001816 | Complex clinical decisions require the decision maker to evaluate multiple factors that may interact with each other. Many clinical studies, however, report ‘univariate’ relations between a single factor and outcome. Such univariate statistics are often insufficient to provide useful support for complex clinical decisions even when they are pooled using meta-analysis. More useful decision support could be provided by evidence-based models that take the interaction between factors into account. In this paper, we propose a method of integrating the univariate results of a meta-analysis with a clinical dataset and expert knowledge to construct multivariate Bayesian network (BN) models. The technique reduces the size of the dataset needed to learn the parameters of a model of a given complexity. Supplementing the data with the meta-analysis results avoids the need to either simplify the model – ignoring some complexities of the problem – or to gather more data. The method is illustrated by a clinical case study into the prediction of the viability of severely injured lower extremities. The case study illustrates the advantages of integrating combined evidence into BN development: the BN developed using our method outperformed four different data-driven structure learning methods, and a well-known scoring model (MESS) in this domain. | Combining data and meta-analysis to build Bayesian networks for clinical decision support |
S1532046414001828 | In this paper we describe an efficient tool based on natural language processing for classifying the detail state of pulmonary embolism (PE) recorded in CT pulmonary angiography reports. The classification tasks include: PE present vs. absent, acute PE vs. others, central PE vs. others, and subsegmental PE vs. others. Statistical learning algorithms were trained with features extracted using the NLP tool and gold standard labels obtained via chart review from two radiologists. The areas under the receiver operating characteristic curves (AUC) for the four tasks were 0.998, 0.945, 0.987, and 0.986, respectively. We compared our classifiers with bag-of-words Naive Bayes classifiers, a standard text mining technology, which gave AUC 0.942, 0.765, 0.766, and 0.712, respectively. | Classification of CT pulmonary angiography reports by presence, chronicity, and location of pulmonary embolism with natural language processing |
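The bag-of-words Naive Bayes baseline this abstract compares against can be sketched as follows; the report snippets and labels are invented placeholders, and this shows only the comparison baseline, not the authors' NLP pipeline.

```python
# Bag-of-words Naive Bayes baseline for the "PE present vs. absent" task,
# on a handful of invented report snippets.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.metrics import roc_auc_score

reports = [
    "acute pulmonary embolism in the right lower lobe",
    "no evidence of pulmonary embolism",
    "chronic central pulmonary embolism",
    "unremarkable ct pulmonary angiogram",
]
pe_present = [1, 0, 1, 0]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(reports, pe_present)
scores = model.predict_proba(reports)[:, 1]
print(roc_auc_score(pe_present, scores))  # AUC on training data, illustration only
```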
S153204641400183X | This paper proposes all-IP WSNs (wireless sensor networks) for real-time patient monitoring. In this paper, the all-IP WSN architecture based on gateway trees is proposed and the hierarchical address structure is presented. Based on this architecture, the all-IP WSN can perform routing without route discovery. Moreover, a mobile node is always identified by a home address and it does not need to be configured with a care-of address during the mobility process, so the communication disruption caused by the address change is avoided. Through the proposed scheme, a physician can monitor the vital signs of a patient at any time and at any place and, according to the IPv6 address, can also obtain the location information of the patient in order to perform effective and timely treatment. Finally, the proposed scheme is evaluated through simulation, and the simulation data indicate that the proposed scheme can effectively reduce the communication delay and control cost, and lower the packet loss rate. | All-IP wireless sensor networks for real-time patient monitoring
S1532046414001841 | We developed Resource Description Framework (RDF)-induced InfluGrams (RIIG) – an informatics formalism to uncover complex relationships among biomarker proteins and biological pathways using the biomedical knowledge bases. We demonstrate an application of RIIG in morphoproteomics, a theranostic technique aimed at comprehensive analysis of protein circuitries to design effective therapeutic strategies in a personalized medicine setting. RIIG uses an RDF “mashup” knowledge base that integrates publicly available pathway and protein data with ontologies. To mine for RDF-induced Influence Links, RIIG introduces notions of RDF relevancy and RDF collider, which mimic conditional independence and the “explaining away” mechanism in probabilistic systems. Using these notions and constraint-based structure learning algorithms, the formalism generates the morphoproteomic diagrams, which we call InfluGrams, for further analysis by experts. RIIG was able to recover up to 90% of predefined influence links in a simulated environment using synthetic data and outperformed a naïve Monte Carlo sampling of random links. In clinical cases of Acute Lymphoblastic Leukemia (ALL) and Mesenchymal Chondrosarcoma, a significant level of concordance between the RIIG-generated and expert-built morphoproteomic diagrams was observed. In a clinical case of Squamous Cell Carcinoma, RIIG allowed selection of alternative therapeutic targets, the validity of which was supported by a systematic literature review. We have also illustrated an ability of RIIG to discover novel influence links in the general case of the ALL. Applications of the RIIG formalism demonstrated its potential to uncover patient-specific complex relationships among biological entities to find effective drug targets in a personalized medicine setting. We conclude that RIIG provides an effective means not only to streamline morphoproteomic studies, but also to bridge curated biomedical knowledge and causal reasoning with the clinical data in general. | Uncovering influence links in molecular knowledge networks to streamline personalized medicine
S1532046414001853 | Drug–disease treatment relationships, i.e., which drug(s) are indicated to treat which disease(s), are among the most frequently sought information in PubMed®. Such information is useful for feeding the Google Knowledge Graph, designing computational methods to predict novel drug indications, and validating clinical information in EMRs. Given the importance and utility of this information, there have been several efforts to create repositories of drugs and their indications. However, existing resources are incomplete. Furthermore, they neither label indications in a structured way nor differentiate them by drug-specific properties such as dosage form, and thus do not support computer processing or semantic interoperability. More recently, several studies have proposed automatic methods to extract structured indications from drug descriptions; however, their performance is limited by natural language challenges in disease named entity recognition and indication selection. In response, we report LabeledIn: a human-reviewed, machine-readable and source-linked catalog of labeled indications for human drugs. More specifically, we describe our semi-automatic approach to derive LabeledIn from drug descriptions through human annotations with aids from automatic methods. As the data source, we use the drug labels (or package inserts) submitted to the FDA by drug manufacturers and made available in DailyMed. Our machine-assisted human annotation workflow comprises: (i) a grouping method to remove redundancy and identify representative drug labels to be used for human annotation, (ii) an automatic method to recognize and normalize mentions of diseases in drug labels as candidate indications, and (iii) a two-round annotation workflow for human experts to judge the pre-computed candidates and deliver the final gold standard. In this study, we focused on 250 highly accessed drugs in PubMed Health, a newly developed public web resource for consumers and clinicians on prevention and treatment of diseases. These 250 drugs corresponded to more than 8000 drug labels (500 unique) in DailyMed in which 2950 candidate indications were pre-tagged by an automatic tool. After being reviewed independently by two experts, 1618 indications were selected, and an additional 97 (missed by computer) were manually added, with an inter-annotator agreement of 88.35% as measured by the Kappa coefficient. Our final annotation results in LabeledIn consist of 7805 drug–disease treatment relationships where drugs are represented as a triplet of ingredient, dose form, and strength. A systematic comparison of LabeledIn with an existing computer-derived resource revealed significant discrepancies, confirming the need to involve humans in the creation of such a resource. In addition, LabeledIn is unique in that it contains detailed textual context of the selected indications in drug labels, making it suitable for the development of advanced computational methods for the automatic extraction of indications from free text. Finally, motivated by the studies on drug nomenclature and medication errors in EMRs, we adopted a fine-grained drug representation scheme, which enables the automatic identification of drugs with indications specific to certain dose forms or strengths. Future work includes expanding our coverage to more drugs and integration with other resources. The LabeledIn dataset and the annotation guidelines are available at http://ftp.ncbi.nlm.nih.gov/pub/lu/LabeledIn/. | LabeledIn: Cataloging labeled indications for human drugs
S1532046414001865 | Discovering the regulatory relationships between microRNAs (miRNAs) and mRNAs is an important problem that interests many biologists and medical researchers. A number of computational methods have been proposed to infer miRNA–mRNA regulatory relationships, and are mostly based on the statistical associations between miRNAs and mRNAs discovered in observational data. The miRNA–mRNA regulatory relationships identified by these methods can be both direct and indirect regulations. However, differentiating direct regulatory relationships from indirect ones is important for biologists in experimental designs. In this paper, we present a causal discovery based framework (called DirectTarget) to infer direct miRNA–mRNA causal regulatory relationships in heterogeneous data, including expression profiles of miRNAs and mRNAs, and miRNA target information. DirectTarget is applied to the Epithelial to Mesenchymal Transition (EMT) datasets. The validation by experimentally confirmed target databases suggests that the proposed method can effectively identify direct miRNA–mRNA regulatory relationships. To explore the upstream regulators of miRNA regulation, we further identify the causal feedforward patterns (CFFPs) of TF–miRNA–mRNA to provide insights into the miRNA regulation in EMT. DirectTarget has the potential to be applied to other datasets to elucidate the direct miRNA–mRNA causal regulatory relationships and to explore the regulatory patterns. | Identifying direct miRNA–mRNA causal regulatory relationships in heterogeneous data
S1532046414001877 | Background Hospital readmission risk prediction remains a motivated area of investigation and operations in light of the hospital readmissions reduction program through CMS. Multiple models of risk have been reported with variable discriminatory performances, and it remains unclear how design factors affect performance. Objectives To study the effects of varying three factors of model development in the prediction of risk based on health record data: (1) reason for readmission (primary readmission diagnosis); (2) available data and data types (e.g. visit history, laboratory results, etc); (3) cohort selection. Methods Regularized regression (LASSO) to generate predictions of readmissions risk using prevalence sampling. Support Vector Machine (SVM) used for comparison in cohort selection testing. Calibration by model refitting to outcome prevalence. Results Predicting readmission risk across multiple reasons for readmission resulted in ROC areas ranging from 0.92 for readmission for congestive heart failure to 0.71 for syncope and 0.68 for all-cause readmission. Visit history and laboratory tests contributed the most predictive value; contributions varied by readmission diagnosis. Cohort definition affected performance for both parametric and nonparametric algorithms. Compared to all patients, limiting the cohort to patients whose index admission and readmission diagnoses matched resulted in a decrease in average ROC from 0.78 to 0.55 (difference in ROC 0.23, p value 0.01). Calibration plots demonstrate good calibration with low mean squared error. Conclusion Targeting reason for readmission in risk prediction impacted discriminatory performance. In general, laboratory data and visit history data contributed the most to prediction; data source contributions varied by reason for readmission. Cohort selection had a large impact on model performance, and these results demonstrate the difficulty of comparing results across different studies of predictive risk modeling. | The effects of data sources, cohort selection, and outcome definition on a predictive model of risk of thirty-day hospital readmissions |
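A minimal sketch of an L1-regularized (LASSO-style) readmission risk model on synthetic data, in the spirit of the regularized regression described above; the features, penalty strength, and evaluation on the training data are all illustrative shortcuts, not the study's configuration.

```python
# L1-penalized logistic regression on synthetic "EHR-like" features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))          # stand-ins for visit history, labs, etc.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=500) > 0).astype(int)

lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
lasso.fit(X, y)
print(roc_auc_score(y, lasso.decision_function(X)))
print("nonzero coefficients:", int(np.sum(lasso.coef_ != 0)))  # sparsity from L1
```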
S1532046414001889 | Background As patient’s length of stay in waiting lists increases, governments are looking for strategies to control the problem. Agreements were created with private providers to diminish the workload in the public sector. However, the growth of the private sector is not following the demand for care. Given this context, new management strategies have to be considered in order to minimize patient length of stay in waiting lists while reducing the costs and increasing (or at least maintaining) the quality of care. Method Appointment scheduling systems are today known to be proficient in the optimization of health care services. Their utilization is focused on increasing the usage of human resources, medical equipment and reducing the patient waiting times. In this paper, a simulation-based optimization approach to the Patient Admission Scheduling Problem is presented. Modeling tools and simulation techniques are used in the optimization of a diagnostic imaging department. Results The proposed techniques have demonstrated to be effective in the evaluation of diagnostic imaging workflows. A simulated annealing algorithm was used to optimize the patient admission sequence towards minimizing the total completion and total waiting of patients. The obtained results showed average reductions of 5% on the total completion and 38% on the patients’ total waiting time. | An optimization based on simulation approach to the patient admission scheduling problem using a linear programing algorithm |
S1532046414001890 | Published manuscripts with MESH “Informatics” did not follow increases in the percentage of office-based physicians using electronic medical records between 2001 and 2012. | Trends in informatics publications and health policy
S1532046414001919 | Bayesian Networks have been used for the inference of transcriptional regulatory relationships among genes, and are valuable for obtaining biological insights. However, finding the optimal Bayesian Network (BN) is NP-hard. Thus, heuristic approaches have sought to effectively solve this problem. In this work, we develop a hybrid search method combining Simulated Annealing with a Greedy Algorithm (SAGA). SAGA explores most of the search space by undergoing a two-phase search: first with a Simulated Annealing search and then with a Greedy search. Three sets of background-corrected and normalized microarray datasets were used to test the algorithm. BN structure learning was also conducted using the datasets, and other established search methods as implemented in BANJO (Bayesian Network Inference with Java Objects). The Bayesian Dirichlet Equivalence (BDe) metric was used to score the networks produced with SAGA. SAGA predicted transcriptional regulatory relationships among genes in networks that evaluated to higher BDe scores with high sensitivities and specificities. Thus, the proposed method competes well with existing search algorithms for Bayesian Network structure learning of transcriptional regulatory networks. | SAGA: A hybrid search algorithm for Bayesian Network structure learning of transcriptional regulatory networks
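The two-phase idea behind SAGA (a broad simulated-annealing pass followed by greedy refinement) can be illustrated with the toy skeleton below; the scoring function is a stand-in for the BDe metric, and restricting edges to an ordered node list sidesteps acyclicity checking, both simplifications of the actual algorithm.

```python
# Two-phase structure search sketch: simulated annealing, then greedy hill climbing.
import math, random

NODES = range(5)
CANDIDATE_EDGES = [(i, j) for i in NODES for j in NODES if i < j]  # acyclic by construction
TRUE_EDGES = {(0, 1), (1, 3), (2, 4)}

def score(edges):
    return -len(edges ^ TRUE_EDGES)       # toy score; SAGA would use BDe here

def neighbor(edges):
    e = random.choice(CANDIDATE_EDGES)    # toggle one edge
    return edges ^ {e}

def saga(iters_sa=2000, temp=2.0, cooling=0.995):
    current = set()
    # Phase 1: simulated annealing explores broadly, accepting some worse moves.
    for _ in range(iters_sa):
        cand = neighbor(current)
        delta = score(cand) - score(current)
        if delta >= 0 or random.random() < math.exp(delta / temp):
            current = cand
        temp *= cooling
    # Phase 2: greedy refinement accepts only improving single-edge moves.
    improved = True
    while improved:
        improved = False
        for e in CANDIDATE_EDGES:
            cand = current ^ {e}
            if score(cand) > score(current):
                current, improved = cand, True
    return current

print(sorted(saga()))
```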
S1532046414001920 | Outbreaks of infectious disease can pose a significant threat to human health. Thus, detecting and characterizing outbreaks quickly and accurately remains an important problem. This paper describes a Bayesian framework that links clinical diagnosis of individuals in a population to epidemiological modeling of disease outbreaks in the population. Computer-based diagnosis of individuals who seek healthcare is used to guide the search for epidemiological models of population disease that explain the pattern of diagnoses well. We applied this framework to develop a system that detects influenza outbreaks from emergency department (ED) reports. The system diagnoses influenza in individuals probabilistically from evidence in ED reports that are extracted using natural language processing. These diagnoses guide the search for epidemiological models of influenza that explain the pattern of diagnoses well. Those epidemiological models with a high posterior probability determine the most likely outbreaks of specific diseases; the models are also used to characterize properties of an outbreak, such as its expected peak day and estimated size. We evaluated the method using both simulated data and data from a real influenza outbreak. The results provide support that the approach can detect and characterize outbreaks early and well enough to be valuable. We describe several extensions to the approach that appear promising. | A method for detecting and characterizing outbreaks of infectious disease from clinical reports |
S1532046414001932 | The success of many population studies is determined by proper matching of cases to controls. Some of the confounding and bias that afflict electronic health record (EHR)-based observational studies may be reduced by creating effective methods for finding adequate controls. We implemented a method to match case and control populations to compensate for sparse and unequal data collection practices common in EHR data. We did this by matching the healthcare utilization of patients after observing that more complete data was collected on high healthcare utilization patients vs. low healthcare utilization patients. In our results, we show that many of the anomalous differences in population comparisons are mitigated using this matching method compared to other traditional age and gender-based matching. As an example, the comparison of the disease associations of ulcerative colitis and Crohn’s disease show differences that are not present when the controls are chosen in a random or even a matched age/gender/race algorithm. In conclusion, the use of healthcare utilization-based matching algorithms to find adequate controls greatly enhanced the accuracy of results in EHR studies. Full source code and documentation of the control matching methods is available at https://community.i2b2.org/wiki/display/conmat/. | Evaluation of matched control algorithms in EHR-based phenotyping studies: A case study of inflammatory bowel disease comorbidities |
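A toy sketch in the spirit of the utilization-based matching described above: within demographic strata, each case is greedily paired with the unused control whose visit count is closest. The records, strata, and greedy strategy are illustrative assumptions rather than the published algorithm.

```python
# Greedy case-control matching on healthcare utilization (visit counts).
from collections import defaultdict

cases = [{"id": "c1", "gender": "F", "age": 42, "visits": 30},
         {"id": "c2", "gender": "M", "age": 57, "visits": 4}]
controls = [{"id": "k1", "gender": "F", "age": 44, "visits": 28},
            {"id": "k2", "gender": "F", "age": 40, "visits": 3},
            {"id": "k3", "gender": "M", "age": 55, "visits": 5}]

def stratum(p, band=5):
    # Match within (gender, 5-year age band) before comparing utilization.
    return (p["gender"], p["age"] // band)

pool = defaultdict(list)
for c in controls:
    pool[stratum(c)].append(c)

matches = {}
for case in cases:
    candidates = pool[stratum(case)]
    if not candidates:
        continue
    best = min(candidates, key=lambda c: abs(c["visits"] - case["visits"]))
    candidates.remove(best)          # each control is used at most once
    matches[case["id"]] = best["id"]

print(matches)  # {'c1': 'k1', 'c2': 'k3'}
```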
S1532046414001944 | Background and objective The importance of data standards when integrating clinical research data has been recognized. The common data element (CDE) is a consensus-based data element for data harmonization and sharing between clinical researchers; it can support data standards adoption and mapping. However, the lack of a suitable methodology has become a barrier to data standard adoption. Our aim was to demonstrate an approach that allowed clinical researchers to design electronic case report forms (eCRFs) that complied with the data standard. Methods We used a multi-technique approach, including information retrieval, natural language processing and an ontology-based knowledgebase to facilitate data standard adoption using the eCRF design. The approach took research questions as query texts with the aim of retrieving and associating relevant CDEs with the research questions. Results The approach was implemented using a CDE-based eCRF builder, which was evaluated using CDE-related questions from CRFs used in the Parkinson Disease Biomarker Program, as well as CDE-unrelated questions from a technique support website. Our approach had a precision of 0.84, a recall of 0.80, an F-measure of 0.82 and an error of 0.31. Using the 303 testing CDE-related questions, our approach responded and provided suggested CDEs for 88.8% (269/303) of the study questions with a 90.3% accuracy (243/269). The reason for any missed and failed responses was also analyzed. Conclusion This study demonstrates an approach that helps to cross the barrier that inhibits data standard adoption in eCRF building, and our evaluation reveals that the approach has satisfactory performance. Our CDE-based form builder provides an alternative perspective regarding data standard compliant eCRF design. | A multi-technique approach to bridge electronic case report form design and data standard adoption
S1532046414001956 | Background The web-based systems available for multi-centre clinical trials do not combine clinical data collection (Electronic Health Records, EHRs) with signal processing storage and analysis tools. However, in pathophysiological research, the correlation between clinical data and signals is crucial for uncovering the underlying neurophysiological mechanisms. A specific example is the investigation of the mechanisms of action for Deep Brain Stimulation (DBS) used for Parkinson’s Disease (PD); the neurosignals recorded from the DBS target structure and clinical data must be investigated. Objective The aim of this study is the development and testing of a new system dedicated to a multi-centre study of Parkinson’s Disease that integrates biosignal analysis tools and data collection in a shared and secure environment. Methods We designed a web-based platform (WebBioBank) for managing the clinical data and biosignals of PD patients treated with DBS in different clinical research centres. Homogeneous data collection was ensured in the different centres (Operative Units, OUs). The anonymity of the data was preserved using unique identifiers associated with patients (ID BAC). The patients’ personal details and their equivalent ID BACs were archived inside the corresponding OU and were not uploaded on the web-based platform; data sharing occurred using the ID BACs. The system allowed researchers to upload different signal processing functions (in a .dll extension) onto the web-based platform and to combine them to define dedicated algorithms. Results Four clinical research centres used WebBioBank for 1 year. The clinical data from 58 patients treated using DBS were managed, and 186 biosignals were uploaded and classified into 4 categories based on the treatment (pharmacological and/or electrical). The user’s satisfaction mean score exceeded the satisfaction threshold. Conclusions WebBioBank enabled anonymous data sharing for a clinical study conducted at multiple centres and demonstrated the capabilities of the signal processing chain configuration as well as its effectiveness and efficiency for integrating the neurophysiological results with clinical data in multi-centre studies, which will allow the future collection of homogeneous data in large cohorts of patients. | WebBioBank: A new platform for integrating clinical forms and shared neurosignal analyses to support multi-centre studies in Parkinson’s Disease
S1532046414001968 | Objective To standardize and objectivize treatment response assessment in oncology, guidelines have been proposed that are driven by radiological measurements, which are typically communicated in free-text reports defying automated processing. We study through inter-annotator agreement and natural language processing (NLP) algorithm development the task of pairing measurements that quantify the same finding across consecutive radiology reports, such that each measurement is paired with at most one other (“partial uniqueness”). Methods and materials Ground truth is created based on 283 abdomen and 311 chest CT reports of 50 patients each. A pre-processing engine segments reports and extracts measurements. Thirteen features are developed based on volumetric similarity between measurements, semantic similarity between their respective narrative contexts and structural properties of their report positions. A Random Forest classifier (RF) integrates all features. A “mutual best match” (MBM) post-processor ensures partial uniqueness. Results In an end-to-end evaluation, RF has precision 0.841, recall 0.807, F-measure 0.824 and AUC 0.971; with MBM, which performs above chance level (P <0.001), it has precision 0.899, recall 0.776, F-measure 0.833 and AUC 0.935. RF (RF+MBM) has error-free performance on 52.7% (57.4%) of report pairs. Discussion Inter-annotator agreement of three domain specialists with the ground truth (κ >0.960) indicates that the task is well defined. Domain properties and inter-section differences are discussed to explain superior performance in abdomen. Enforcing partial uniqueness has mixed but minor effects on performance. Conclusion A combined machine learning–filtering approach is proposed for pairing measurements, which can support prospective (supporting treatment response assessment) and retrospective purposes (data mining). | A natural language processing pipeline for pairing measurements uniquely across free-text CT reports |
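The "mutual best match" post-processing step described above can be illustrated with the small sketch below; the candidate scores are invented classifier outputs, and only a single matching round is shown, a simplification of the full pipeline.

```python
# Mutual best match: keep a (prior, current) measurement pairing only if each
# side is the other's highest-scoring candidate, enforcing partial uniqueness.
scores = {
    ("prior_m1", "current_m1"): 0.92,
    ("prior_m1", "current_m2"): 0.40,
    ("prior_m2", "current_m2"): 0.85,
    ("prior_m2", "current_m1"): 0.88,
}

def mutual_best_matches(scores):
    best_for_prior, best_for_current = {}, {}
    for (p, c), s in scores.items():
        if s > best_for_prior.get(p, (None, -1))[1]:
            best_for_prior[p] = (c, s)
        if s > best_for_current.get(c, (None, -1))[1]:
            best_for_current[c] = (p, s)
    return [(p, c) for p, (c, _) in best_for_prior.items()
            if best_for_current.get(c, (None,))[0] == p]

print(mutual_best_matches(scores))  # [('prior_m1', 'current_m1')]
```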
S153204641400197X | Background Gene name recognition and normalization is, together with detection of other named entities, a crucial step in biomedical text mining and the underlying basis for development of more advanced techniques like extraction of complex events. While the current state of the art solutions achieve highly promising results on average, performance can drop significantly for specific genes with highly ambiguous synonyms. Depending on the topic of interest, this can cause the need for extensive manual curation of such text mining results. Our goal was to enhance this curation step based on tools widely used in pharmaceutical industry utilizing the text processing and classification capabilities of the Konstanz Information Miner (KNIME) along with publicly available sources. Results F-score achieved on gene specific test corpora for highly ambiguous genes could be improved from values close to zero, due to very low precision, to values >0.9 for several cases. Interestingly the presented approach even resulted in an increased F-score for genes showing already good results in initial gene name normalization. For most test cases, we could significantly improve precision, while retaining a high recall. Conclusions We could show that KNIME can be used to assist in manual curation of text mining results containing high numbers of false positive hits. Our results also indicate that it could be beneficial for future development in the field of gene name normalization to create gene specific training corpora based on incorrectly identified genes common to current state of the art algorithms. | Automated curation of gene name normalization results using the Konstanz information miner |
S1532046414001981 | Purpose In high precision photon radiotherapy and in hadrontherapy, it is crucial to minimize the occurrence of geometrical deviations with respect to the treatment plan in each treatment session. To this end, point-based infrared (IR) optical tracking for patient set-up quality assessment is performed. Such tracking depends on external fiducial points placement. The main purpose of our work is to propose a new algorithm based on simulated annealing and augmented Lagrangian pattern search (SAPS), which is able to take into account prior knowledge, such as spatial constraints, during the optimization process. Material and methods The SAPS algorithm was tested on data related to head and neck and pelvic cancer patients, and that were fitted with external surface markers for IR optical tracking applied for patient set-up preliminary correction. The integrated algorithm was tested considering optimality measures obtained with Computed Tomography (CT) images (i.e. the ratio between the so-called target registration error and fiducial registration error, TRE/FRE) and assessing the marker spatial distribution. Comparison has been performed with randomly selected marker configuration and with the GETS algorithm (Genetic Evolutionary Taboo Search), also taking into account the presence of organs at risk. Results The results obtained with SAPS highlight improvements with respect to the other approaches: (i) TRE/FRE ratio decreases; (ii) marker distribution satisfies both marker visibility and spatial constraints. We have also investigated how the TRE/FRE ratio is influenced by the number of markers, obtaining significant TRE/FRE reduction with respect to the random configurations, when a high number of markers is used. Conclusions The SAPS algorithm is a valuable strategy for fiducial configuration optimization in IR optical tracking applied for patient set-up error detection and correction in radiation therapy, showing that taking into account prior knowledge is valuable in this optimization process. Further work will be focused on the computational optimization of the SAPS algorithm toward fast point-of-care applications. | Optimal marker placement in hadrontherapy: Intelligent optimization strategies with augmented Lagrangian pattern search |
S1532046414001993 | Creating electronic health records that support the uniquely complex and varied needs of healthcare presents formidable challenges. To address some of these challenges we created a new model for healthcare information systems, embodied in MedWISE (Medical Widget-based Information Sharing Environment), a widget-based highly configurable electronic health record (EHR) platform. Founded on the idea that providing clinician users with greater control of the EHR may result in greater fit to user needs and preferences, MedWISE allows drag/drop user configurations and the sharing of user-created elements such as custom laboratory result panels and user-created interface tabs. After reviewing the current state of EHR configurability, we describe the philosophical, theoretical and practical rationales for our model, and the specific functionality of MedWISE. The alternative approach may have several advantages for human–computer interaction, efficiency, cognition, and fit of EHR tools to different contexts and tasks. We discuss potential issues raised by this approach. | The clinician in the Driver’s Seat: Part 1 – A drag/drop user-composable electronic health record platform
S1532046414002007 | Background Therapy for certain medical conditions occurs in a stepwise fashion, where one medication is recommended as initial therapy and other medications follow. Sequential pattern mining is a data mining technique used to identify patterns of ordered events. Objective To determine whether sequential pattern mining is effective for identifying temporal relationships between medications and accurately predicting the next medication likely to be prescribed for a patient. Design We obtained claims data from Blue Cross Blue Shield of Texas for patients prescribed at least one diabetes medication between 2008 and 2011, and divided these into a training set (90% of patients) and test set (10% of patients). We applied the CSPADE algorithm to mine sequential patterns of diabetes medication prescriptions both at the drug class and generic drug level and ranked them by the support statistic. We then evaluated the accuracy of predictions made for which diabetes medication a patient was likely to be prescribed next. Results We identified 161,497 patients who had been prescribed at least one diabetes medication. We were able to mine stepwise patterns of pharmacological therapy that were consistent with guidelines. Within three attempts, we were able to predict the medication prescribed for 90.0% of patients when making predictions by drug class, and for 64.1% when making predictions at the generic drug level. These results were stable under 10-fold cross validation, ranging from 89.1%–90.5% at the drug class level and 63.5–64.9% at the generic drug level. Using 1 or 2 items in the patient’s medication history led to more accurate predictions than not using any history, but using the entire history was sometimes worse. Conclusion Sequential pattern mining is an effective technique to identify temporal relationships between medications and can be used to predict next steps in a patient’s medication regimen. Accurate predictions can be made without using the patient’s entire medication history. | The use of sequential pattern mining to predict next prescribed medications |
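A much simpler first-order counting model than the cSPADE mining used in the study, shown here only to illustrate next-medication prediction from ordered prescription sequences; the drug sequences are invented.

```python
# Predict the next medication from the last prescribed drug by counting
# observed transitions in ordered prescription histories.
from collections import Counter, defaultdict

histories = [
    ["metformin", "sulfonylurea", "insulin"],
    ["metformin", "sulfonylurea"],
    ["metformin", "dpp4_inhibitor", "insulin"],
    ["metformin", "sulfonylurea", "insulin"],
]

follows = defaultdict(Counter)
for seq in histories:
    for prev, nxt in zip(seq, seq[1:]):
        follows[prev][nxt] += 1

def predict_next(last_drug, k=3):
    """Return up to k candidate next drugs, most frequent first."""
    return [drug for drug, _ in follows[last_drug].most_common(k)]

print(predict_next("metformin"))     # ['sulfonylurea', 'dpp4_inhibitor']
print(predict_next("sulfonylurea"))  # ['insulin']
```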
S1532046414002020 | Purpose Although tailoring health education messages to individual characteristics of patients has shown promising results, most patient education materials still take a one-size-fits-all approach. The aim of this study was to develop a method for tailoring health education messages to patients’ preferences for various message features, using the concept of personas. This is a preliminary study focused on education for coronary heart disease (CHD) patients. Methods This study used a three-step approach. First, we created personas by (i) performing k-means cluster analysis on data from an online survey that assessed the preferences of 213 CHD patients for various message features and, (ii) creating a vivid description of the preferences per patient cluster in an iterative process with the research team. Second, we developed adaptation rules to tailor existing educational messages to the resulting personas. Third, we conducted a pilot validation by adapting nine existing educational messages to each of the personas. These messages and the resulting personas were then presented to a separate group of 38 CHD patients who visited the cardiology outpatient clinic. They were first asked to choose their most preferred, second most preferred, and least preferred persona. Subsequently, they were asked to rate three of the adapted messages; one for every of the persona choices. Results We created five personas that pertained to five patient clusters. Personas varied mainly on preferences for medical or lay language, current or future temporal perspective, and including or excluding explicit health risks. Fifty-five different adaptation rules were developed, primarily describing adaptations to the message’s perspective, level of detail, sentence structure, and terminology. Most participants in the validation study could identify with one of the five personas, although some of them found it hard to choose. On average, 68.5% of all participants rated the messages that matched their most preferred persona more positively than, or in the same way as, the messages that matched their least preferred persona. Conclusions The persona-based method developed in this study can be used to create a manageable set of patient-centered tailored messages, while additionally using the developed personas to assess patients’ preferences. | Using personas to tailor educational messages to the preferences of coronary heart disease patients |
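Step one of the persona method above (clustering patients by their message-feature preferences) can be sketched as follows; the number of clusters matches the abstract, but the preference features and ratings are randomly generated placeholders.

```python
# k-means clustering of preference ratings; each cluster becomes one persona.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# rows = patients, columns = preference ratings on a 1-5 scale
# (e.g. lay language, future perspective, explicit risks)
preferences = rng.integers(1, 6, size=(213, 3))

kmeans = KMeans(n_clusters=5, n_init=10, random_state=1).fit(preferences)
print(np.bincount(kmeans.labels_))   # cluster sizes -> one persona per cluster
print(kmeans.cluster_centers_)       # average preferences describing each persona
```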
S1532046414002032 | Travel distance to medical care has been assessed using a variety of geographic methods. Network analyses are less common, but may generate more accurate estimates of travel costs. We compared straight-line distances and driving distance, as well as average drive time and travel time on a public transit network for 1789 persons diagnosed with HIV between 2010 and 2012 to identify differences overall, and by distinct geographic areas of Philadelphia. Paired t-tests were used to assess differences across methods, and analysis of variance was used to assess between-group differences. Driving distances were significantly longer than straight-line distances (p <0.001) and transit times were significantly longer than driving times (p <0.001). Persons living in the northeast section of the city traveled greater distances, and at greater cost of time and effort, than persons in all other areas of the city (p <0.001). Persons living in the northwest section of the city traveled farther and longer than all other areas except the northeast (p <0.0001). Network analyses that include public transit will likely produce a more realistic estimate of the travel costs, and may improve models to predict medical care outcomes. | Comparison of geographic methods to assess travel patterns of persons diagnosed with HIV in Philadelphia: How close is close enough? |
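The paired comparisons of travel metrics described above boil down to paired t-tests; the sketch below uses invented distances in kilometres.

```python
# Paired t-test comparing driving distance vs. straight-line distance
# for the same individuals.
from scipy.stats import ttest_rel

straight_line = [3.1, 5.4, 2.2, 7.9, 4.0, 6.3]
driving       = [4.0, 6.8, 2.9, 9.5, 5.1, 7.7]

t_stat, p_value = ttest_rel(driving, straight_line)
print(t_stat, p_value)
```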
S1532046414002044 | Patient monitors in modern hospitals have become ubiquitous, but they generate an excessive number of false alarms, causing alarm fatigue. Our previous work showed that combinations of frequently co-occurring monitor alarms, called SuperAlarm patterns, were capable of predicting in-hospital code blue events at a lower alarm frequency. In the present study, we extend the conceptual domain of a SuperAlarm to incorporate laboratory test results along with monitor alarms so as to build an integrated data set to mine SuperAlarm patterns. We propose two approaches to integrate monitor alarms with laboratory test results and use a maximal frequent itemsets mining algorithm to find SuperAlarm patterns. Under an acceptable false positive rate FPRmax, optimal parameters including the minimum support threshold and the length of time window for the algorithm to find the combinations of monitor alarms and laboratory test results are determined based on a 10-fold cross-validation set. SuperAlarm candidates are generated under these optimal parameters. The final SuperAlarm patterns are obtained by further removing the candidates with false positive rate > FPRmax. The performance of SuperAlarm patterns is assessed using an independent test data set. First, we calculate the sensitivity with respect to prediction window and the sensitivity with respect to lead time. Second, we calculate the false SuperAlarm ratio (ratio of the hourly number of SuperAlarm triggers for control patients to that of the monitor alarms, or that of regular monitor alarms plus laboratory test results if the SuperAlarm patterns contain laboratory test results) and the work-up to detection ratio, WDR (ratio of the number of patients triggering any SuperAlarm patterns to that of code blue patients triggering any SuperAlarm patterns). The experiment results demonstrate that when varying FPRmax between 0.02 and 0.15, the SuperAlarm patterns composed of monitor alarms along with the last two laboratory test results are triggered at least once for [56.7–93.3%] of code blue patients within a 1-h prediction window before code blue events and for [43.3–90.0%] of code blue patients at least 1-h ahead of code blue events. However, the hourly number of these SuperAlarm patterns occurring in control patients is only [2.0–14.8%] of that of regular monitor alarms with WDR varying between 2.1 and 6.5 in a 12-h window. For a given FPRmax threshold, the SuperAlarm set generated from the integrated data set has higher sensitivity and lower WDR than the SuperAlarm set generated from the regular monitor alarm data set. In addition, McNemar’s test also shows that the performance of the SuperAlarm set from the integrated data set is significantly different from that of the SuperAlarm set from the regular monitor alarm data set. We therefore conclude that the SuperAlarm patterns generated from the integrated data set are better at predicting code blue events. | Integrating monitor alarms with laboratory test results to enhance patient deterioration prediction
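A toy illustration of mining maximal frequent itemsets from alarm/lab "transactions" within a time window; the study uses a dedicated maximal-itemset miner on real monitor data, whereas the events and support threshold here are invented.

```python
# Brute-force maximal frequent itemsets over small alarm/lab event windows.
from itertools import combinations

windows = [
    {"HR_high", "SpO2_low", "lactate_high"},
    {"HR_high", "SpO2_low"},
    {"HR_high", "SpO2_low", "lactate_high"},
    {"RR_high"},
]
min_support = 2

items = sorted(set.union(*windows))
frequent = []
for r in range(1, len(items) + 1):
    for combo in combinations(items, r):
        s = set(combo)
        if sum(s <= w for w in windows) >= min_support:  # support = windows containing s
            frequent.append(s)

# Keep only maximal itemsets: those not contained in a larger frequent itemset.
maximal = [s for s in frequent if not any(s < t for t in frequent)]
print(maximal)  # e.g. [{'HR_high', 'SpO2_low', 'lactate_high'}]
```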
S1532046414002172 | Systems approaches to analyzing disease phenotype networks in combination with protein functional interaction networks have great potential in illuminating disease pathophysiological mechanisms. While many genetic networks are readily available, disease phenotype networks remain largely incomplete. In this study, we built a large-scale Disease Manifestation Network (DMN) from 50,543 highly accurate disease-manifestation semantic relationships in the United Medical Language System (UMLS). Our new phenotype network contains 2305 nodes and 373,527 weighted edges to represent the disease phenotypic similarities. We first compared DMN with the networks representing genetic relationships among diseases, and demonstrated that the phenotype clustering in DMN reflects common disease genetics. Then we compared DMN with a widely-used disease phenotype network in previous gene discovery studies, called mimMiner, which was extracted from the textual descriptions in Online Mendelian Inheritance in Man (OMIM). We demonstrated that DMN contains different knowledge from the existing phenotype data source. Finally, a case study on Marfan syndrome further proved that DMN contains useful information and can provide leads to discover unknown disease causes. Integrating DMN in systems approaches with mimMiner and other data offers the opportunities to predict novel disease genetics. We made DMN publicly available at nlp/case.edu/public/data/DMN. | Comparative analysis of a novel disease phenotype network based on clinical manifestations |
S1532046414002202 | Genotyping Next Generation Sequencing (NGS) data of a diploid genome aims to assign the zygosity of identified variants through comparison with a reference genome. Current methods typically employ probabilistic models that rely on the pileup of bases at each locus and on a priori knowledge. We present a new algorithm, called Kimimila (KInetic Modeling based on InforMation theory to Infer Labels of Alleles), which is able to assign reads to alleles by using a distance geometry approach and to infer the variant genotypes accurately, without any kind of assumption. The performance of the model has been assessed on simulated and real data of the 1000 Genomes Project and the results have been compared with several commonly used genotyping methods, i.e., GATK, Samtools, VarScan, FreeBayes and Atlas2. Although our algorithm does not make use of a priori knowledge, the percentage of correctly genotyped variants is comparable to that of these algorithms. Furthermore, our method allows the user to split the reads pool depending on the inferred allele origin. | A kinetic model-based algorithm to classify NGS short reads by their allele origin
S1532046414002214 | Anticancer drug-associated side effect knowledge often exists in multiple heterogeneous and complementary data sources. A comprehensive anticancer drug–side effect (drug–SE) relationship knowledge base is important for computation-based drug target discovery, drug toxicity prediction and drug repositioning. In this study, we present a two-step approach by combining table classification and relationship extraction to extract drug–SE pairs from a large number of high-profile oncological full-text articles. The data consists of 31,255 tables downloaded from the Journal of Oncology (JCO). We first trained a statistical classifier to classify tables into SE-related and -unrelated categories. We then extracted drug–SE pairs from SE-related tables. We compared drug side effect knowledge extracted from JCO tables to that derived from FDA drug labels. Finally, we systematically analyzed relationships between anti-cancer drug-associated side effects and drug-associated gene targets, metabolism genes, and disease indications. The statistical table classifier is effective in classifying tables into SE-related and -unrelated categories (precision: 0.711; recall: 0.941; F1: 0.810). We extracted a total of 26,918 drug–SE pairs from SE-related tables with a precision of 0.605, a recall of 0.460, and an F1 of 0.520. Drug–SE pairs extracted from JCO tables are largely complementary to those derived from FDA drug labels; as many as 84.7% of the pairs extracted from JCO tables have not been included in a side effect database constructed from FDA drug labels. Side effects associated with anticancer drugs positively correlate with drug target genes, drug metabolism genes, and disease indications. | Combining automatic table classification and relationship extraction in extracting anticancer drug–side effect pairs from full-text articles
S1532046414002226 | User-composable approaches provide clinicians with the control to design and assemble information elements on screen via drag/drop. They hold considerable promise for enhancing the electronic-health-records (EHRs) user experience. We previously described this novel approach to EHR design and our illustrative system, MedWISE. The purpose of this paper is to describe clinician users’ intelligent uses of space during completion of real patient case studies in a laboratory setting using MedWISE. Thirteen clinicians at a quaternary academic medical center used the system to review four real patient cases. We analyzed clinician utterances, behaviors, screen layouts (i.e., interface designs), and their perceptions associated with completing patient case studies. Clinicians effectively used the system to review all cases. Two coding schemata pertaining to human–computer interaction and diagnostic reasoning were used to analyze the data. Users adopted three main interaction strategies: rapidly gathering items on screen and reviewing (‘opportunistic selection’ approach); creating highly structured screens (‘structured’ approach); and interacting with small groups of items in sequence as their case review progressed (‘dynamic stage’ approach). They also used spatial arrangement in ways predicted by theory and research on workplace spatial arrangement. This includes assignment of screen regions for particular purposes (24% of spatial codes), juxtaposition to facilitate calculation or other cognitive tasks (‘epistemic action’), and grouping elements with common meanings or relevance to the diagnostic facets of the case (20.3%). A left-to-right progression of orienting materials, data, and action items or reflection space was a commonly observed pattern. Widget selection was based on user assessment of what information was useful or relevant. We developed and tested an illustrative system that gives clinicians greater control of the EHR, and demonstrated its feasibility for case review by typical clinicians. Producing the simplifying inventions, such as user-composable platforms that shift control to the user, may serve to promote productive EHR use and enhance its value as an instrument of patient care. | The clinician in the driver’s seat: Part 2 – Intelligent uses of space in a drag/drop user-composable electronic health record |
S1532046414002238 | Introduction Existing methods such as correlation plots and cluster heat maps are insufficient in the visual exploration of multiple associations between genetics and phenotype, which is of importance to achieve a better understanding of the pathophysiology of psychiatric and other illnesses. The implementation of a combined presentation of effect size and statistical significance in a graphical method, added to the ordering of the variables based on the effect-ordered data display principle was deemed useful by the authors to facilitate in the process of recognizing meaningful patterns in these associations. Materials and methods The requirements, analyses and graphical presentation of the feature-expression heat map are described. The graphs display associations of two sets of ordered variables where a one-way direction is assumed. The associations are depicted as circles representing a combination of effect size (color) and statistical significance (radius). Results An example dataset is presented and relation to other methods, limitations, areas of application and possible future enhancements are discussed. Conclusion The feature-expression heat map is a useful graphical instrument to explore associations in complex biological systems where one-way direction is assumed, such as genotype-phenotype pathophysiological models. | Feature-expression heat maps – A new visual method to explore complex associations between two variable sets |
S153204641400224X | Risk stratification is instrumental to modern clinical decision support systems. Comprehensive risk stratification should be able to provide the clinicians with not only the accurate assessment of a patient’s risk but also the clinical context to be acted upon. However, existing risk stratification techniques mainly focus on predicting the risk score for individual patients; at the cohort level, they offer little insight beyond a flat score-based segmentation. This essentially reduces a patient to a score and thus removes him/her from his/her clinical context. To address this limitation, in this paper we propose a bilinear model for risk stratification that simultaneously captures the three key aspects of risk stratification: (1) it predicts the risk of each individual patient; (2) it stratifies the patient cohort based on not only the risk score but also the clinical characteristics; and (3) it embeds all patients into clinical contexts with clear interpretation. We apply our model to a cohort of 4977 patients, 1127 among which were diagnosed with Congestive Heart Failure (CHF). We demonstrate that our model cannot only accurately predict the onset risk of CHF but also provide rich and actionable clinical insights into the patient cohort. | Towards actionable risk stratification: A bilinear approach |
S1532046414002251 | Provide a detailed understanding of the information workflow processes related to translating health promotion materials for limited English proficiency individuals in order to inform the design of context-driven machine translation (MT) tools for public health (PH). We applied a cognitive work analysis framework to investigate the translation information workflow processes of two large health departments in Washington State. Researchers conducted interviews, performed a task analysis, and validated results with PH professionals to model translation workflow and identify functional requirements for a translation system for PH. The study resulted in a detailed description of work related to translation of PH materials, an information workflow diagram, and a description of attitudes towards MT technology. We identified a number of themes that hold design implications for incorporating MT in PH translation practice. A PH translation tool prototype was designed based on these findings. This study underscores the importance of understanding the work context and information workflow for which systems will be designed. Based on themes and translation information workflow processes, we identified key design guidelines for incorporating MT into PH translation work. Primary amongst these is that MT should be followed by human review for translations to be of high quality and for the technology to be adopted into practice. The time and costs of creating multilingual health promotion materials are barriers to translation. PH personnel were interested in MT’s potential to improve access to low-cost translated PH materials, but expressed concerns about ensuring quality. We outline design considerations and a potential machine translation tool to best fit MT systems into PH practice. | Modeling workflow to design machine translation applications for public health practice |
S1532046414002263 | Objectives To describe the IMI EHR4CR project which is designing and developing, and aims to demonstrate, a scalable, widely acceptable and efficient approach to interoperability between EHR systems and clinical research systems. Methods The IMI EHR4CR project is combining and extending several previously isolated state-of-the-art technical components through a new approach to develop a platform for reusing EHR data to support medical research. This will be achieved through multiple but unified initiatives across different major disease areas (e.g. cardiovascular, cancer) and clinical research use cases (protocol feasibility, patient identification and recruitment, clinical trial execution and serious adverse event reporting), with various local and national stakeholders across several countries and therefore under various legal frameworks. Results An initial instance of the platform has been built, providing communication, security and terminology services to the eleven participating hospitals and ten pharmaceutical companies located in seven European countries. Proof-of-concept demonstrators have been built and evaluated for the protocol feasibility and patient recruitment scenarios. The specifications of the clinical trial execution and the adverse event reporting scenarios have been documented and reviewed. Conclusions Through a combination of a consortium that brings collectively many years of experience from previous relevant EU projects and of the global conduct of clinical trials, of an approach to ethics that engages many important stakeholders across Europe to ensure acceptability, of a robust iterative design methodology for the platform services that is anchored on requirements of an underlying Service Oriented Architecture that has been designed to be scalable and adaptable, EHR4CR could be well placed to deliver a sound, useful and well accepted pan-European solution for the reuse of hospital EHR data to support clinical research studies. | Using electronic health records for clinical research: The case of the EHR4CR project |
S1532046414002287 | Introduction Physicians’ adoption seems to be a significant issue when comprehensive implementation of Electronic Medical Records (EMR) is considered. This study was conducted to determine the organizational contextual factors affecting physicians’ adoption of EMR. Methods This was a descriptive-analytical study in which a sample of 330 physicians working in hospitals affiliated to Tehran University of Medical Sciences was selected. Physicians’ attitudes toward EMRs’ adoption have been assessed by a conceptual path model of the Technology Acceptance Model (TAM) and organizational context variables. Collected data were analyzed by SPSS16 using regression analysis. The final model was tested by structural equation modeling (SEM) and represented by SPSS–AMOS, structural equation modeling software. Results The results suggest that the modified proposed conceptual path model explains about 56% of the variance of EMRs’ adoption. The findings also showed that perceived usefulness (PU), perceived ease of use (PEOU), management support, physicians’ involvement, physicians’ autonomy, and the doctor–patient relationship have direct and significant effects on physicians’ attitudes toward EMRs’ adoption. However, training showed no significant effect on PU and PEOU. Conclusion The present study acknowledged that a considerable part of physicians’ attitude toward EMRs’ adoption is controlled by organizational contextual factors. These factors should subsequently be the major concern of health organizations and health policy makers. | The effects of organizational contextual factors on physicians’ attitude toward adoption of Electronic Medical Records
S1532046414002299 | Objective To develop a probabilistic model for discovering and quantifying determinants of outbreak detection and to use the model to predict detection performance for new outbreaks. Materials and methods We used an existing software platform to simulate waterborne disease outbreaks of varying duration and magnitude. The simulated data were overlaid on real data from visits to emergency departments in Montreal for gastroenteritis. We analyzed the combined data using biosurveillance algorithms, varying their parameters over a wide range. We then applied structure and parameter learning algorithms to the resulting data set to build a Bayesian network model for predicting detection performance as a function of outbreak characteristics and surveillance system parameters. We evaluated the predictions of this model through 5-fold cross-validation. Results The model predicted performance metrics of commonly used outbreak detection methods with an accuracy greater than 0.80. The model also quantified the influence of different outbreak characteristics and parameters of biosurveillance algorithms on detection performance in practically relevant surveillance scenarios. In addition to identifying characteristics expected a priori to have a strong influence on detection performance, such as the alerting threshold and the peak size of the outbreak, the model suggested an important role for other algorithm features, such as adjustment for weekly patterns. Conclusion We developed a model that accurately predicts how characteristics of disease outbreaks and detection methods will influence detection. This model can be used to compare the performance of detection methods under different surveillance scenarios, to gain insight into which characteristics of outbreaks and biosurveillance algorithms drive detection performance, and to guide the configuration of surveillance systems. | Quantifying the determinants of outbreak detection performance through simulation and machine learning
S1532046414002305 | Chronic diseases patients often require constant dietary control that involves complicated interaction among factors such as the illness stage, the patient’s physical condition, the patient’s activity level, the amount of food intake, and key nutrient restrictions. This study aims to integrate multiple knowledge sources for problem solving modeling and knowledge-based system (KBS) development. A chronic kidney disease dietary consultation system is constructed by using Web Ontology Language (OWL) and Semantic Web Rule Language (SWRL) to demonstrate how a KBS approach can achieve sound problem solving modeling and effective knowledge inference. For system evaluation, information from 84 case patients is used to evaluate the performance of the system in recommending appropriate food serving amounts from different food groups for balanced key nutrient ingestion. The results show that, excluding interference factors, the OWL-based KBS can achieve accurate problem solving reasoning while maintaining knowledge base shareability and extensibility. | A chronic disease dietary consultation system using OWL-based ontologies and semantic rules |
S1532046414002317 | Objective Automatic detection of adverse drug reaction (ADR) mentions from text has recently received significant interest in pharmacovigilance research. Current research focuses on various sources of text-based information, including social media—where enormous amounts of user-posted data are available, which have the potential for use in pharmacovigilance if collected and filtered accurately. The aims of this study are: (i) to explore natural language processing (NLP) approaches for generating useful features from text, and utilizing them in optimized machine learning algorithms for automatic classification of ADR assertive text segments; (ii) to present two data sets that we prepared for the task of ADR detection from user-posted internet data; and (iii) to investigate if combining training data from distinct corpora can improve automatic classification accuracies. Methods One of our three data sets contains annotated sentences from clinical reports, and the two other data sets, built in-house, consist of annotated posts from social media. Our text classification approach relies on generating a large set of features, representing semantic properties (e.g., sentiment, polarity, and topic), from short text nuggets. Importantly, using our expanded feature sets, we combine training data from different corpora in attempts to boost classification accuracies. Results Our feature-rich classification approach performs significantly better than previously published approaches with ADR class F-scores of 0.812 (previously reported best: 0.770), 0.538 and 0.678 for the three data sets. Combining training data from multiple compatible corpora further improves the ADR F-scores for the in-house data sets to 0.597 (improvement of 5.9 units) and 0.704 (improvement of 2.6 units) respectively. Conclusions Our research results indicate that using advanced NLP techniques for generating information-rich features from text can significantly improve classification accuracies over existing benchmarks. Our experiments illustrate the benefits of incorporating various semantic features such as topics, concepts, sentiments, and polarities. Finally, we show that integration of information from compatible corpora can significantly improve classification performance. This form of multi-corpus training may be particularly useful in cases where data sets are heavily imbalanced (e.g., social media data), and may reduce the time and costs associated with the annotation of data in the future. | Portable automatic text classification for adverse drug reaction detection via multi-corpus training
S1532046414002330 | Exploitation of recombinant DNA and sequencing technologies has led to a new concept in vaccination in which isolated epitopes, capable of stimulating a specific immune response, have been identified and used to achieve advanced vaccine formulations, replacing those constituted by whole-pathogen formulations. In this context, bioinformatics approaches play a critical role in analyzing multiple genomes to select the protective epitopes in silico. It is conceived that cocktails of defined epitopes or chimeric protein arrangements, including the target epitopes, may provide a rational design capable of eliciting convenient humoral or cellular immune responses. This review presents a comprehensive compilation of the most advantageous online immunological software and searchable databases, in order to facilitate the design and development of vaccines. An outlook on how these tools are supporting vaccine development is presented. HIV and influenza have been taken as examples of promising developments in vaccination against hypervariable viruses. Perspectives in this field are also envisioned. | An overview of bioinformatics tools for epitope prediction: Implications on vaccine development
S1532046414002342 | Identifying candidate disease genes is important to improve medical care. However, this task is challenging in the post-genomic era. Several computational approaches have been proposed to prioritize potential candidate genes relying on protein–protein interaction (PPI) networks. However, the experimental PPI network is usually liable to contain a number of spurious interactions. In this paper, we construct a reliable heterogeneous network by fusing multiple networks, a PPI network reconstructed by topological similarity, a phenotype similarity network and known associations between diseases and genes. We then devise a random walk-based algorithm on the reliable heterogeneous network called RWRHN to prioritize potential candidate genes for inherited diseases. The results of leave-one-out cross-validation experiments show that the RWRHN algorithm has better performance than the RWRH and CIPHER methods in inferring disease genes. Furthermore, RWRHN is used to predict novel causal genes for 16 diseases, including breast cancer, diabetes mellitus type 2, and prostate cancer, as well as to detect disease-related protein complexes. The top predictions are supported by literature evidence. | Prioritization of potential candidate disease genes by topological similarity of protein–protein interaction network and phenotype data |
S1532046414002354 | Predictive models built using temporal data in electronic health records (EHRs) can potentially play a major role in improving management of chronic diseases. However, these data present a multitude of technical challenges, including irregular sampling of data and varying length of available patient history. In this paper, we describe and evaluate three different approaches that use machine learning to build predictive models using temporal EHR data of a patient. The first approach is a commonly used non-temporal approach that aggregates values of the predictors in the patient’s medical history. The other two approaches exploit the temporal dynamics of the data. The two temporal approaches vary in how they model temporal information and handle missing data. Using data from the EHR of Mount Sinai Medical Center, we learned and evaluated the models in the context of predicting loss of estimated glomerular filtration rate (eGFR), the most common assessment of kidney function. Our results show that incorporating temporal information in patient’s medical history can lead to better prediction of loss of kidney function. They also demonstrate that exactly how this information is incorporated is important. In particular, our results demonstrate that the relative importance of different predictors varies over time, and that using multi-task learning to account for this is an appropriate way to robustly capture the temporal dynamics in EHR data. Using a case study, we also demonstrate how the multi-task learning based model can yield predictive models with better performance for identifying patients at high risk of short-term loss of kidney function. | Incorporating temporal EHR data in predictive models for risk stratification of renal function deterioration |
S1532046414002366 | Objective Investigating the contacts of a newly diagnosed tuberculosis (TB) case to prevent TB transmission is a core public health activity. In the context of limited resources, it is often necessary to prioritize investigation when multiple cases are reported. Public health personnel currently prioritize contact investigation intuitively based on past experience. Decision-support software using patient attributes to predict the probability of a TB case being involved in recent transmission could aid in this prioritization, but a prediction model is needed to drive such software. Methods We developed a logistic regression model using the clinical and demographic information of TB cases reported to Montreal Public Health between 1997 and 2007. The reference standard for transmission was DNA fingerprint analysis. We measured the predictive performance, in terms of sensitivity, specificity, negative predictive value, positive predictive value, the Receiver Operating Characteristic (ROC) curve and the Area Under the ROC (AUC). Results Among 1552 TB cases enrolled in the study, 314 (20.2%) were involved in recent transmission. The AUC of the model was 0.65 (95% confidence interval: 0.61–0.68), which is significantly better than random prediction. The maximized values of sensitivity and specificity on the ROC were 0.53 and 0.67, respectively. Conclusions The characteristics of a TB patient reported to public health can be used to predict whether the newly diagnosed case is associated with recent transmission as opposed to reactivation of latent infection. | Towards probabilistic decision support in public health practice: Predicting recent transmission of tuberculosis from patient attributes |
S1532046414002378 | Background Pharmacovigilance is the activity related to the collection, analysis and prevention of adverse drug reactions (ADRs) induced by drugs or biologics. The detection of adverse drug reactions is performed using statistical algorithms and groupings of ADR terms from the MedDRA (Medical Dictionary for Drug Regulatory Activities) terminology. Standardized MedDRA Queries (SMQs) are groupings which have become a standard for assisting the retrieval and evaluation of MedDRA-coded ADR reports worldwide. Currently 84 SMQs have been created, while several important safety topics are not yet covered. Creation of SMQs is a long and tedious process performed by experts. It relies on manual analysis of MedDRA in order to find all the relevant terms to be included in an SMQ. Our objective is to propose an automatic method for assisting the creation of SMQs using the clustering of terms which are semantically similar. Methods The experimental method relies on a specific semantic resource, and also on semantic distance algorithms and clustering approaches. We perform several experiments in order to define the optimal parameters. Results Our results show that the proposed method can assist the creation of SMQs and make this process faster and more systematic. The average performance of the method is a precision of 59% and a recall of 26%. The correlation of the results obtained is 0.72 against the medical doctors’ judgments and 0.78 against the medical coders’ judgments. Conclusions These results and additional evaluation indicate that the generated clusters can be efficiently used for the detection of pharmacovigilance signals, as they provide better signal detection than the existing SMQs. | Semantic distance-based creation of clusters of pharmacovigilance terms and their evaluation
S153204641400238X | The focus of this paper is on the challenges and opportunities presented by developing scenarios of use for interactive medical devices. Scenarios are integral to the international standard for usability engineering of medical devices (IEC 62366:2007), and are also applied to the development of health software (draft standard IEC 82304-1). The 62366 standard lays out a process for mitigating risk during normal use (i.e. use as per the instructions, or accepted medical practice). However, this begs the question of whether “real use” (that which occurs in practice) matches “normal use”. In this paper, we present an overview of the product lifecycle and how it impacts on the type of scenario that can be practically applied. We report on the development and testing of a set of scenarios intended to inform the design of infusion pumps based on “real use”. The scenarios were validated by researchers and practitioners experienced in clinical practice, and their utility was assessed by developers and practitioners representing different stages of the product lifecycle. These evaluations highlighted previously unreported challenges and opportunities for the use of scenarios in this context. Challenges include: integrating scenario-based design with usability engineering practice; covering the breadth of uses of infusion devices; and managing contradictory evidence. Opportunities included scenario use beyond design to guide marketing, to inform purchasing and as resources for training staff. This study exemplifies one empirically grounded approach to communicating and negotiating the realities of practice. | Usability standards meet scenario-based design: Challenges and opportunities |
S1532046414002391 | Background Invasive fungal diseases (IFDs) are associated with considerable health and economic costs. Surveillance of the more diagnostically challenging invasive fungal diseases, specifically of the sino-pulmonary system, is not feasible for many hospitals because case finding is a costly and labour intensive exercise. We developed text classifiers for detecting such IFDs from free-text radiology (CT) reports, using machine-learning techniques. Method We obtained free-text reports of CT scans performed over a specific hospitalisation period (2003–2011), for 264 IFD and 289 control patients from three tertiary hospitals. We analysed IFD evidence at patient, report, and sentence levels. Three infectious disease experts annotated the reports of 73 IFD-positive patients for language suggestive of IFD at sentence level, and graded the sentences as to whether they suggested or excluded the presence of IFD. Reliable agreement between annotators was obtained and this was used as training data for our classifiers. We tested a variety of Machine Learning (ML), rule based, and hybrid systems, with feature types including bags of words, bags of phrases, and bags of concepts, as well as report-level structured features. Evaluation was carried out over a robust framework with separate Development and Held-Out datasets. Results The best systems (using Support Vector Machines) achieved very high recall at report- and patient-levels over unseen data: 95% and 100% respectively. Precision at report-level over held-out data was 71%; however, most of the associated false-positive reports (53%) belonged to patients who had a previous positive report appropriately flagged by the classifier, reducing negative impact in practice. Conclusions Our machine learning application holds the potential for developing systematic IFD surveillance systems for hospital populations. | Automatic detection of patients with invasive fungal disease from free-text computed tomography (CT) scans |
S1532046414002408 | Scarce healthcare resources require carefully made policies ensuring optimal bed allocation, quality healthcare service, and adequate financial support. This paper proposes a complex analysis of the resource allocation in a hospital department by integrating in the same framework a queuing system, a compartmental model, and an evolutionary-based optimization. The queuing system shapes the flow of patients through the hospital, the compartmental model offers a feasible structure of the hospital department in accordance to the queuing characteristics, and the evolutionary paradigm provides the means to optimize the bed-occupancy management and the resource utilization using a genetic algorithm approach. The paper also focuses on a “What-if analysis” providing a flexible tool to explore the effects on the outcomes of the queuing system and resource utilization through systematic changes in the input parameters. The methodology was illustrated using a simulation based on real data collected from a geriatric department of a hospital from London, UK. In addition, the paper explores the possibility of adapting the methodology to different medical departments (surgery, stroke, and mental illness). Moreover, the paper also focuses on the practical use of the model from the healthcare point of view, by presenting a simulated application. | Improving hospital bed occupancy and resource utilization through queuing modeling and evolutionary computation |
S1532046414002597 | Objective To precisely define the utility of tests in a clinical pathway through data-driven analysis of the electronic medical record (EMR). Materials and methods The information content was defined in terms of the entropy of the expected value of the test related to a given outcome. A kernel density classifier was used to estimate the necessary distributions. To validate the method, we used data from the EMR of the gastrointestinal department at a university hospital. Blood tests from patients undergoing gastrointestinal surgery were analyzed with respect to a second surgery within 30 days of the index surgery. Results The information content is clearly reflected in the patient pathway for certain combinations of tests and outcomes. C-reactive protein tests coupled to anastomosis leakage, a severe complication, show a clear pattern of information gain through the patient trajectory, where the greatest gain from the test is 3–4 days post index surgery. Discussion We have defined the information content in a data-driven and information theoretic way such that the utility of a test can be precisely defined. The results reflect clinical knowledge. In the case we studied, the tests carry little negative impact. The general approach can be expanded to cases that carry a substantial negative impact, such as in certain radiological techniques. | Data-driven approach for assessing utility of medical tests using electronic medical records
S1532046414002627 | Evaluating and selecting software packages that meet the requirements of an organization are difficult aspects of the software engineering process. Selecting the wrong open-source EMR software package can be costly and may adversely affect business processes and the functioning of the organization. This study aims to evaluate and select open-source EMR software packages based on multi-criteria decision-making. A hands-on study was performed and a set of open-source EMR software packages were implemented locally on separate virtual machines to examine the systems more closely. Several measures were specified as the evaluation basis, and the systems were selected based on a set of metric outcomes using integrated Analytic Hierarchy Process (AHP) and TOPSIS. The experimental results showed that GNUmed and OpenEMR achieve better ranking scores than the other open-source EMR software packages. | Evaluation and selection of open-source EMR software packages based on integrated AHP and TOPSIS
S1532046414002639 | Modern healthcare is being reshaped by the growth of Electronic Medical Records (EMR). Recently, these records have been shown to be of great value for building clinical prediction models. In EMR data, patients’ diseases and hospital interventions are captured through a set of diagnosis and procedure codes. These codes are usually represented in a tree form (e.g. ICD-10 tree) and the codes within a tree branch may be highly correlated. These codes can be used as features to build a prediction model and an appropriate feature selection can inform a clinician about important risk factors for a disease. Traditional feature selection methods (e.g. Information Gain, T-test, etc.) consider each variable independently and usually end up having a long feature list. Recently, Lasso and related l1-penalty based feature selection methods have become popular due to their joint feature selection property. However, Lasso is known to randomly select only one feature from a group of correlated features. This hinders clinicians from arriving at a stable feature set, which is crucial for the clinical decision making process. In this paper, we solve this problem by using a recently proposed Tree-Lasso model. Since the stability behavior of Tree-Lasso is not well understood, we study the stability behavior of Tree-Lasso and compare it with other feature selection methods. Using a synthetic and two real-world datasets (Cancer and Acute Myocardial Infarction), we show that Tree-Lasso based feature selection is significantly more stable than Lasso and comparable to other methods, e.g. Information Gain, ReliefF and T-test. We further show that, using different types of classifiers such as logistic regression, naive Bayes, support vector machines, decision trees and Random Forest, the classification performance of Tree-Lasso is comparable to Lasso and better than other methods. Our result has implications for identifying stable risk factors for many healthcare problems and therefore can potentially assist clinical decision making for accurate medical prognosis. | Stable feature selection for clinical prediction: Exploiting ICD tree structure using Tree-Lasso
S1532046414002640 | Metabolomics is an emerging field that includes ascertaining a metabolic profile from a combination of small molecules, and which has health applications. Metabolomic methods are currently applied to discover diagnostic biomarkers and to identify pathophysiological pathways involved in pathology. However, metabolomic data are complex and are usually analyzed by statistical methods. Although the methods have been widely described, most have not been either standardized or validated. Data analysis is the foundation of a robust methodology, so new mathematical methods need to be developed to assess and complement current methods. We therefore applied, for the first time, the dominance-based rough set approach (DRSA) to metabolomics data; we also assessed the complementarity of this method with standard statistical methods. Some attributes were transformed in a way allowing us to discover global and local monotonic relationships between condition and decision attributes. We used previously published metabolomics data (18 variables) for amyotrophic lateral sclerosis (ALS) and non-ALS patients. Principal Component Analysis (PCA) and Orthogonal Partial Least Square-Discriminant Analysis (OPLS-DA) allowed satisfactory discrimination (72.7%) between ALS and non-ALS patients. Some discriminant metabolites were identified: acetate, acetone, pyruvate and glutamine. The concentrations of acetate and pyruvate were also identified by univariate analysis as significantly different between ALS and non-ALS patients. DRSA correctly classified 68.7% of the cases and established rules involving some of the metabolites highlighted by OPLS-DA (acetate and acetone). Some rules identified potential biomarkers not revealed by OPLS-DA (beta-hydroxybutyrate). We also found a large number of common discriminating metabolites after Bayesian confirmation measures, particularly acetate, pyruvate, acetone and ascorbate, consistent with the pathophysiological pathways involved in ALS. DRSA provides a complementary method for improving the predictive performance of the multivariate data analysis usually used in metabolomics. This method could help in the identification of metabolites involved in disease pathogenesis. Interestingly, these different strategies mostly identified the same metabolites as being discriminant. The selection of strong decision rules with high value of Bayesian confirmation provides useful information about relevant condition–decision relationships not otherwise revealed in metabolomics data. | Comparative analysis of targeted metabolomics: Dominance-based rough set approach versus orthogonal partial least square-discriminant analysis
S1532046414002652 | Missing data arise in clinical research datasets for reasons ranging from incomplete electronic health records to incorrect trial data collection. This has an adverse effect on analysis performed with the data, but it can also affect the management of a clinical trial itself. We propose two graphical visualization schemes to aid in managing the completeness of a clinical research dataset: the binary completeness grid (BCG) for single patient observation, and the gradient completeness grid (GCG) for an entire dataset. We use these tools to manage three clinical trials. Two are ongoing observational trials, while the other is a cohort study that is complete. The completeness grids revealed unexpected patterns in our data and enabled us to identify records that should have been purged and identify missing follow-up data from sets of observations thought to be complete. Binary and gradient completeness grids provide a rapid, convenient way to visualize missing data in clinical datasets. | Visual grids for managing data completeness in clinical research datasets |
S1532046414002664 | Introduction This article explores how measures of semantic similarity and relatedness are impacted by the semantic groups to which the concepts they are measuring belong. Our goal is to determine if there are distinctions between homogeneous comparisons (where both concepts belong to the same group) and heterogeneous ones (where the concepts are in different groups). Our hypothesis is that the similarity measures will be significantly affected since they rely on hierarchical is-a relations, whereas relatedness measures should be less impacted since they utilize a wider range of relations. In addition, we also evaluate the effect of combining different measures of similarity and relatedness. Our hypothesis is that these combined measures will more closely correlate with human judgment, since they better reflect the rich variety of information humans use when assessing similarity and relatedness. Method We evaluate our method on four reference standards. Three of the reference standards were annotated by human judges for relatedness and one was annotated for similarity. Results We found significant differences in the correlation of semantic similarity and relatedness measures with human judgment, depending on which semantic groups were involved. We also found that combining a definition based relatedness measure with an information content similarity measure resulted in significant improvements in correlation over individual measures. Availability The semantic similarity and relatedness package is an open source program available from http://umls-similarity.sourceforge.net/. The reference standards are available at http://www.people.vcu.edu/~btmcinnes/downloads.html. | Evaluating semantic similarity and relatedness over the semantic grouping of clinical term pairs
S1532046414002676 | Text mining of scientific literature has been essential for setting up large public biomedical databases, which are being widely used by the research community. In the biomedical domain, the existence of a large number of terminological resources and knowledge bases (KB) has enabled a myriad of machine learning methods for different text mining related tasks. Unfortunately, KBs have not been devised for text mining tasks but for human interpretation, thus performance of KB-based methods is usually lower when compared to supervised machine learning methods. The disadvantage of supervised methods though is they require labeled training data and therefore not useful for large scale biomedical text mining systems. KB-based methods do not have this limitation. In this paper, we describe a novel method to generate word-concept probabilities from a KB, which can serve as a basis for several text mining tasks. This method not only takes into account the underlying patterns within the descriptions contained in the KB but also those in texts available from large unlabeled corpora such as MEDLINE. The parameters of the model have been estimated without training data. Patterns from MEDLINE have been built using MetaMap for entity recognition and related using co-occurrences. The word-concept probabilities were evaluated on the task of word sense disambiguation (WSD). The results showed that our method obtained a higher degree of accuracy than other state-of-the-art approaches when evaluated on the MSH WSD data set. We also evaluated our method on the task of document ranking using MEDLINE citations. These results also showed an increase in performance over existing baseline retrieval approaches. | Knowledge based word-concept model estimation and refinement for biomedical text mining |
S1532046414002688 | Motivation The primary economy-driven documentation of patient-specific information in clinical information systems leads to drawbacks in the use of these systems in daily clinical routine. Missing meta-data regarding underlying clinical workflows within the stored information is crucial for intelligent support systems. Unfortunately, there is still a lack of primary clinical needs-driven electronic patient documentation. Hence, physicians and surgeons must search hundreds of documents to find necessary patient data rather than accessing relevant information directly from the current process step. In this work, a completely new approach has been developed to enrich the existing information in clinical information systems with additional meta-data, such as the actual treatment phase from which the information entity originates. Methods Stochastic models based on Hidden Markov Models (HMMs) are used to create a mathematical representation of the underlying clinical workflow. These models are created from real-world anonymized patient data and are tailored to therapy processes for patients with head and neck cancer. Additionally, two methodologies to extend the models to improve the workflow recognition rates are presented in this work. Results A leave-one-out cross validation study was performed and achieved promising recognition rates of up to 90% with a standard deviation of 6.4%. Conclusions The method presented in this paper demonstrates the feasibility of predicting clinical workflow steps from patient-specific information as the basis for clinical workflow support, as well as for the analysis and improvement of clinical pathways. | Predicting treatment process steps from events |
S153204641400269X | The workflow model of the patient journey in a Pediatric Emergency Department (PED) seems to be an effective approach to develop an accurate and complete representation of the PED processes. This model can drive the collection of comprehensive quantitative and qualitative service delivery and patient treatment data as an evidence base for PED service planning. Our objective in this study is to identify crowded situation indicators and bottlenecks that contribute to over-crowding. The greatest source of delay in patient flow is the waiting time from the health care request, and especially from the bed request to exit from the PED for hospital admission. It represented 70% of the time that these patients spent in the PED waiting rooms. The use of real data to construct the workflow model of the patient path is effective in identifying sources of delay in patient flow, and aspects of the PED activity that could be improved. The development of this model was based on detailed visits made in the PED of the Regional University Hospital Center (CHRU) of Lille (France). This modeling, which has to represent the reality of the PED of the CHRU of Lille as faithfully as possible, is necessary. It must be detailed enough to produce an analysis allowing us to identify the dysfunctions of the PED and also to propose and estimate indicators for the prevention of crowded situations. Our survey is integrated into the French National Research Agency (ANR) project titled “Hospital: Optimization, Simulation and avoidance of strain” (HOST; Hôpital: Optimisation, Simulation et évitement des Tensions; ANR-11-TecSan-010: http://host.ec-lille.fr/wp-content/themes/twentyeleven/docsANR/R0/HOST-WP0.pdf). | Mapping patient path in the Pediatric Emergency Department: A workflow model driven approach
S1532046414002706 | Geographically distributed environmental factors influence the burden of diseases such as asthma. Our objective was to identify sparse environmental variables associated with asthma diagnosis gathered from a large electronic health record (EHR) dataset while controlling for spatial variation. An EHR dataset from the University of Wisconsin’s Family Medicine, Internal Medicine and Pediatrics Departments was obtained for 199,220 patients aged 5–50 years over a three-year period. Each patient’s home address was geocoded to one of 3456 geographic census block groups. Over one thousand block group variables were obtained from a commercial database. We developed a Sparse Spatial Environmental Analysis (SASEA). Using this method, the environmental variables were first dimensionally reduced with sparse principal component analysis. Logistic thin plate regression spline modeling was then used to identify block group variables associated with asthma from sparse principal components. The addresses of patients from the EHR dataset were distributed throughout the majority of Wisconsin’s geography. Logistic thin plate regression spline modeling captured spatial variation of asthma. Four sparse principal components identified via model selection consisted of food at home, dog ownership, household size, and disposable income variables. In rural areas, dog ownership and renter occupied housing units from significant sparse principal components were associated with asthma. Our main contribution is the incorporation of sparsity in spatial modeling. SASEA sequentially added sparse principal components to logistic thin plate regression spline modeling. This method allowed association of geographically distributed environmental factors with asthma using EHR and environmental datasets. SASEA can be applied to other diseases with environmental risk factors. | Sparse modeling of spatial environmental variables associated with asthma
S1532046414002718 | Medical devices are becoming more interconnected and complex, and are increasingly supported by fragmented organizational systems, e.g. through different processes, committees, supporting staff and training regimes. Distributed Cognition has been proposed as a framework for understanding the design and use of medical devices. However, it is not clear that it has the analytic apparatus to support the investigation of such complexities. This paper proposes a framework that introduces concentric layers to DiCoT, a method that facilitates the application of Distributed Cognition theory. We use this to explore how an inpatient blood glucose meter is coupled with its context. The analysis is based on an observational study of clinicians using a newly introduced glucometer on an oncology ward over approximately 150 h (11 days and 4 nights). Using the framework we describe the basic mechanics of the system, incremental design considerations, and larger design considerations. The DiCoT concentric layers (DiCoT-CL) framework shows promise for analyzing the design and use of medical devices, and how they are coupled with their context. | Exploring medical device design and use through layers of Distributed Cognition: How a glucometer is coupled with its context
S153204641400272X | Cognitive Informatics (CI) is a burgeoning interdisciplinary domain comprising the cognitive and information sciences that focuses on human information processing, mechanisms and processes within the context of computing and computer applications. Based on a review of articles published in the Journal of Biomedical Informatics (JBI) between January 2001 and March 2014, we identified 57 articles that focused on topics related to cognitive informatics. We found that while the acceptance of CI into the mainstream informatics research literature is relatively recent, its impact has been significant – from characterizing the limits of clinician problem-solving and reasoning behavior, to describing coordination and communication patterns of distributed clinical teams, to developing sustainable and cognitively plausible interventions for supporting clinician activities. Additionally, we found that most research contributions fell under the topics of decision-making, usability and distributed team activities, with a focus on studying behavioral and cognitive aspects of clinical personnel as they performed their activities or interacted with health information systems. We summarize our findings within the context of the current areas of CI research, future research directions and current and future challenges for CI researchers. | Cognitive informatics in biomedicine and healthcare
S1532046414002731 | Prior research on information systems (IS) shows that users’ attitudes and continuance intentions are associated with their satisfaction with information systems. As such, the increasing amount of investments in clinical information systems (CIS) signifies the importance of understanding CIS end users’ (i.e., clinicians) satisfaction. In this study, we develop a conceptual framework to identify the cognitive determinants of clinicians’ satisfaction formation. The disconfirmation paradigm serves as the core of the framework. The expectations and needs congruency models are the two models of this paradigm, and perceived performance is the basis of the comparisons in the models. The needs and expectations associated with the models are also specified. The survey methodology is adopted in this study to empirically validate the proposed research model. The survey is conducted at a public hospital and results in 112 and 203 valid responses (56% and 98% response rates) from doctors and nurses respectively. The partial least squares (PLS) method is used to analyze the data. The results of the study show that perceived CIS performance is the most influential factor on clinicians’ (i.e., doctors and nurses) satisfaction. Doctors’ expectations congruency is the next significant determinant of their satisfaction. Contrary to most previous findings, nurses’ expectations and expectations congruency do not show a significant effect on their satisfaction. However, the needs congruency is found to significantly affect nurses’ satisfaction. | Clinical information systems end user satisfaction: The expectations and needs congruencies effects |
S1532046414002743 | For cancer classification problems based on gene expression, the data usually has only a few dozen samples but thousands to tens of thousands of genes, which may include a large number of irrelevant genes. A robust feature selection algorithm is required to remove irrelevant genes and choose the informative ones. Support vector data description (SVDD) has been applied to gene selection for many years. However, SVDD cannot address problems with multiple classes since it only considers the target class. In addition, applying SVDD to gene selection is time-consuming. This paper proposes a novel fast feature selection method based on multiple SVDD and applies it to multi-class microarray data. A recursive feature elimination (RFE) scheme is introduced to iteratively remove irrelevant features, so the proposed method is called multiple SVDD-RFE (MSVDD-RFE). To make full use of all classes for a given task, MSVDD-RFE independently selects a relevant gene subset for each class. The final selected gene subset is the union of these relevant gene subsets. The effectiveness and accuracy of MSVDD-RFE are validated by experiments on five publicly available microarray datasets. Our proposed method is faster and more effective than other methods. | A fast gene selection method for multi-cancer classification using multiple support vector data description