FileName | Abstract | Title
---|---|---
S1532046414002755 | An empirical method of sample size determination for building prediction models was proposed recently. The permutation method used in this procedure is commonly applied to address the problem of overfitting during cross-validation when evaluating the performance of prediction models constructed from microarray data. However, a major drawback of such methods, which include bootstrapping and full permutation, is the prohibitively high computational cost of calculating the sample size. In this paper, we propose that a single representative null distribution can be used instead of a full permutation, and we evaluate this idea on both simulated and real data sets. In simulation, we used a dataset with zero effect size and confirmed that the empirical type I error approaches 0.05. Hence this method can be confidently applied to reduce the overfitting problem during cross-validation. We observed that a pilot data set generated by random sampling from real data could be used successfully for sample size determination. We present results from an experiment repeated 300 times that produced results comparable to those of the full permutation method. Since we eliminate the full permutation, sample size estimation time is not a function of pilot data size; in our experiments this process takes around 30 min. With the increasing number of clinical studies, developing efficient sample size determination methods for building prediction models is critical, but empirical methods using the bootstrap and permutation usually involve high computing costs. In this study, we propose a method that drastically reduces the required computing time by using a representative null distribution of permutations. We use data from pilot experiments to apply this method to designing clinical studies efficiently for high-throughput data. | Practical approach to determine sample size for building logistic prediction models using high-throughput data
S1532046414002767 | Many classification problems, especially in the field of bioinformatics, are associated with more than one class; these are known as multi-label classification problems. In this study, we propose a new adaptation of the Binary Relevance algorithm that takes into account possible relations among labels, focusing on the interpretability of the model and not only on its performance. Experiments were conducted to compare the performance of our approach against others commonly found in the literature, applied to functional genomic datasets. The experimental results show that our proposal has a performance comparable to that of other methods and that, at the same time, it provides an interpretable model of the multi-label problem. | A multi-label approach using binary relevance and decision trees applied to functional genomics
S1532046414002779 | Pathway-based analysis approaches provide additional insights into the pathogenesis of complex diseases. Copy number variations (CNVs) play an important role in gene expression regulation. Joint pathway analysis of CNVs and gene expression data should therefore provide more useful information for revealing the molecular mechanisms of complex diseases. To implement trans-omics pathway analysis of genome-wide CNVs and mRNA expression profile data, we extended the gene set enrichment analysis algorithm and developed a flexible trans-omics pathway analysis tool, CPAS. CPAS was developed in C with an interface to R for efficient data analysis. CNV-gene and pathway-gene annotation files derived from public databases are included in CPAS. We hope that CPAS will help to identify disease-relevant biological pathways that are undetectable using traditional single-omics analysis approaches. | CPAS: A trans-omics pathway analysis tool for jointly analyzing DNA copy number variations and mRNA expression profiles data
S1532046414002780 | Introduction The ambiguity of biomedical abbreviations is one of the challenges in biomedical text mining systems. In particular, the handling of term variants and abbreviations without nearby definitions is a critical issue. In this study, we adopt the concepts of topic of document and word link to disambiguate biomedical abbreviations. Methods We newly suggest the link topic model inspired by the latent Dirichlet allocation model, in which each document is perceived as a random mixture of topics, where each topic is characterized by a distribution over words. Thus, the most probable expansions with respect to abbreviations of a given abstract are determined by word-topic, document-topic, and word-link distributions estimated from a document collection through the link topic model. The model allows two distinct modes of word generation to incorporate semantic dependencies among words, particularly long form words of abbreviations and their sentential co-occurring words; a word can be generated either dependently on the long form of the abbreviation or independently. The semantic dependency between two words is defined as a link and a new random parameter for the link is assigned to each word as well as a topic parameter. Because the link status indicates whether the word constitutes a link with a given specific long form, it has the effect of determining whether a word forms a unigram or a skipping/consecutive bigram with respect to the long form. Furthermore, we place a constraint on the model so that a word has the same topic as a specific long form if it is generated in reference to the long form. Consequently, documents are generated from the two hidden parameters, i.e. topic and link, and the most probable expansion of a specific abbreviation is estimated from the parameters. Results Our model relaxes the bag-of-words assumption of the standard topic model in which the word order is neglected, and it captures a richer structure of text than does the standard topic model by considering unigrams and semantically associated bigrams simultaneously. The addition of semantic links improves the disambiguation accuracy without removing irrelevant contextual words and reduces the parameter space of massive skipping or consecutive bigrams. The link topic model achieves 98.42% disambiguation accuracy on 73,505 MEDLINE abstracts with respect to 21 three letter abbreviations and their 139 distinct long forms. | Link-topic model for biomedical abbreviation disambiguation |
S1532046414002792 | Clinical risk prediction – the estimation of the likelihood that an individual is at risk of a disease – is a coveted and exigent clinical task, and a cornerstone of the recommendation of life-saving management strategies. This is especially important for individuals at risk of cardiovascular disease (CVD), given that it is the leading cause of death in many developed countries. To this end, we introduce a novel learning algorithm – a key factor that influences the performance of machine learning-based prediction models – and utilize it to develop a CVD risk prediction tool. This novel neural-inspired algorithm, called the Artificial Neural Cell System for classification (ANCSc), is inspired by the mechanisms that develop the brain and empower it with capabilities such as information processing/storage and recall, decision making and initiating actions on the external environment. Specifically, we exploit 3 natural neural mechanisms responsible for developing and enriching the brain – namely neurogenesis, neuroplasticity via nurturing, and apoptosis – when implementing the ANCSc algorithm. Benchmark testing was conducted using the Honolulu Heart Program (HHP) dataset and the results are juxtaposed with those of 2 other algorithms – i.e. Support Vector Machine (SVM) and Evolutionary Data-Conscious Artificial Immune Recognition System (EDC-AIRS). Empirical experiments indicate that the ANCSc algorithm (statistically) outperforms both the SVM and EDC-AIRS algorithms. Key clinical markers identified by the ANCSc algorithm include risk factors related to diet/lifestyle, pulmonary function, personal/family/medical history, blood data, blood pressure, and electrocardiography. These clinical markers, in general, are also found to be clinically significant – providing a promising avenue for identifying potential cardiovascular risk factors to be evaluated in clinical trials. | A novel neural-inspired learning algorithm with application to clinical risk prediction
S1532046414002809 | Background Clinical models in electronic health records are typically expressed as templates which support the multiple clinical workflows in which the system is used. The templates are often designed using local rather than standard information models and terminology, which hinders semantic interoperability. Semantic challenges can be solved by harmonizing and standardizing clinical models. However, methods supporting harmonization based on existing clinical models are lacking. One approach is to explore semantic similarity estimation as a basis of an analytical framework. Therefore, the aim of this study is to develop and apply methods for intrinsic similarity-estimation based analysis that can compare and give an overview of multiple clinical models. Method For a similarity estimate to be intrinsic it should be based on an established ontology, for which SNOMED CT was chosen. In this study, Lin similarity estimates and Sokal and Sneath similarity estimates were used together with two aggregation techniques (average and best-match-average respectively) resulting in a total of four methods. The similarity estimations are used to hierarchically cluster templates. The test material consists of templates from Danish and Swedish EHR systems. The test material was used to evaluate how the four different methods perform. Result and discussion The best-match-average aggregation technique performed better in terms of clustering similar templates than the average aggregation technique. No difference could be seen in terms of the choice of similarity estimate in this study, but the finding may be different for other datasets. The dendrograms resulting from the hierarchical clustering gave an overview of the templates and a basis of further analysis. Conclusion Hierarchical clustering of templates based on SNOMED CT and semantic similarity estimation with best-match-average aggregation technique can be used for comparison and summarization of multiple templates. Consequently, it can provide a valuable tool for harmonization and standardization of clinical models. | Clustering clinical models from local electronic health records based on semantic similarity |
S1532046415000027 | When validating risk models (or probabilistic classifiers), calibration is often overlooked. Calibration refers to the reliability of the predicted risks, i.e. whether the predicted risks correspond to observed probabilities. In medical applications this is important because treatment decisions often rely on the estimated risk of disease. The aim of this paper is to present generic tools to assess the calibration of multiclass risk models. We describe a calibration framework based on a vector spline multinomial logistic regression model. This framework can be used to generate calibration plots and calculate the estimated calibration index (ECI) to quantify lack of calibration. We illustrate these tools in relation to risk models used to characterize ovarian tumors. The outcome of the study is the surgical stage of the tumor when relevant and the final histological outcome, which is divided into five classes: benign, borderline malignant, stage I, stage II–IV, and secondary metastatic cancer. The 5909 patients included in the study are randomly split into equally large training and test sets. We developed and tested models using the following algorithms: logistic regression, support vector machines, k nearest neighbors, random forest, naive Bayes and nearest shrunken centroids. Multiclass calibration plots are interesting as an approach to visualizing the reliability of predicted risks. The ECI is a convenient tool for comparing models, but is less informative and interpretable than calibration plots. In our case study, logistic regression and random forest showed the highest degree of calibration, and the naive Bayes the lowest. | A spline-based tool to assess and visualize the calibration of multiclass risk predictions |
S1532046415000039 | Chromosomal microarrays (CMAs) are routinely used in both research and clinical laboratories; yet, little attention has been given to the estimation of genome-wide true and false negatives during the assessment of these assays and how such information could be used to calibrate various algorithmic metrics to improve performance. Low-throughput, locus-specific methods such as fluorescence in situ hybridization (FISH), quantitative PCR (qPCR), or multiplex ligation-dependent probe amplification (MLPA) preclude rigorous calibration of various metrics used by copy number variant (CNV) detection algorithms. To aid this task, we have established a comparative methodology, CNV-ROC, which is capable of performing a high throughput, low cost, analysis of CMAs that takes into consideration genome-wide true and false negatives. CNV-ROC uses a higher resolution microarray to confirm calls from a lower resolution microarray and provides for a true measure of genome-wide performance metrics at the resolution offered by microarray testing. CNV-ROC also provides for a very precise comparison of CNV calls between two microarray platforms without the need to establish an arbitrary degree of overlap. Comparison of CNVs across microarrays is done on a per-probe basis and receiver operator characteristic (ROC) analysis is used to calibrate algorithmic metrics, such as log2 ratio threshold, to enhance CNV calling performance. CNV-ROC addresses a critical and consistently overlooked aspect of analytical assessments of genome-wide techniques like CMAs which is the measurement and use of genome-wide true and false negative data for the calculation of performance metrics and comparison of CNV profiles between different microarray experiments. | CNV-ROC: A cost effective, computer-aided analytical performance evaluator of chromosomal microarrays |
S1532046415000040 | The characterization of complex diseases remains a great challenge for biomedical researchers due to the myriad interactions of genetic and environmental factors. Network medicine approaches strive to accommodate these factors holistically. Phylogenomic techniques that can leverage available genomic data may provide an evolutionary perspective that could elucidate the gene networks of complex diseases and provide another source of information for network medicine approaches. Here, an automated method is presented that leverages publicly available genomic data and phylogenomic techniques, resulting in a gene network. The potential of the approach is demonstrated through a case study of nine genes associated with Alzheimer Disease, a complex neurodegenerative syndrome. The developed technique, implemented as a suite of Ruby scripts entitled “ASAP2” that updates a previously described Perl script called “ASAP,” first compiles a list of sequence-similarity-based orthologues using PSI-BLAST and a recursive NCBI BLAST+ search strategy, then constructs maximum parsimony phylogenetic trees for each set of nucleotide and protein sequences, and finally calculates phylogenetic metrics (Incongruence Length Difference between orthologue sets, partitioned Bremer support values, combined branch scores, and Robinson–Foulds distance) to provide an empirical assessment of evolutionary conservation within a given genetic network. In addition to the individual phylogenetic metrics, ASAP2 provides results in a way that can be used to generate a gene network that represents evolutionary similarity based on topological similarity (the Robinson–Foulds distance). The results of this study demonstrate the potential for using phylogenomic approaches that enable the study of multiple genes simultaneously to provide insights about potential gene relationships – insights that can be studied within a network medicine framework and that may not have been apparent using traditional, single-gene methods. Furthermore, the results provide an initial integrated evolutionary history of an Alzheimer Disease gene network and identify potentially important co-evolutionary clustering that may warrant further investigation. | Adapting simultaneous analysis phylogenomic techniques to study complex disease gene relationships
S1532046415000052 | Since the genome project in the 1990s, a number of gene-related studies have been conducted, and researchers have confirmed that genes are involved in disease. For this reason, the identification of relationships between diseases and genes is important in biology. We propose a method called LGscore, which identifies disease-related genes using Google data and literature data. To implement this method, we first construct a disease-related gene network using text-mining results. We then extract gene–gene interactions based on co-occurrences in abstract data obtained from PubMed, and calculate the weights of edges in the gene network by means of Z-scoring. The weights combine two values: the frequency and the Google search results. The frequency value is extracted from the literature data, and the Google search result is obtained using Google. We assign a score to each gene through a network analysis, assuming that genes with a large number of links and with numerous Google search results and high frequency values are more likely to be involved in disease. For validation, we investigated the top 20 inferred genes for five different diseases using answer sets. The answer sets comprised six databases that contain information on disease–gene relationships. We identified a significant number of disease-related genes as well as candidate genes for Alzheimer’s disease, diabetes, colon cancer, lung cancer, and prostate cancer. Our method was up to 40% more accurate than existing methods. | LGscore: A method to identify disease-related genes using biological literature and Google data
S1532046415000064 | The diagnosis of human breast cancer is an intricate process, and specific indicators may produce negative results. In order to avoid misleading results, an accurate and reliable diagnostic system for breast cancer is indispensable. Recently, several interesting machine-learning (ML) approaches have been proposed for the prediction of breast cancer. To this end, we developed a novel classifier-stacking-based evolutionary ensemble system, “Can–Evo–Ens”, for predicting amino acid sequences associated with breast cancer. In this paper, first, we selected four diverse types of ML algorithms – Naïve Bayes, K-Nearest Neighbor, Support Vector Machines, and Random Forest – as base-level classifiers. These classifiers are trained individually in different feature spaces using physicochemical properties of amino acids. In order to exploit the decision spaces, the preliminary predictions of the base-level classifiers are stacked. Genetic programming (GP) is then employed to develop a meta-classifier that optimally combines the predictions of the base classifiers. The most suitable threshold value of the best-evolved predictor is computed using the Particle Swarm Optimization technique. Our experiments have demonstrated the robustness of the Can–Evo–Ens system on an independent validation dataset. The proposed system achieved the highest area under the ROC curve (AUC) of 99.95% for cancer prediction. The comparative results revealed that the proposed approach is better than individual ML approaches and the conventional ensemble approaches AdaBoostM1, Bagging, GentleBoost, and Random Subspace. It is expected that the proposed novel system will have a major impact on the fields of Biomedicine, Genomics, Proteomics, Bioinformatics, and Drug Development. | Can–Evo–Ens: Classifier stacking based evolutionary ensemble system for prediction of human breast cancer using amino acid sequences
S1532046415000076 | Objective To develop a method for profiling the collective populations targeted for recruitment by multiple clinical studies addressing the same medical condition using one eligibility feature each time. Methods Using a previously published database COMPACT as the backend, we designed a scalable method for visual aggregate analysis of clinical trial eligibility features. This method consists of four modules for eligibility feature frequency analysis, query builder, distribution analysis, and visualization, respectively. This method is capable of analyzing (1) frequently used qualitative and quantitative features for recruiting subjects for a selected medical condition, (2) distribution of study enrollment on consecutive value points or value intervals of each quantitative feature, and (3) distribution of studies on the boundary values, permissible value ranges, and value range widths of each feature. All analysis results were visualized using Google Charts API. Five recruited potential users assessed the usefulness of this method for identifying common patterns in any selected eligibility feature for clinical trial participant selection. Results We implemented this method as a Web-based analytical system called VITTA (Visual Analysis Tool of Clinical Study Target Populations). We illustrated the functionality of VITTA using two sample queries involving quantitative features BMI and HbA1c for conditions “hypertension” and “Type 2 diabetes”, respectively. The recruited potential users rated the user-perceived usefulness of VITTA with an average score of 86.4/100. Conclusions We contributed a novel aggregate analysis method to enable the interrogation of common patterns in quantitative eligibility criteria and the collective target populations of multiple related clinical studies. A larger-scale study is warranted to formally assess the usefulness of VITTA among clinical investigators and sponsors in various therapeutic areas. | Visual aggregate analysis of eligibility features of clinical trials |
S1532046415000088 | Knowledge of protein–protein interactions (PPIs) and their related pathways is equally important for understanding the biological functions of the living cell. Such information on human proteins is highly desirable for understanding the mechanisms of several diseases such as cancer, diabetes, and Alzheimer’s disease. Because much of that information is buried in the biomedical literature, an automated text mining system for visualizing human PPIs and pathways is highly desirable. In this paper, we present HPIminer, a text mining system for visualizing human protein interactions and pathways from biomedical literature. HPIminer extracts human PPI information and PPI pairs from the biomedical literature, and visualizes their associated interactions, networks and pathways using two curated databases, HPRD and KEGG. To our knowledge, HPIminer is the first system to build interaction networks from the literature as well as from curated databases. Further, new interactions mined only from the literature and not reported earlier in databases are highlighted as new. A comparative study with other similar tools shows that the resultant network is more informative and provides additional information on interacting proteins and their associated networks. | HPIminer: A text mining system for building and visualizing human protein interaction networks and pathways
S153204641500009X | Introduction While mammography notably contributes to earlier detection of breast cancer, it has its limitations, including a large number of false positive exams. Improved radiology education could potentially contribute to alleviating this issue. Toward this goal, in this paper we propose an algorithm for modeling of false positive error making among radiology trainees. Identifying troublesome locations for the trainees could focus their training and in turn improve their performance. Methods The algorithm proposed in this paper predicts locations that are likely to result in a false positive error for each trainee based on the previous annotations made by the trainee. The algorithm consists of three steps. First, the suspicious false positive locations are identified in mammograms by Difference of Gaussian filter and suspicious regions are segmented by computer vision-based segmentation algorithms. Second, 133 features are extracted for each suspicious region to describe its distinctive characteristics. Third, a random forest classifier is applied to predict the likelihood of the trainee making a false positive error using the extracted features. The random forest classifier is trained using previous annotations made by the trainee. We evaluated the algorithm using data from a reader study in which 3 experts and 10 trainees interpreted 100 mammographic cases. Results The algorithm was able to identify locations where the trainee will commit a false positive error with accuracy higher than an algorithm that selects such locations randomly. Specifically, our algorithm found false positive locations with 40% accuracy when only 1 location was selected for all cases for each trainee and 12% accuracy when 10 locations were selected. The accuracies for randomly identified locations were both 0% for these two scenarios. Conclusions In this first study on the topic, we were able to build computer models that were able to find locations for which a trainee will make a false positive error in images that were not previously seen by the trainee. Presenting the trainees with such locations rather than randomly selected ones may improve their educational outcomes. | Modeling false positive error making patterns in radiology trainees for improved mammography education |
S1532046415000106 | The recent exponential growth of genomic databases has resulted in the common task of sequence alignment becoming one of the major bottlenecks in the field of computational biology. It is typical for these large datasets and complex computations to require cost prohibitive High Performance Computing (HPC) to function. As such, parallelised solutions have been proposed but many exhibit scalability limitations and are incapable of effectively processing “Big Data” – the name attributed to datasets that are extremely large, complex and require rapid processing. The Hadoop framework, comprised of distributed storage and a parallelised programming framework known as MapReduce, is specifically designed to work with such datasets but it is not trivial to efficiently redesign and implement bioinformatics algorithms according to this paradigm. The parallelisation strategy of “divide and conquer” for alignment algorithms can be applied to both data sets and input query sequences. However, scalability is still an issue due to memory constraints or large databases, with very large database segmentation leading to additional performance decline. Herein, we present Hadoop Blast (HBlast), a parallelised BLAST algorithm that proposes a flexible method to partition both databases and input query sequences using “virtual partitioning”. HBlast presents improved scalability over existing solutions and well balanced computational work load while keeping database segmentation and recompilation to a minimum. Enhanced BLAST search performance on cheap memory constrained hardware has significant implications for in field clinical diagnostic testing; enabling faster and more accurate identification of pathogenic DNA in human blood or tissue samples. | HBLAST: Parallelised sequence similarity – A Hadoop MapReducable basic local alignment search tool |
S153204641500012X | Objective Structured data on mammographic findings are difficult to obtain without manual review. We developed and evaluated a rule-based natural language processing (NLP) system to extract mammographic findings from free-text mammography reports. Materials and Methods The NLP system extracted four mammographic findings: mass, calcification, asymmetry, and architectural distortion, using a dictionary look-up method on 93,705 mammography reports from Group Health. Status annotations and anatomical location annotation were associated to each NLP detected finding through association rules. After excluding negated, uncertain, and historical findings, affirmative mentions of detected findings were summarized. Confidence flags were developed to denote reports with highly confident NLP results and reports with possible NLP errors. A random sample of 100 reports was manually abstracted to evaluate the accuracy of the system. Results The NLP system correctly coded 96–99 out of our sample of 100 reports depending on findings. Measures of sensitivity, specificity and negative predictive values exceeded 0.92 for all findings. Positive predictive values were relatively low for some findings due to their low prevalence. Discussion Our NLP system was implemented entirely in SAS Base, which makes it portable and easy to implement. It performed reasonably well with multiple applications, such as using confidence flags as a filter to improve the efficiency of manual review. Refinements of library and association rules, and testing on more diverse samples may further improve its performance. Conclusion Our NLP system successfully extracts clinically useful information from mammography reports. Moreover, SAS is a feasible platform for implementing NLP algorithms. | Using natural language processing to extract mammographic findings |
S1532046415000131 | Objectives Adverse drug reactions (ADRs) are believed to be a leading cause of death in the world. Pharmacovigilance systems are aimed at the early detection of ADRs. With the popularity of social media, Web forums and discussion boards have become important venues for consumers to share their drug use experiences and, as a result, may provide useful information on drugs and their adverse reactions. In this study, we propose an automated mechanism for filtering ADR-related posts using text classification methods. In real-life settings, ADR-related messages are highly distributed in social media, while non-ADR-related messages are unspecific and topically diverse. It is expensive to manually label a large number of ADR-related messages (positive examples) and non-ADR-related messages (negative examples) to train classification systems. To mitigate this challenge, we examine the use of a partially supervised learning classification method to automate the process. Methods We propose a novel pharmacovigilance system leveraging a Latent Dirichlet Allocation modeling module and a partially supervised classification approach. We select drugs with more than 500 threads of discussion and collect all the original posts and comments on these drugs, using an automatic Web spidering program, as the text corpus. Various classifiers were trained by varying the number of positive examples and the number of topics. The trained classifiers were applied to 3000 posts published over 60 days. Top-ranked posts from each classifier were pooled and the resulting set of 300 posts was reviewed by a domain expert to evaluate the classifiers. Results Compared to alternative approaches using supervised learning methods and three general-purpose partially supervised learning methods, our approach performs significantly better in terms of precision, recall, and the F measure (the harmonic mean of precision and recall), based on a computational experiment using online discussion threads from Medhelp. Conclusions Our design provides satisfactory performance in identifying ADR-related posts for post-marketing drug surveillance. The overall design of our system also points out a potentially fruitful direction for building other early warning systems that need to filter big data from social media networks. | Filtering big data from social media – Building an early warning system for adverse drug reactions
S1532046415000143 | Electronic medical records (EMRs) offer promise for novel analytics. However, manual feature engineering from EMRs is labor intensive because EMR data are complex – they contain temporal, mixed-type and multimodal data packed in irregular episodes. We present a computational framework to harness EMRs with minimal human supervision via a restricted Boltzmann machine (RBM). The framework derives a new representation of medical objects by embedding them in a low-dimensional vector space. This new representation facilitates algebraic and statistical manipulations such as projection onto a 2D plane (thereby offering intuitive visualization), object grouping (hence enabling automated phenotyping), and risk stratification. To enhance model interpretability, we introduced two constraints on the model parameters: (a) nonnegative coefficients, and (b) structural smoothness. These result in a novel model called eNRBM (EMR-driven nonnegative RBM). We demonstrate the capability of the eNRBM on a cohort of 7578 mental health patients under suicide risk assessment. The derived representation not only shows clinically meaningful feature grouping but also facilitates short-term risk stratification. The F-scores, 0.21 for moderate risk and 0.36 for high risk, are significantly higher than those obtained by clinicians and competitive with the results obtained by support vector machines. | Learning vector representation of medical objects via EMR-driven nonnegative restricted Boltzmann machines (eNRBM)
S1532046415000155 | Real integration of Virtual Microscopy into the pathology service workflow requires the design of adaptable strategies that allow any hospital service to interact with a set of Whole Slide Images (WSIs). Nowadays, mobile devices have real potential to support an online pervasive network of specialists working together; however, such devices are still very limited. This article introduces a novel, highly adaptable strategy for streaming and visualizing WSIs on mobile devices. The presented approach effectively exploits and extends the granularity of the JPEG2000 standard and integrates it with different strategies to achieve a lossless, loosely coupled, decoder- and platform-independent implementation, adaptable to any interaction model. The performance was evaluated by two expert pathologists interacting with a set of 20 virtual slides. The method efficiently uses the available device resources: memory usage did not exceed 7% of the device capacity, while decoding times were smaller than 200 ms per Region of Interest, i.e., a window of 256 × 256 pixels. This model is easily adaptable to other medical imaging scenarios. | An adaptable navigation strategy for Virtual Microscopy from mobile platforms
S1532046415000167 | Background Literature-based discovery (LBD) is characterized by uncovering hidden associations in non-interacting scientific literature. Prior approaches to LBD include use of: (1) domain expertise and structured background knowledge to manually filter and explore the literature, (2) distributional statistics and graph-theoretic measures to rank interesting connections, and (3) heuristics to help eliminate spurious connections. However, manual approaches to LBD are not scalable and purely distributional approaches may not be sufficient to obtain insights into the meaning of poorly understood associations. While several graph-based approaches have the potential to elucidate associations, their effectiveness has not been fully demonstrated. A considerable degree of a priori knowledge, heuristics, and manual filtering is still required. Objectives In this paper we implement and evaluate a context-driven, automatic subgraph creation method that captures multifaceted complex associations between biomedical concepts to facilitate LBD. Given a pair of concepts, our method automatically generates a ranked list of subgraphs, which provide informative and potentially unknown associations between such concepts. Methods To generate subgraphs, the set of all MEDLINE articles that contain either of the two specified concepts (A, C) are first collected. Then binary relationships or assertions, which are automatically extracted from the MEDLINE articles, called semantic predications, are used to create a labeled directed predications graph. In this predications graph, a path is represented as a sequence of semantic predications. The hierarchical agglomerative clustering (HAC) algorithm is then applied to cluster paths that are bounded by the two concepts (A, C). HAC relies on implicit semantics captured through Medical Subject Heading (MeSH) descriptors, and explicit semantics from the MeSH hierarchy, for clustering. Paths that exceed a threshold of semantic relatedness are clustered into subgraphs based on their shared context. Finally, the automatically generated clusters are provided as a ranked list of subgraphs. Results The subgraphs generated using this approach facilitated the rediscovery of 8 out of 9 existing scientific discoveries. In particular, they directly (or indirectly) led to the recovery of several intermediates (or B-concepts) between A- and C-terms, while also providing insights into the meaning of the associations. Such meaning is derived from predicates between the concepts, as well as the provenance of the semantic predications in MEDLINE. Additionally, by generating subgraphs on different thematic dimensions (such as Cellular Activity, Pharmaceutical Treatment and Tissue Function), the approach may enable a broader understanding of the nature of complex associations between concepts. Finally, in a statistical evaluation to determine the interestingness of the subgraphs, it was observed that an arbitrary association is mentioned in only approximately 4 articles in MEDLINE on average. Conclusion These results suggest that leveraging the implicit and explicit semantics provided by manually assigned MeSH descriptors is an effective representation for capturing the underlying context of complex associations, along multiple thematic dimensions in LBD situations. | Context-driven automatic subgraph creation for literature-based discovery |
S1532046415000180 | Background Full syntactic parsing of clinical text as a part of clinical natural language processing (NLP) is critical for a wide range of applications. Several robust syntactic parsers are publicly available to produce linguistic representations for sentences. However, these existing parsers are mostly trained on general English text and may require adaptation for optimal performance on clinical text. Our objective was to adapt an existing general English parser for the clinical text of operative reports via lexicon augmentation, statistics adjusting, and grammar rules modification based on operative reports. Method The Stanford unlexicalized probabilistic context-free grammar (PCFG) parser lexicon was expanded with SPECIALIST lexicon along with statistics collected from a limited set of operative notes tagged by two POS taggers (GENIA tagger and MedPost). The most frequently occurring verb entries of the SPECIALIST lexicon were adjusted based on manual review of verb usage in operative notes. Stanford parser grammar production rules were also modified based on linguistic features of operative reports. An analogous approach was then applied to the GENIA corpus to test the generalizability of this approach to biologic text. Results The new unlexicalized PCFG parser extended with the extra lexicon from SPECIALIST along with accurate statistics collected from an operative note corpus tagged with GENIA POS tagger improved the F-score by 2.26% from 87.64% to 89.90%. There was a progressive improvement with the addition of multiple approaches. Lexicon augmentation combined with statistics from the operative notes corpus provided the greatest improvement of parser performance. Application of this approach on the GENIA corpus increased the F-score by 3.81% with a simple new grammar and addition of the GENIA corpus lexicon. Conclusion Using statistics collected from clinical text tagged with POS taggers along with proper modification of grammars and lexicons of an unlexicalized PCFG parser may improve parsing performance of existing parsers on specialized clinical text. | Domain adaption of parsing for operative notes |
S1532046415000192 | Recently, the use of artificial intelligence-based data mining techniques for massive medical data classification and diagnosis has gained popularity, but the effectiveness and efficiency of feature selection are worth further investigation. In this paper, we present a novel method for feature selection that uses an opposite sign test (OST) as a local search for the electromagnetism-like mechanism (EM) algorithm, denoted the improved electromagnetism-like mechanism (IEM) algorithm. A nearest neighbor algorithm serves as the classifier for the wrapper method. The proposed IEM algorithm is compared with nine popular feature selection and classification methods. Forty-six datasets from the UCI repository and eight gene expression microarray datasets are collected for comprehensive evaluation. Non-parametric statistical tests are conducted to assess the performance of the methods in terms of classification accuracy and the Kappa index. The results confirm that the proposed IEM method is superior to common state-of-the-art methods. Furthermore, we apply IEM to predict the occurrence of Type 2 diabetes mellitus (DM) after gestational DM. Our research helps identify the risk factors for this disease, so that accurate diagnosis and prognosis can be achieved to reduce the morbidity and mortality caused by DM. | An improved electromagnetism-like mechanism algorithm and its application to the prediction of diabetes mellitus
S1532046415000209 | Use of clinical simulation in the design and evaluation of eHealth systems and applications has increased during the last decade. This paper describes a methodological approach for using clinical simulations in the design and evaluation of clinical information systems. The method is based on experiences from more than 20 clinical simulation studies conducted at the ITX-lab in the Capital Region of Denmark during the last 5 years. A ten-step approach to conducting simulations is presented in this paper. To illustrate the approach, a clinical simulation study concerning implementation of Digital Clinical Practice Guidelines in a prototype planning and coordination module is presented. In the case study, potential benefits were assessed in a full-scale simulation test including 18 health care professionals. The results showed that health care professionals can benefit from such a module. Unintended consequences concerning terminology and changes in the division of responsibility amongst healthcare professionals were also identified, and questions were raised concerning future workflow across sector borders. Furthermore, unexpected new possible benefits concerning improved communication, the content of information in discharge letters, and quality management emerged during the testing. In addition, new potential groups of users were identified. The case study is used to demonstrate the potential of the clinical simulation approach described in the paper. | Clinical simulation: A method for development and evaluation of clinical information systems
S1532046415000210 | Background and significance Sparsity is often a desirable property of statistical models, and various feature selection methods exist to yield sparser and more interpretable models. However, their application to biomedical text classification, particularly to mortality risk stratification among intensive care unit (ICU) patients, has not been thoroughly studied. Objective To develop and characterize sparse classifiers based on the free text of nursing notes in order to predict ICU mortality risk and to discover the text features most strongly associated with mortality. Methods We selected nursing notes from the first 24 h of ICU admission for 25,826 adult ICU patients from the MIMIC-II database. We then developed a pair of stochastic gradient descent-based classifiers with elastic-net regularization. We also studied the performance-sparsity tradeoffs of both classifiers as their regularization parameters were varied. Results The best-performing classifier achieved a 10-fold cross-validated AUC of 0.897 under the log loss function and full L2 regularization, while full L1 regularization used just 0.00025% of the candidate input features and resulted in an AUC of 0.889. Using the log loss (range of AUCs 0.889–0.897) yielded better performance than the hinge loss (0.850–0.876), but the latter yielded even sparser models. Discussion Most features selected by both classifiers appear clinically relevant and correspond to predictors already present in existing ICU mortality models. The sparser classifiers were also able to discover a number of informative – albeit nonclinical – features. Conclusion The elastic-net-regularized classifiers perform reasonably well and are capable of reducing the number of features required by over a thousandfold, with only a modest impact on performance. | Efficient and sparse feature selection for biomedical text classification via the elastic net: Application to ICU risk stratification from nursing notes
S1532046415000362 | Objective Automatic monitoring of Adverse Drug Reactions (ADRs), defined as adverse patient outcomes caused by medications, is a challenging research problem that is currently receiving significant attention from the medical informatics community. In recent years, user-posted data on social media, primarily due to its sheer volume, has become a useful resource for ADR monitoring. Research using social media data has progressed using various data sources and techniques, making it difficult to compare distinct systems and their performances. In this paper, we perform a methodical review to characterize the different approaches to ADR detection/extraction from social media, and their applicability to pharmacovigilance. In addition, we present a potential systematic pathway to ADR monitoring from social media. Methods We identified studies describing approaches for ADR detection from social media from the Medline, Embase, Scopus and Web of Science databases, and the Google Scholar search engine. Studies that met our inclusion criteria were those that attempted to extract ADR information posted by users on any publicly available social media platform. We categorized the studies according to different characteristics such as primary ADR detection approach, size of corpus, data source(s), availability, and evaluation criteria. Results Twenty-two studies met our inclusion criteria, with fifteen (68%) published within the last two years. However, publicly available annotated data is still scarce, and we found only six studies that made their annotations publicly available, making system performance comparisons difficult. In terms of algorithms, supervised classification techniques to detect posts containing ADR mentions, and lexicon-based approaches for extracting ADR mentions from texts, have been the most popular. Conclusion Our review suggests that interest in the utilization of the vast amounts of available social media data for ADR monitoring is increasing. In terms of sources, both health-related and general social media data have been used for ADR detection; while health-related sources tend to contain higher proportions of relevant data, the volume of data from general social media websites is significantly higher. There is still a very limited amount of annotated data publicly available and, as indicated by the promising results obtained by recent supervised learning approaches, there is a strong need to make such data available to the research community. | Utilizing social media data for pharmacovigilance: A review
S1532046415000374 | Introduction Surgical workflow management is expected to enable situation-aware adaptation and intelligent systems behavior in an integrated operating room (OR). The overall aim is to unburden the surgeon and OR staff from both manual maintenance and information seeking tasks. A major step toward intelligent systems behavior is a stable classification of the surgical situation from multiple perspectives based on performed low-level tasks. Material and methods The present work proposes a method for the classification of surgical situations based on multi-perspective workflow modeling. A model network that interconnects different types of surgical process models is described. Various aspects of a surgical situation description were considered: low-level tasks, high-level tasks, patient status, and the use of medical devices. A study with sixty neurosurgical interventions was conducted to evaluate the performance of our approach and its robustness against incomplete workflow recognition input. Results A correct classification rate of over 90% was measured for high-level tasks and patient status. The device usage models for navigation and neurophysiology classified over 95% of the situations correctly, whereas the ultrasound usage was more difficult to predict. Overall, the classification rate decreased with an increasing level of input distortion. Discussion Autonomous adaptation of medical devices and intelligent systems behavior do not currently depend solely on low-level tasks. Instead, they require a more general type of understanding of the surgical condition. The integration of various surgical process models in a network provided a comprehensive representation of the interventions and allowed for the generation of extensive situation descriptions. Conclusion Multi-perspective surgical workflow modeling and online situation models will be a significant pre-requisite for reliable and intelligent systems behavior. Hence, they will contribute to a cooperative OR environment. | Multi-perspective workflow modeling for online surgical situation models |
S1532046415000398 | One of the challenging problems in drug discovery is identifying novel targets for drugs. Most traditional methods for drug target optimization have focused on identifying particular families of “druggable targets” but ignored their topological properties in biological pathways. In this study, we characterized the topological properties of human anticancer drug targets (ADTs) in the context of biological pathways. We found that ADTs tend to exhibit the following seven topological properties: they influence the number of pathways related to cancer, are localized at the start or end of pathways, interact with cancer-related genes, and show higher connectivity, vulnerability, betweenness, and closeness than other genes. We first ranked ADTs on each topological property value separately, then fused these rankings into one global rank using the joint cumulative distribution of an N-dimensional order statistic to optimize human ADTs. We applied the optimization method to 13 anticancer drugs. The results demonstrated that over 70% of known ADTs were ranked in the top 20%. Furthermore, the performance for mercaptopurine was significant: 6 known targets (ADSL, GMPR2, GMPR, HPRT1, AMPD3, AMPD2) were ranked in the top 15, and four other genes in the top 15 (MAT2A, CDKN1A, AREG, JUN) have the potential to become new targets for cancer therapy. | Characterizing and optimizing human anticancer drug targets based on topological properties in the context of biological pathways
S1532046415000404 | This study examines the ability of nonclinical adverse event observations to predict human clinical adverse events observed in drug development programs. In addition it examines the relationship between nonclinical and clinical adverse event observations to drug withdrawal and proposes a model to predict drug withdrawal based on these observations. These analyses provide risk assessments useful for both planning patient safety programs, as well as a statistical framework for assessing the future success of drug programs based on nonclinical and clinical observations. Bayesian analyses were undertaken to investigate the connection between nonclinical adverse event observations and observations of that same event in clinical trial for a large set of approved drugs. We employed the same statistical methods used to evaluate the efficacy of diagnostic tests to evaluate the ability of nonclinical studies to predict adverse events in clinical studies, and adverse events in both to predict drug withdrawal. We find that some nonclinical observations suggest higher risk for observing the same adverse event in clinical studies, particularly arrhythmias, QT prolongation, and abnormal hepatic function. However the lack of these events in nonclinical studies is found to not be a good predictor of safety in humans. Some nonclinical and clinical observations appear to be associated with high risk of drug withdrawal from market, especially arrhythmia and hepatic necrosis. We use the method to estimate the overall risk of drug withdrawal from market using the product of the risks from each nonclinical and clinical observation to create a risk profile. | Prediction of clinical risks by analysis of preclinical and clinical adverse events |
S1532046415000416 | Background Bodyweight-related measures (weight, height, BMI, abdominal circumference) are extremely important for clinical care, research and quality improvement. These and other vital signs data are frequently missing from the structured tables of electronic health records; however, they are often recorded as text within clinical notes. In this project we sought to develop and validate a learning algorithm that would extract bodyweight-related measures from clinical notes in the Veterans Administration (VA) Electronic Health Record to complement the structured data used in clinical research. Methods We developed the Regular Expression Discovery Extractor (REDEx), a supervised learning algorithm that generates regular expressions from a training set. The regular expressions generated by REDEx were then used to extract the numerical values of interest. To train the algorithm we created a corpus of 268 outpatient primary care notes that were annotated by two annotators. This annotation served to develop the annotation process and identify terms associated with bodyweight-related measures for training the supervised learning algorithm. Snippets from an additional 300 outpatient primary care notes were subsequently annotated independently by two reviewers to complete the training set, and inter-annotator agreement was calculated. REDEx was applied to a separate test set of 3561 notes to generate a dataset of weights extracted from text. We estimated the number of unique individuals who would otherwise not have bodyweight-related measures recorded in the CDW and the number of additional bodyweight-related measures that would be captured. Results REDEx’s performance was: accuracy = 98.3%, precision = 98.8%, recall = 98.3%, F-score = 98.5%. In the dataset of weights from the 3561 notes, 7.7% of notes contained bodyweight-related measures that were not available as structured data. In addition, 2 additional bodyweight-related measures were identified per individual per year. Conclusion Bodyweight-related measures are frequently stored as text in clinical notes. A supervised learning algorithm can be used to extract this data. Implications for clinical care, epidemiology, and quality improvement efforts are discussed. | Regular expression-based learning to extract bodyweight values from clinical notes
S1532046415000428 | This report describes a grant-funded project to explore the use of DIRECT secure messaging for the electronic delivery of laboratory test results to outpatient physicians and electronic health record systems. The project seeks to leverage the inherent attributes of DIRECT secure messaging and electronic provider directories to overcome certain barriers to the delivery of lab test results in the outpatient setting. The described system enables laboratories that generate test results as HL7 messages to deliver these results as structured or unstructured documents attached to DIRECT secure messages. The system automatically analyzes generated HL7 messages and consults an electronic provider directory to determine the appropriate DIRECT address and delivery format for each indicated recipient. The system also enables lab results delivered to providers as structured attachments to be consumed by HL7 interface engines and incorporated into electronic health record systems. Lab results delivered as unstructured attachments may be printed or incorporated into patient records as PDF files. The system receives and logs acknowledgement messages to document the status of each transmitted lab result, and a graphical interface allows searching and review of this logged information. The described system is a fully implemented prototype that has been tested in a laboratory setting. Although this approach is promising, further work is required to pilot test the system in production settings with clinical laboratories and outpatient provider organizations. | DIRECT secure messaging as a common transport layer for reporting structured and unstructured lab results to outpatient providers |
S153204641500043X | In Electronic Health Records (EHRs), much of valuable information regarding patients’ conditions is embedded in free text format. Natural language processing (NLP) techniques have been developed to extract clinical information from free text. One challenge faced in clinical NLP is that the meaning of clinical entities is heavily affected by modifiers such as negation. A negation detection algorithm, NegEx, applies a simplistic approach that has been shown to be powerful in clinical NLP. However, due to the failure to consider the contextual relationship between words within a sentence, NegEx fails to correctly capture the negation status of concepts in complex sentences. Incorrect negation assignment could cause inaccurate diagnosis of patients’ condition or contaminated study cohorts. We developed a negation algorithm called DEEPEN to decrease NegEx’s false positives by taking into account the dependency relationship between negation words and concepts within a sentence using Stanford dependency parser. The system was developed and tested using EHR data from Indiana University (IU) and it was further evaluated on Mayo Clinic dataset to assess its generalizability. The evaluation results demonstrate DEEPEN, which incorporates dependency parsing into NegEx, can reduce the number of incorrect negation assignment for patients with positive findings, and therefore improve the identification of patients with the target clinical findings in EHRs. | DEEPEN: A negation detection system for clinical text incorporating dependency relation into NegEx |
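DEEPEN builds on the Stanford dependency parser; as a rough illustration of the underlying idea (requiring that a negation token actually governs the concept in the dependency tree, rather than merely preceding it as in NegEx), the sketch below uses spaCy, which exposes a comparable parse. This is a simplified assumption-laden stand-in, not the authors' implementation, and the function name and example sentences are invented for illustration.

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed

def concept_is_negated(sentence: str, concept: str) -> bool:
    """Return True if a negation token governs the head of the concept span.

    Approximates the DEEPEN idea: instead of "a negation cue appears in a window
    before the concept", we require a dependency-level link between the negation
    and the concept. The "neg" label is specific to spaCy's English scheme.
    """
    doc = nlp(sentence)
    start = sentence.lower().find(concept.lower())  # naive span lookup, illustration only
    if start == -1:
        return False
    span = doc.char_span(start, start + len(concept), alignment_mode="expand")
    if span is None:
        return False
    head = span.root
    # A 'neg' dependent on the concept head or any of its ancestors indicates
    # that the negation scopes over the concept.
    governors = [head] + list(head.ancestors)
    return any(child.dep_ == "neg" for tok in governors for child in tok.children)

if __name__ == "__main__":
    print(concept_is_negated("The chest X-ray did not show pneumonia.", "pneumonia"))
    print(concept_is_negated("Patient reports worsening chest pain on exertion.", "chest pain"))
```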
S1532046415000441 | Identifying unknown drug interactions is of great benefit in the early detection of adverse drug reactions. Despite the existence of several resources for drug–drug interaction (DDI) information, the wealth of such information is buried in a body of unstructured medical text, which is growing exponentially. This calls for developing text mining techniques for identifying DDIs. The state-of-the-art DDI extraction methods use Support Vector Machines (SVMs) with non-linear composite kernels to explore diverse contexts in literature. While computationally less expensive, linear kernel-based systems have not achieved a comparable performance in DDI extraction tasks. In this work, we propose an efficient and scalable system using a linear kernel to identify DDI information. The proposed approach consists of two steps: identifying DDIs and assigning one of four different DDI types to the predicted drug pairs. We demonstrate that when equipped with a rich set of lexical and syntactic features, a linear SVM classifier is able to achieve a competitive performance in detecting DDIs. In addition, the one-against-one strategy proves vital for addressing an imbalance issue in DDI type classification. Applied to the DDIExtraction 2013 corpus, our system achieves an F1 score of 0.670, as compared to 0.651 and 0.609 reported by the top two participating teams in the DDIExtraction 2013 challenge, both based on non-linear kernel methods. | Extracting drug–drug interactions from literature using a rich feature-based linear kernel approach
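The abstract describes a two-step approach: binary DDI detection followed by one-against-one classification into four interaction types with a linear kernel over lexical and syntactic features. As a hedged sketch of the second step only, the snippet below wires scikit-learn's LinearSVC into an explicit one-vs-one scheme over simple bag-of-words features; the sentences and feature set are toy placeholders, not the authors' rich feature representation.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.multiclass import OneVsOneClassifier
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy sentences with the candidate drug pair masked, and the four DDIExtraction
# 2013 interaction types as labels (illustrative training data only).
sentences = [
    "DRUG1 increases the plasma concentration of DRUG2",
    "DRUG1 should not be combined with DRUG2",
    "concomitant use of DRUG1 and DRUG2 may enhance sedation",
    "the interaction between DRUG1 and DRUG2 has been reported",
]
labels = ["mechanism", "advise", "effect", "int"]

# One-vs-one trains a linear SVM per pair of classes, mirroring the strategy
# the authors report as important for handling class imbalance.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    OneVsOneClassifier(LinearSVC()),
)
model.fit(sentences, labels)
print(model.predict(["DRUG1 may potentiate the effect of DRUG2"]))
```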
S1532046415000465 | Background Understanding individual patient host-response to viruses is key to designing optimal personalized therapy. Unsurprisingly, in vivo human experimentation to understand the individualized dynamic response of the transcriptome to viruses is rarely conducted because of the obvious limitations stemming from ethical considerations of the clinical risk. Objective In this rhinovirus study, we first hypothesized that the ex vivo response of human cells to a virus can serve as a proxy for otherwise controversial in vivo human experimentation. We further hypothesized that the N-of-1-pathways framework, previously validated in cancer, can be effective in understanding the more subtle individual transcriptomic response to viral infection. Method N-of-1-pathways computes a significance score for a given list of gene sets at the patient level, using merely the ‘omics profiles of two paired samples as input. We extracted the peripheral blood mononuclear cells (PBMC) of four human subjects, aliquoted into two paired samples, one subjected to ex vivo rhinovirus infection. Their dysregulated genes and pathways were then compared to those of 9 human subjects prior to and after intranasal inoculation in vivo with rhinovirus. Additionally, we developed the Similarity Venn Diagram, a novel visualization method that goes beyond conventional overlap to show the similarity between two sets of qualitative measures. Results We evaluated the individual N-of-1-pathways results using two established cohort-based methods: GSEA and enrichment of differentially expressed genes. Similarity Venn Diagrams and individual patient ROC curves illustrate and quantify that the in vivo dysregulation is recapitulated ex vivo both at the gene and pathway level (p-values⩽0.004). Conclusion We established the first evidence that an interpretable dynamic transcriptome metric, conducted as an ex vivo assay for a single subject, has the potential to predict individualized response to infectious disease without the clinical risks otherwise associated with in vivo challenges. These results serve as a foundational work for personalized “virograms”. | Towards a PBMC “virogram assay” for precision medicine: Concordance between ex vivo and in vivo viral infection transcriptomes
S1532046415000738 | Although potential drug–drug interactions (PDDIs) are a significant source of preventable drug-related harm, there is currently no single complete source of PDDI information. In the current study, all publically available sources of PDDI information that could be identified using a comprehensive and broad search were combined into a single dataset. The combined dataset merged fourteen different sources including 5 clinically-oriented information sources, 4 Natural Language Processing (NLP) Corpora, and 5 Bioinformatics/Pharmacovigilance information sources. As a comprehensive PDDI source, the merged dataset might benefit the pharmacovigilance text mining community by making it possible to compare the representativeness of NLP corpora for PDDI text extraction tasks, and specifying elements that can be useful for future PDDI extraction purposes. An analysis of the overlap between and across the data sources showed that there was little overlap. Even comprehensive PDDI lists such as DrugBank, KEGG, and the NDF-RT had less than 50% overlap with each other. Moreover, all of the comprehensive lists had incomplete coverage of two data sources that focus on PDDIs of interest in most clinical settings. Based on this information, we think that systems that provide access to the comprehensive lists, such as APIs into RxNorm, should be careful to inform users that the lists may be incomplete with respect to PDDIs that drug experts suggest clinicians be aware of. In spite of the low degree of overlap, several dozen cases were identified where PDDI information provided in drug product labeling might be augmented by the merged dataset. Moreover, the combined dataset was also shown to improve the performance of an existing PDDI NLP pipeline and a recently published PDDI pharmacovigilance protocol. Future work will focus on improvement of the methods for mapping between PDDI information sources, identifying methods to improve the use of the merged dataset in PDDI NLP algorithms, integrating high-quality PDDI information from the merged dataset into Wikidata, and making the combined dataset accessible as Semantic Web Linked Data. | Toward a complete dataset of drug–drug interaction information from publicly available sources |
S1532046415000829 | Objective This paper describes the University of Michigan’s nine-year experience in developing and using a full-text search engine designed to facilitate information retrieval (IR) from narrative documents stored in electronic health records (EHRs). The system, called the Electronic Medical Record Search Engine (EMERSE), functions similar to Google but is equipped with special functionalities for handling challenges unique to retrieving information from medical text. Materials and methods Key features that distinguish EMERSE from general-purpose search engines are discussed, with an emphasis on functions crucial to (1) improving medical IR performance and (2) assuring search quality and results consistency regardless of users’ medical background, stage of training, or level of technical expertise. Results Since its initial deployment, EMERSE has been enthusiastically embraced by clinicians, administrators, and clinical and translational researchers. To date, the system has been used in supporting more than 750 research projects yielding 80 peer-reviewed publications. In several evaluation studies, EMERSE demonstrated very high levels of sensitivity and specificity in addition to greatly improved chart review efficiency. Discussion Increased availability of electronic data in healthcare does not automatically warrant increased availability of information. The success of EMERSE at our institution illustrates that free-text EHR search engines can be a valuable tool to help practitioners and researchers retrieve information from EHRs more effectively and efficiently, enabling critical tasks such as patient case synthesis and research data abstraction. Conclusion EMERSE, available free of charge for academic use, represents a state-of-the-art medical IR tool with proven effectiveness and user acceptance. | Supporting information retrieval from electronic health records: A report of University of Michigan’s nine-year experience in developing and using the Electronic Medical Record Search Engine (EMERSE) |
S1532046415000891 | The 2014 i2b2/UTHealth natural language processing shared task featured a track focused on identifying risk factors for heart disease (specifically, Coronary Artery Disease) in clinical narratives. For this track, we used a “light” annotation paradigm to annotate a set of 1304 longitudinal medical records describing 296 patients for risk factors and the times they were present. We designed the annotation task for this track with the goal of balancing annotation load and time with quality, so as to generate a gold standard corpus that can benefit a clinically-relevant task. We applied light annotation procedures and determined the gold standard using majority voting. On average, the agreement of annotators with the gold standard was above 0.95, indicating high reliability. The resulting document-level annotations generated for each record in each longitudinal EMR in this corpus provide information that can support studies of progression of heart disease risk factors in the included patients over time. These annotations were used in the Risk Factor track of the 2014 i2b2/UTHealth shared task. Participating systems achieved a mean micro-averaged F1 measure of 0.815 and a maximum F1 measure of 0.928 for identifying these risk factors in patient records. | Annotating risk factors for heart disease in clinical narratives for diabetic patients
S1532046415000969 | Risk sharing arrangements between hospitals and payers, together with penalties imposed by the Centers for Medicare and Medicaid Services (CMS), are driving an interest in decreasing early readmissions. There are a number of published risk models predicting 30-day readmissions for particular patient populations; however, they often exhibit poor predictive performance and would be unsuitable for use in a clinical setting. In this work we describe and compare several predictive models, some of which have never been applied to this task and which outperform the regression methods that are typically applied in the healthcare literature. In addition, we apply methods from deep learning to the five conditions CMS is using to penalize hospitals, and offer a simple framework for determining which conditions are most cost-effective to target. | A comparison of models for predicting early hospital readmissions
S1532046415001070 | Evaluation of survival models to predict cancer patient prognosis is one of the most important areas of emphasis in cancer research. A binary classification approach has difficulty directly predicting survival due to the characteristics of censored observations and the fact that the predictive power depends on the threshold used to set two classes. In contrast, the traditional Cox regression approach has some drawbacks in the sense that it does not allow for the identification of interactions between genomic features, which could have key roles associated with cancer prognosis. In addition, data integration is regarded as one of the important issues in improving the predictive power of survival models since cancer could be caused by multiple alterations through meta-dimensional genomic data including genome, epigenome, transcriptome, and proteome. Here we have proposed a new integrative framework designed to perform these three functions simultaneously: (1) predicting censored survival data; (2) integrating meta-dimensional omics data; (3) identifying interactions within/between meta-dimensional genomic features associated with survival. In order to predict censored survival time, martingale residuals were calculated as a new continuous outcome and a new fitness function used by the grammatical evolution neural network (GENN) based on mean absolute difference of martingale residuals was implemented. To test the utility of the proposed framework, a simulation study was conducted, followed by an analysis of meta-dimensional omics data including copy number, gene expression, DNA methylation, and protein expression data in breast cancer retrieved from The Cancer Genome Atlas (TCGA). On the basis of the results from breast cancer dataset, we were able to identify interactions not only within a single dimension of genomic data but also between meta-dimensional omics data that are associated with survival. Notably, the predictive power of our best meta-dimensional model was 73% which outperformed all of the other models conducted based on a single dimension of genomic data. Breast cancer is an extremely heterogeneous disease and the high levels of genomic diversity within/between breast tumors could affect the risk of therapeutic responses and disease progression. Thus, identifying interactions within/between meta-dimensional omics data associated with survival in breast cancer is expected to deliver direction for improved meta-dimensional prognostic biomarkers and therapeutic targets. | Predicting censored survival data based on the interactions between meta-dimensional omics data in breast cancer |
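The framework above uses martingale residuals from survival data as a continuous, censoring-aware outcome that a downstream learner (GENN, in the paper) can fit with a mean-absolute-difference fitness function. As a rough sketch of how such residuals can be obtained, the snippet below fits a Cox model with the lifelines package (assuming a recent release that provides `compute_residuals`) and extracts martingale residuals; the column names and data are illustrative, and the grammatical evolution step itself is not shown.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 200
# Toy "omics" features plus censored survival outcomes (illustrative only).
df = pd.DataFrame({
    "expr_gene_a": rng.normal(size=n),
    "methyl_probe_b": rng.normal(size=n),
    "time": rng.exponential(scale=24.0, size=n),  # months
    "event": rng.integers(0, 2, size=n),          # 1 = death observed, 0 = censored
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")

# Martingale residual for subject i: event indicator minus the estimated
# cumulative hazard accumulated up to the observed time. These residuals can
# then serve as the continuous target a downstream model tries to match.
martingale = cph.compute_residuals(df, kind="martingale")
print(martingale.head())
```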
S1532046415001124 | As healthcare shifts from the hospital to the home, it is becoming increasingly important to understand how patients interact with home medical devices, to inform the safe and patient-friendly design of these devices. Distributed Cognition (DCog) has been a useful theoretical framework for understanding situated interactions in the healthcare domain. However, it has not previously been applied to study interactions with home medical devices. In this study, DCog was applied to understand renal patients’ interactions with Home Hemodialysis Technology (HHT), as an example of a home medical device. Data was gathered through ethnographic observations and interviews with 19 renal patients and interviews with seven professionals. Data was analyzed through the principles summarized in the Distributed Cognition for Teamwork methodology. In this paper we focus on the analysis of system activities, information flows, social structures, physical layouts, and artefacts. By explicitly considering different ways in which cognitive processes are distributed, the DCog approach helped to understand patients’ interaction strategies, and pointed to design opportunities that could improve patients’ experiences of using HHT. The findings highlight the need to design HHT taking into consideration likely scenarios of use in the home and of the broader home context. A setting such as home hemodialysis has the characteristics of a complex and safety–critical socio-technical system, and a DCog approach effectively helps to understand how safety is achieved or compromised in such a system. | Understanding safety–critical interactions with a home medical device through Distributed Cognition |
S1532046415001161 | Background Self-monitoring is an integral component of many chronic diseases; however few theoretical frameworks address how individuals understand self-monitoring data and use it to guide self-management. Purpose To articulate a theoretical framework of sensemaking in diabetes self-management that integrates existing scholarship with empirical data. Methods The proposed framework is grounded in theories of sensemaking adopted from organizational behavior, education, and human–computer interaction. To empirically validate the framework the researchers reviewed and analyzed reports on qualitative studies of diabetes self-management practices published in peer-reviewed journals from 2000 to 2015. Results The proposed framework distinguishes between sensemaking and habitual modes of self-management and identifies three essential sensemaking activities: perception of new information related to health and wellness, development of inferences that inform selection of actions, and carrying out daily activities in response to new information. The analysis of qualitative findings from 50 published reports provided ample empirical evidence for the proposed framework; however, it also identified a number of barriers to engaging in sensemaking in diabetes self-management. Conclusions The proposed framework suggests new directions for research in diabetes self-management and for design of new informatics interventions for data-driven self-management. | Adopting the sensemaking perspective for chronic disease self-management |
S1532046415001173 | The 2014 i2b2/UTHealth Natural Language Processing (NLP) shared task featured four tracks. The first of these was the de-identification track focused on identifying protected health information (PHI) in longitudinal clinical narratives. The longitudinal nature of clinical narratives calls particular attention to details of information that, while benign on their own in separate records, can lead to identification of patients in combination in longitudinal records. Accordingly, the 2014 de-identification track addressed a broader set of entities and PHI than covered by the Health Insurance Portability and Accountability Act – the focus of the de-identification shared task that was organized in 2006. Ten teams tackled the 2014 de-identification task and submitted 22 system outputs for evaluation. Each team was evaluated on their best performing system output. Three of the 10 systems achieved F1 scores over .90, and seven of the top 10 scored over .75. The most successful systems combined conditional random fields and hand-written rules. Our findings indicate that automated systems can be very effective for this task, but that de-identification is not yet a solved problem. | Automated systems for the de-identification of longitudinal clinical narratives: Overview of 2014 i2b2/UTHealth shared task Track 1 |
S1532046415001197 | De-identification, identifying and removing all protected health information (PHI) present in clinical data including electronic medical records (EMRs), is a critical step in making clinical data publicly available. The 2014 i2b2 (Center of Informatics for Integrating Biology and Bedside) clinical natural language processing (NLP) challenge set up a track for de-identification (track 1). In this study, we propose a hybrid system based on both machine learning and rule approaches for the de-identification track. In our system, PHI instances are first identified by two (token-level and character-level) conditional random fields (CRFs) and a rule-based classifier, and the outputs are then merged by a set of rules. Experiments conducted on the i2b2 corpus show that our system submitted for the challenge achieves the highest micro F-scores of 94.64%, 91.24% and 91.63% under the “token”, “strict” and “relaxed” criteria respectively, which is among the top-ranked systems of the 2014 i2b2 challenge. After integrating some refined localization dictionaries, our system is further improved with F-scores of 94.83%, 91.57% and 91.95% under the “token”, “strict” and “relaxed” criteria respectively. | Automatic de-identification of electronic medical records using token-level and character-level conditional random fields
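Both CRF layers in such a system are sequence labelers over tokens (or characters) with handcrafted features. A minimal token-level sketch using the sklearn-crfsuite package is shown below; the features, PHI tags, and the tiny training example are placeholders for illustration rather than the authors' feature set or corpus.

```python
import sklearn_crfsuite

def token_features(tokens, i):
    """Very small illustrative feature set for the i-th token."""
    tok = tokens[i]
    return {
        "lower": tok.lower(),
        "is_digit": tok.isdigit(),
        "is_title": tok.istitle(),
        "suffix3": tok[-3:],
        "prev": tokens[i - 1].lower() if i > 0 else "<BOS>",
        "next": tokens[i + 1].lower() if i < len(tokens) - 1 else "<EOS>",
    }

# Tiny hand-labeled example in BIO format (DOCTOR and DATE as PHI types).
sentences = [["Seen", "by", "Dr.", "Smith", "on", "03/04/2010", "."]]
labels = [["O", "O", "O", "B-DOCTOR", "O", "B-DATE", "O"]]

X = [[token_features(s, i) for i in range(len(s))] for s in sentences]
y = labels

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=50)
crf.fit(X, y)
print(crf.predict(X))
```

A character-level CRF would follow the same pattern with characters as the sequence elements, which helps with PHI embedded inside tokens (e.g., identifiers fused with punctuation).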
S1532046415001203 | This paper describes a supervised machine learning approach for identifying heart disease risk factors in clinical text, and assessing the impact of annotation granularity and quality on the system’s ability to recognize these risk factors. We utilize a series of support vector machine models in conjunction with manually built lexicons to classify triggers specific to each risk factor. The features used for classification were quite simple, utilizing only lexical information and ignoring higher-level linguistic information such as syntax and semantics. Instead, we incorporated high-quality data to train the models by annotating additional information on top of a standard corpus. Despite the relative simplicity of the system, it achieves the highest scores (micro- and macro-F1, and micro- and macro-recall) out of the 20 participants in the 2014 i2b2/UTHealth Shared Task. This system obtains a micro- (macro-) precision of 0.8951 (0.8965), recall of 0.9625 (0.9611), and F1-measure of 0.9276 (0.9277). Additionally, we perform a series of experiments to assess the value of the annotated data we created. These experiments show how manually-labeled negative annotations can improve information extraction performance, demonstrating the importance of high-quality, fine-grained natural language annotations. | The role of fine-grained annotations in supervised recognition of risk factors for heart disease from EHRs
S1532046415001239 | Heart disease is the leading cause of death globally and a significant part of the human population lives with it. A number of risk factors have been recognized as contributing to the disease, including obesity, coronary artery disease (CAD), hypertension, hyperlipidemia, diabetes, smoking, and family history of premature CAD. This paper describes and evaluates a methodology to extract mentions of such risk factors from diabetic clinical notes, which was a task of the i2b2/UTHealth 2014 Challenge in Natural Language Processing for Clinical Data. The methodology is knowledge-driven and the system implements local lexicalized rules (based on syntactical patterns observed in notes) combined with manually constructed dictionaries that characterize the domain. A part of the task was also to detect the time interval in which the risk factors were present in a patient. The system was applied to an evaluation set of 514 unseen notes and achieved a micro-average F-score of 88% (with 86% precision and 90% recall). While the identification of CAD family history, medication and some of the related disease factors (e.g. hypertension, diabetes, hyperlipidemia) showed quite good results, the identification of CAD-specific indicators proved to be more challenging (F-score of 74%). Overall, the results are encouraging and suggested that automated text mining methods can be used to process clinical notes to identify risk factors and monitor progression of heart disease on a large-scale, providing necessary data for clinical and epidemiological studies. | Using local lexicalized rules to identify heart disease risk factors in clinical notes |
S1532046415001240 | Background: The determination of risk factors and their temporal relations in natural language patient records is a complex task which has been addressed in the i2b2/UTHealth 2014 shared task. In this context, in most systems it was broadly decomposed into two sub-tasks implemented by two components: entity detection, and temporal relation determination. Task-level (“black box”) evaluation is relevant for the final clinical application, whereas component-level evaluation (“glass box”) is important for system development and progress monitoring. Unfortunately, because of the interaction between entity representation and temporal relation representation, glass box and black box evaluation cannot be managed straightforwardly at the same time in the setting of the i2b2/UTHealth 2014 task, making it difficult to assess reliably the relative performance and contribution of the individual components to the overall task. Objective: To identify obstacles and propose methods to cope with this difficulty, and illustrate them through experiments on the i2b2/UTHealth 2014 dataset. Methods: We outline several solutions to this problem and examine their requirements in terms of adequacy for component-level and task-level evaluation and of changes to the task framework. We select the solution which requires the least modifications to the i2b2 evaluation framework and illustrate it with our system. This system identifies risk factor mentions with a CRF system complemented by hand-designed patterns, identifies and normalizes temporal expressions through a tailored version of the Heideltime tool, and determines temporal relations of each risk factor with a One Rule classifier. Results: Giving a fixed value to the temporal attribute in risk factor identification proved to be the simplest way to evaluate the risk factor detection component independently. This evaluation method enabled us to identify the risk factor detection component as most contributing to the false negatives and false positives of the global system. This led us to redirect further effort to this component, focusing on medication detection, with gains of 7 to 20 recall points and of 3 to 6 F-measure points depending on the corpus and evaluation. Conclusion: We proposed a method to achieve a clearer glass box evaluation of risk factor detection and temporal relation detection in clinical texts, which can provide an example to help system development in similar tasks. This glass box evaluation was instrumental in refocusing our efforts and obtaining substantial improvements in risk factor detection. | Combining glass box and black box evaluations in the identification of heart disease risk factors and their temporal relations from clinical records |
S1532046415001252 | This paper presents a natural language processing (NLP) system that was designed to participate in the 2014 i2b2 de-identification challenge. The challenge task aims to identify and classify seven main Protected Health Information (PHI) categories and 25 associated sub-categories. A hybrid model was proposed which combines machine learning techniques with keyword-based and rule-based approaches to deal with the complexity inherent in PHI categories. Our proposed approaches exploit a rich set of linguistic features, both syntactic and word surface-oriented, which are further enriched by task-specific features and regular expression template patterns to characterize the semantics of various PHI categories. Our system achieved promising accuracy on the challenge test data with an overall micro-averaged F-measure of 93.6%, which was the winner of this de-identification challenge. | Automatic detection of protected health information from clinic narratives |
S153204641500129X | Objective Some phase 1 clinical trials offer strong financial incentives for healthy individuals to participate in their studies. There is evidence that some individuals enroll in multiple trials concurrently. This creates safety risks and introduces data quality problems into the trials. Our objective was to construct a privacy preserving protocol to track phase 1 participants to detect concurrent enrollment. Design A protocol using secure probabilistic querying against a database of trial participants that allows for screening during telephone interviews and on-site enrollment was developed. The match variables consisted of demographic information. Measurement The accuracy (sensitivity, precision, and negative predictive value) of the matching and its computational performance in seconds were measured under simulated environments. Accuracy was also compared to non-secure matching methods. Results The protocol performance scales linearly with the database size. At the largest database size of 20,000 participants, a query takes under 20s on a 64-core machine. Sensitivity, precision, and negative predictive value of the queries were consistently at or above 0.9, and were very similar to non-secure versions of the protocol. Conclusion The protocol provides a reasonable solution to the concurrent enrollment problems in phase 1 clinical trials, and is able to ensure that personal information about participants is kept secure. | A privacy preserving protocol for tracking participants in phase I clinical trials
S1532046415001343 | Objective This paper introduces a model that predicts future changes in systolic blood pressure (SBP) based on structured and unstructured (text-based) information from longitudinal clinical records. Method For each patient, the clinical records are sorted in chronological order and SBP measurements are extracted from them. The model predicts future changes in SBP based on the preceding clinical notes. This is accomplished using least median squares regression on salient features found using a feature selection algorithm. Results Using the prediction model, a correlation coefficient of 0.47 is achieved on unseen test data (p <.0001). This is in contrast to a baseline correlation coefficient of 0.39. | Predicting changes in systolic blood pressure using longitudinal patient records |
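Least median of squares (LMS) regression, used above to fit future SBP changes, is robust to outliers but has no closed-form solution; it is commonly approximated by fitting ordinary least squares on many random subsets and keeping the coefficients that minimize the median squared residual over all data. The sketch below implements that generic random-subset approximation in NumPy; it is not the authors' pipeline (which also performs text-derived feature selection), and all names and data are illustrative.

```python
import numpy as np

def lms_regression(X, y, n_trials=500, seed=0):
    """Approximate least-median-of-squares coefficients via random subsets.

    For each trial, fit ordinary least squares on a minimal random subset and
    score the fit by the median squared residual over the full data; keep the
    best-scoring coefficient vector (intercept first).
    """
    rng = np.random.default_rng(seed)
    n, p = X.shape
    Xd = np.hstack([np.ones((n, 1)), X])   # add intercept column
    subset_size = Xd.shape[1]              # minimal subsets of p + 1 points
    best_beta, best_score = None, np.inf
    for _ in range(n_trials):
        idx = rng.choice(n, size=subset_size, replace=False)
        beta, *_ = np.linalg.lstsq(Xd[idx], y[idx], rcond=None)
        score = np.median((y - Xd @ beta) ** 2)
        if score < best_score:
            best_beta, best_score = beta, score
    return best_beta

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 3))                  # e.g. features from prior notes
    y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(size=200)
    y[:10] += 40                                   # a few gross outliers
    print(lms_regression(X, y))                    # robust intercept and slopes
```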
S1532046415001380 | Biclustering has become a popular technique for the study of gene expression data, especially for discovering functionally related gene sets under different subsets of experimental conditions. Most biclustering approaches use a measure or cost function that determines the quality of biclusters. In such cases, the development of both a suitable heuristic and a good measure for guiding the search is essential for discovering interesting biclusters in an expression matrix. Nevertheless, not all existing biclustering approaches base their search on evaluation measures for biclusters. There exists a diverse set of biclustering tools that follow different strategies and algorithmic concepts which guide the search towards meaningful results. In this paper we present an extensive survey of biclustering approaches, classifying them into two categories according to whether or not they use evaluation metrics within the search method: biclustering algorithms based on evaluation measures and non-metric-based biclustering algorithms. In both cases, they have been classified according to the type of meta-heuristics on which they are based. | Biclustering on expression data: A review
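One of the most widely used bicluster quality measures in this literature is Cheng and Church's mean squared residue (MSR), which scores how coherently rows and columns co-vary within a bicluster (lower is better). The snippet below computes MSR for a submatrix with NumPy, as a generic worked example of the kind of evaluation measure the survey classifies algorithms by; the planted data and function name are illustrative.

```python
import numpy as np

def mean_squared_residue(expression, rows, cols):
    """Cheng & Church mean squared residue of the bicluster (rows, cols).

    residue(i, j) = a_ij - a_iJ - a_Ij + a_IJ, where a_iJ and a_Ij are the row
    and column means inside the bicluster and a_IJ is its overall mean.
    """
    sub = expression[np.ix_(rows, cols)]
    row_means = sub.mean(axis=1, keepdims=True)
    col_means = sub.mean(axis=0, keepdims=True)
    overall = sub.mean()
    residues = sub - row_means - col_means + overall
    return float((residues ** 2).mean())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    data = rng.normal(size=(20, 10))
    # Plant a coherent (additive) bicluster in rows 0-4, columns 0-3.
    data[:5, :4] += np.arange(5)[:, None] + np.arange(4)[None, :]
    print(mean_squared_residue(data, rows=[0, 1, 2, 3, 4], cols=[0, 1, 2, 3]))   # low: additive pattern is coherent
    print(mean_squared_residue(data, rows=list(range(10)), cols=list(range(8)))) # higher: mixes pattern and noise
```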
S1532046415001392 | A recent promise to access unstructured clinical data from electronic health records on a large scale has revitalized the interest in automated de-identification of clinical notes, which includes the identification of mentions of Protected Health Information (PHI). We describe the methods developed and evaluated as part of the i2b2/UTHealth 2014 challenge to identify PHI defined by 25 entity types in longitudinal clinical narratives. Our approach combines knowledge-driven (dictionaries and rules) and data-driven (machine learning) methods with a large range of features to address de-identification of specific named entities. In addition, we have devised a two-pass recognition approach that creates a patient-specific run-time dictionary from the PHI entities identified in the first step with high confidence, which is then used in the second pass to identify mentions that lack specific clues. The proposed method achieved the overall micro F1-measures of 91% on strict and 95% on token-level evaluation on the test dataset (514 narratives). Whilst most PHI entities can be reliably identified, particularly challenging were mentions of Organizations and Professions. Still, the overall results suggest that automated text mining methods can be used to reliably process clinical notes to identify personal information, thus providing a crucial step in large-scale de-identification of unstructured data for further clinical and epidemiological studies. | Combining knowledge- and data-driven methods for de-identification of clinical narratives
S1532046415001409 | The second track of the 2014 i2b2/UTHealth natural language processing shared task focused on identifying medical risk factors related to Coronary Artery Disease (CAD) in the narratives of longitudinal medical records of diabetic patients. The risk factors included hypertension, hyperlipidemia, obesity, smoking status, and family history, as well as diabetes and CAD, and indicators that suggest the presence of those diseases. In addition to identifying the risk factors, this track of the 2014 i2b2/UTHealth shared task studied the presence and progression of the risk factors in longitudinal medical records. Twenty teams participated in this track, and submitted 49 system runs for evaluation. Six of the top 10 teams achieved F1 scores over 0.90, and all 10 scored over 0.87. The most successful system used a combination of additional annotations, external lexicons, hand-written rules and Support Vector Machines. The results of this track indicate that identification of risk factors and their progression over time is well within the reach of automated systems. | Identifying risk factors for heart disease over time: Overview of 2014 i2b2/UTHealth shared task Track 2 |
S1532046415001410 | This paper describes the use of an agile text mining platform (Linguamatics’ Interactive Information Extraction Platform, I2E) to extract document-level cardiac risk factors in patient records as defined in the i2b2/UTHealth 2014 challenge. The approach uses a data-driven rule-based methodology with the addition of a simple supervised classifier. We demonstrate that agile text mining allows for rapid optimization of extraction strategies, while post-processing can leverage annotation guidelines, corpus statistics and logic inferred from the gold standard data. We also show how data imbalance in a training set affects performance. Evaluation of this approach on the test data gave an F-Score of 91.7%, one percent behind the top performing system. | Agile text mining for the 2014 i2b2/UTHealth Cardiac risk factors challenge |
S1532046415001483 | Objective In recognition of potential barriers that may inhibit the widespread adoption of biomedical software, the 2014 i2b2 Challenge introduced a special track, Track 3 – Software Usability Assessment, in order to develop a better understanding of the adoption issues that might be associated with the state-of-the-art clinical NLP systems. This paper reports the ease of adoption assessment methods we developed for this track, and the results of evaluating five clinical NLP system submissions. Materials and methods A team of human evaluators performed a series of scripted adoptability test tasks with each of the participating systems. The evaluation team consisted of four “expert evaluators” with training in computer science, and eight “end user evaluators” with mixed backgrounds in medicine, nursing, pharmacy, and health informatics. We assessed how easy it is to adopt the submitted systems along the following three dimensions: communication effectiveness (i.e., how effective a system is in communicating its designed objectives to intended audience), effort required to install, and effort required to use. We used a formal software usability testing tool, TURF, to record the evaluators’ interactions with the systems and ‘think-aloud’ data revealing their thought processes when installing and using the systems and when resolving unexpected issues. Results Overall, the ease of adoption ratings that the five systems received are unsatisfactory. Installation of some of the systems proved to be rather difficult, and some systems failed to adequately communicate their designed objectives to intended adopters. Further, the average ratings provided by the end user evaluators on ease of use and ease of interpreting output are −0.35 and −0.53, respectively, indicating that this group of users generally deemed the systems extremely difficult to work with. While the ratings provided by the expert evaluators are higher, 0.6 and 0.45, respectively, these ratings are still low indicating that they also experienced considerable struggles. Discussion The results of the Track 3 evaluation show that the adoptability of the five participating clinical NLP systems has a great margin for improvement. Remedy strategies suggested by the evaluators included (1) more detailed and operation system specific use instructions; (2) provision of more pertinent onscreen feedback for easier diagnosis of problems; (3) including screen walk-throughs in use instructions so users know what to expect and what might have gone wrong; (4) avoiding jargon and acronyms in materials intended for end users; and (5) packaging prerequisites required within software distributions so that prospective adopters of the software do not have to obtain each of the third-party components on their own. | Ease of adoption of clinical natural language processing software: An evaluation of five systems |
S1532046415001501 | Background Identifying key variables such as disorders within the clinical narratives in electronic health records has wide-ranging applications within clinical practice and biomedical research. Previous research has demonstrated reduced performance of disorder named entity recognition (NER) and normalization (or grounding) in clinical narratives compared to biomedical publications. In this work, we aim to identify the cause for this performance difference and introduce general solutions. Methods We use closure properties to compare the richness of the vocabulary in clinical narrative text to biomedical publications. We approach both disorder NER and normalization using machine learning methodologies. Our NER methodology is based on linear-chain conditional random fields with a rich feature approach, and we introduce several improvements to enhance the lexical knowledge of the NER system. Our normalization method – never previously applied to clinical data – uses pairwise learning to rank to automatically learn term variation directly from the training data. Results We find that while the size of the overall vocabulary is similar between clinical narrative and biomedical publications, clinical narrative uses a richer terminology to describe disorders than publications. We apply our system, DNorm-C, to locate and normalize disorder mentions in the clinical narratives from the recent ShARe/CLEF eHealth Task. For NER (strict span-only), our system achieves precision=0.797, recall=0.713, f-score=0.753. For the normalization task (strict span+concept) it achieves precision=0.712, recall=0.637, f-score=0.672. The improvements described in this article increase the NER f-score by 0.039 and the normalization f-score by 0.036. We also describe a high recall version of the NER, which increases the normalization recall to as high as 0.744, albeit with reduced precision. Discussion We perform an error analysis, demonstrating that NER errors outnumber normalization errors by more than 4-to-1. Abbreviations and acronyms are found to be frequent causes of error, in addition to the mentions the annotators were not able to identify within the scope of the controlled vocabulary. Conclusion Disorder mentions in text from clinical narratives use a rich vocabulary that results in high term variation, which we believe to be one of the primary causes of reduced performance in clinical narrative. We show that pairwise learning to rank offers high performance in this context, and introduce several lexical enhancements – generalizable to other clinical NER tasks – that improve the ability of the NER system to handle this variation. DNorm-C is a high performing, open source system for disorders in clinical text, and a promising step toward NER and normalization methods that are trainable to a wide variety of domains and entities. (DNorm-C is open source software, and is available with a trained model at the DNorm demonstration website: http://www.ncbi.nlm.nih.gov/CBBresearch/Lu/Demo/tmTools/#DNorm.) | Challenges in clinical natural language processing for automated disorder normalization
S1532046415001562 | Automated phenotype identification plays a critical role in cohort selection and bioinformatics data mining. Natural Language Processing (NLP)-informed classification techniques can robustly identify phenotypes in unstructured medical notes. In this paper, we systematically assess the effect of naive, lexically normalized, and semantic feature spaces on classifier performance for obesity, atherosclerotic cardiovascular disease (CAD), hyperlipidemia, hypertension, and diabetes. We train support vector machines (SVMs) using individual feature spaces as well as combinations of these feature spaces on two small training corpora (730 and 790 documents) and a combined (1520 documents) training corpus. We assess the importance of feature spaces and training data size on SVM model performance. We show that inclusion of semantically-informed features does not statistically improve performance for these models. The addition of training data has weak effects of mixed statistical significance across disease classes suggesting larger corpora are not necessary to achieve relatively high performance with these models. | A systematic comparison of feature space effects on disease classifier performance for phenotype identification of five diseases |
S1532046415001690 | The 2014 i2b2 natural language processing shared task focused on identifying cardiovascular risk factors such as high blood pressure, high cholesterol levels, obesity and smoking status among other factors found in health records of diabetic patients. In addition, the task involved detecting medications and time information associated with the extracted data. This paper presents the development and evaluation of a natural language processing (NLP) application conceived for this i2b2 shared task. For increased efficiency, the application’s main components were adapted from two existing NLP tools implemented in the Apache UIMA framework: Textractor (for dictionary-based lookup) and cTAKES (for preprocessing and smoking status detection). The application achieved a final (micro-averaged) F1-measure of 87.5% on the final evaluation test set. Our attempt was mostly based on existing tools adapted with minimal changes and allowed for satisfactory performance with limited development efforts. | Adapting existing natural language processing resources for cardiovascular risk factors identification in clinical notes
S1532046415001707 | Coronary artery disease (CAD) often leads to myocardial infarction, which may be fatal. Risk factors can be used to predict CAD, which may subsequently lead to prevention or early intervention. Patient data such as co-morbidities, medication history, social history and family history are required to determine the risk factors for a disease. However, risk factor data are usually embedded in unstructured clinical narratives if the data is not collected specifically for risk assessment purposes. Clinical text mining can be used to extract data related to risk factors from unstructured clinical notes. This study presents methods to extract Framingham risk factors from unstructured electronic health records using clinical text mining and to calculate 10-year coronary artery disease risk scores in a cohort of diabetic patients. We developed a rule-based system to extract risk factors: age, gender, total cholesterol, HDL-C, blood pressure, diabetes history and smoking history. The results showed that the output from the text mining system was reliable, but there was a significant amount of missing data to calculate the Framingham risk score. A systematic approach for understanding missing data was followed by implementation of imputation strategies. An analysis of the 10-year Framingham risk scores for coronary artery disease in this cohort has shown that the majority of the diabetic patients are at moderate risk of CAD. | Coronary artery disease risk assessment from unstructured electronic health records using text mining |
S1532046415001768 | We present the design, and analyze the performance of a multi-stage natural language processing system employing named entity recognition, Bayesian statistics, and rule logic to identify and characterize heart disease risk factor events in diabetic patients over time. The system was originally developed for the 2014 i2b2 Challenges in Natural Language in Clinical Data. The system’s strengths included a high level of accuracy for identifying named entities associated with heart disease risk factor events. The system’s primary weakness was due to inaccuracies when characterizing the attributes of some events. For example, determining the relative time of an event with respect to the record date, whether an event is attributable to the patient’s history or the patient’s family history, and differentiating between current and prior smoking status. We believe these inaccuracies were due in large part to the lack of an effective approach for integrating context into our event detection model. To address these inaccuracies, we explore the addition of a distributional semantic model for characterizing contextual evidence of heart disease risk factor events. Using this semantic model, we raise our initial 2014 i2b2 Challenges in Natural Language of Clinical data F1 score of 0.838 to 0.890 and increased precision by 10.3% without use of any lexicons that might bias our results. | Mining heart disease risk factors in clinical text with named entity recognition and distributional semantic models |
S1532046415001781 | In the United States, about 600,000 people die of heart disease every year. The annual cost of care services, medications, and lost productivity reportedly exceeds 108.9 billion dollars. Effective disease risk assessment is critical to prevention, care, and treatment planning. Recent advancements in text analytics have opened up new possibilities of using the rich information in electronic medical records (EMRs) to identify relevant risk factors. The 2014 i2b2/UTHealth Challenge brought together researchers and practitioners of clinical natural language processing (NLP) to tackle the identification of heart disease risk factors reported in EMRs. We participated in this track and developed an NLP system by leveraging existing tools and resources, both public and proprietary. Our system was a hybrid of several machine-learning and rule-based components. The system achieved an overall F1 score of 0.9185, with a recall of 0.9409 and a precision of 0.8972. | Risk factor detection for heart disease by applying text analytics in electronic medical records
S1532046415001793 | De-identification is a shared task of the 2014 i2b2/UTHealth challenge. The purpose of this task is to remove protected health information (PHI) from medical records. In this paper, we propose a novel de-identifier, WI-deId, based on conditional random fields (CRFs). A preprocessing module, which tokenizes the medical records using regular expressions and an off-the-shelf tokenizer, is introduced, and three groups of features are extracted to train the de-identifier model. The experiment shows that our system is effective in the de-identification of medical records, achieving a micro-F1 of 0.9232 at the i2b2 strict entity evaluation level. | CRFs based de-identification of medical records |
S153204641500180X | For the purpose of post-marketing drug safety surveillance, which has traditionally relied on the voluntary reporting of individual cases of adverse drug events (ADEs), other sources of information are now being explored, including electronic health records (EHRs), which give us access to enormous amounts of longitudinal observations of the treatment of patients and their drug use. Adverse drug events, which can be encoded in EHRs with certain diagnosis codes, are, however, heavily underreported. It is therefore important to develop capabilities to process, by means of computational methods, the more unstructured EHR data in the form of clinical notes, where clinicians may describe and reason around suspected ADEs. In this study, we report on the creation of an annotated corpus of Swedish health records for the purpose of learning to identify information pertaining to ADEs present in clinical notes. To this end, three key tasks are tackled: recognizing relevant named entities (disorders, symptoms, drugs), labeling attributes of the recognized entities (negation, speculation, temporality), and relationships between them (indication, adverse drug event). For each of the three tasks, leveraging models of distributional semantics – i.e., unsupervised methods that exploit co-occurrence information to model, typically in vector space, the meaning of words – and, in particular, combinations of such models, is shown to improve the predictive performance. The ability to make use of such unsupervised methods is critical when faced with large amounts of sparse and high-dimensional data, especially in domains where annotated resources are scarce. | Identifying adverse drug event information in clinical notes with distributional semantic representations of context |
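Distributional semantic models of the kind used above derive word vectors from co-occurrence statistics in unannotated text. As a compact, generic illustration (not the authors' specific word-space or ensemble setup), the snippet below builds a word–word co-occurrence matrix from a toy corpus and reduces it with truncated SVD to obtain dense context vectors that could feed a downstream NER or relation classifier; the corpus, window size, and dimensionality are arbitrary choices for the example.

```python
import numpy as np
from sklearn.decomposition import TruncatedSVD

corpus = [
    "patient developed rash after starting penicillin",
    "rash resolved after stopping penicillin",
    "metformin was prescribed for diabetes",
    "patient reports nausea after metformin",
]
window = 2

# Build a symmetric word-word co-occurrence count matrix.
vocab = sorted({w for sent in corpus for w in sent.split()})
index = {w: i for i, w in enumerate(vocab)}
counts = np.zeros((len(vocab), len(vocab)))
for sent in corpus:
    words = sent.split()
    for i, w in enumerate(words):
        for j in range(max(0, i - window), min(len(words), i + window + 1)):
            if i != j:
                counts[index[w], index[words[j]]] += 1

# Dense low-dimensional vectors: each row is a distributional representation.
svd = TruncatedSVD(n_components=3, random_state=0)
vectors = svd.fit_transform(counts)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(vectors[index["penicillin"]], vectors[index["metformin"]]))
```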
S1532046415001823 | The 2014 i2b2/UTHealth natural language processing shared task featured a track focused on the de-identification of longitudinal medical records. For this track, we de-identified a set of 1304 longitudinal medical records describing 296 patients. This corpus was de-identified under a broad interpretation of the HIPAA guidelines using double-annotation followed by arbitration, rounds of sanity checking, and proofreading. The average token-based F1 measure for the annotators compared to the gold standard was 0.927. The resulting annotations were used both to de-identify the data and to set the gold standard for the de-identification track of the 2014 i2b2/UTHealth shared task. All annotated private health information was replaced with realistic surrogates automatically and then read over and corrected manually. The resulting corpus is the first of its kind made available for de-identification research. This corpus was first used for the 2014 i2b2/UTHealth shared task, during which the systems achieved a mean F-measure of 0.872 and a maximum F-measure of 0.964 using entity-based micro-averaged evaluations. | Annotating longitudinal clinical narratives for de-identification: The 2014 i2b2/UTHealth corpus
S1532046415001835 | Objectives The increase in potential medical demand in China has threatened population health, medical equity, and accessibility to medical services, and has impeded the development of the Chinese health delivery system. This study aims to understand the mechanism behind the increasing potential medical demand and to identify possible solutions. Methods We constructed a system dynamics model to analyze and simulate this problem and to predict the influence of health policies (adjusting the relative numbers of hospitals and community health systems (CHSs), adjusting outpatient prices, and adjusting the level of health insurance) on the actual percentage of patients not seeking medical care. Results Decreasing the number of hospitals, increasing the number of CHSs, and raising the proportion of health insurance compensation would effectively increase the transformation of potential medical demand. At present, however, changes in outpatient prices did not play a role in the transformation of potential medical demand. Conclusions Based on the validation analysis and model simulation, we suggest several possible solutions. The main factors causing potential medical demand are accessibility to medical services and the proportion of health insurance compensation. Thus, adjusting the numbers of hospitals and CHSs and increasing the proportion of health insurance compensation should decrease the actual percentage of patients not seeking medical care and accelerate the transformation of potential medical demand; these levers deserve attention in policymaking. | Transformation of potential medical demand in China: A system dynamics simulation model
S153204641500194X | Despite recent progress in prediction and prevention, heart disease remains a leading cause of death. One preliminary step in heart disease prediction and prevention is risk factor identification. Many studies have been proposed to identify risk factors associated with heart disease; however, none have attempted to identify all risk factors. In 2014, the National Center of Informatics for Integrating Biology and Bedside (i2b2) issued a clinical natural language processing (NLP) challenge that involved a track (track 2) for identifying heart disease risk factors in clinical texts over time. This track aimed to identify medically relevant information related to heart disease risk and track the progression over sets of longitudinal patient medical records. Identification of tags and attributes associated with disease presence and progression, risk factors, and medications in patient medical history was required. Our participation led to development of a hybrid pipeline system based on both machine learning-based and rule-based approaches. Evaluation using the challenge corpus revealed that our system achieved an F1-score of 92.68%, making it the top-ranked system (without additional annotations) of the 2014 i2b2 clinical NLP challenge. | An automatic system to identify heart disease risk factors in clinical texts over time
S1532046415001963 | For the 2014 i2b2/UTHealth de-identification challenge, we introduced a new non-parametric Bayesian hidden Markov model using a Dirichlet process (HMM-DP). The model intends to reduce task-specific feature engineering and to generalize well to new data. In the challenge we developed a variational method to learn the model and an efficient approximation algorithm for prediction. To accommodate out-of-vocabulary words, we designed a number of feature functions to model such words. The results show the model is capable of understanding local context cues to make correct predictions without manual feature engineering and performs as accurately as state-of-the-art conditional random field models in a number of categories. To incorporate long-range and cross-document context cues, we developed a skip-chain conditional random field model to align the results produced by HMM-DP, which further improved the performance. | Hidden Markov model using Dirichlet process for de-identification |
S1532046415001987 | The second track of the 2014 i2b2 challenge asked participants to automatically identify risk factors for heart disease among diabetic patients using natural language processing techniques for clinical notes. This paper describes a rule-based system developed using a combination of regular expressions, concepts from the Unified Medical Language System (UMLS), and freely-available resources from the community. With a performance (F1=90.7) that is significantly higher than the median (F1=87.20) and close to the top performing system (F1=92.8), it was the best rule-based system of all the submissions in the challenge. We also used this system to evaluate the utility of different terminologies in the UMLS towards the challenge task. Of the 155 terminologies in the UMLS, 129 (76.78%) have no representation in the corpus. The Consumer Health Vocabulary had very good coverage of relevant concepts and was the most useful terminology for the challenge task. While segmenting notes into sections and lists has a significant impact on the performance, identifying negations and experiencer of the medical event results in negligible gain. | Comparison of UMLS terminologies to identify risk of heart disease using clinical notes |
S1532046415001999 | Coronary artery disease (CAD) is the leading cause of death in both the UK and worldwide. The detection of related risk factors and tracking their progress over time is of great importance for early prevention and treatment of CAD. This paper describes an information extraction system that was developed to automatically identify risk factors for heart disease in medical records while the authors participated in the 2014 i2b2/UTHealth NLP Challenge. Our approaches rely on several natural language processing (NLP) techniques such as machine learning, rule-based methods, and dictionary-based keyword spotting to cope with complicated clinical contexts inherent in a wide variety of risk factors. Our system achieved encouraging performance on the challenge test data with an overall micro-averaged F-measure of 0.915, which was competitive with the best system (F-measure of 0.927) of this challenge task. | A hybrid model for automatic identification of risk factors for heart disease
S1532046415002002 | Objective With the ARX data anonymization tool structured biomedical data can be de-identified using syntactic privacy models, such as k-anonymity. Data is transformed with two methods: (a) generalization of attribute values, followed by (b) suppression of data records. The former method results in data that is well suited for analyses by epidemiologists, while the latter method significantly reduces loss of information. Our tool uses an optimal anonymization algorithm that maximizes output utility according to a given measure. To achieve scalability, existing optimal anonymization algorithms exclude parts of the search space by predicting the outcome of data transformations regarding privacy and utility without explicitly applying them to the input dataset. These optimizations cannot be used if data is transformed with generalization and suppression. As optimal data utility and scalability are important for anonymizing biomedical data, we had to develop a novel method. Methods In this article, we first confirm experimentally that combining generalization with suppression significantly increases data utility. Next, we prove that, within this coding model, the outcome of data transformations regarding privacy and utility cannot be predicted. As a consequence, existing algorithms fail to deliver optimal data utility. We confirm this finding experimentally. The limitation of previous work can be overcome at the cost of increased computational complexity. However, scalability is important for anonymizing data with user feedback. Consequently, we identify properties of datasets that may be predicted in our context and propose a novel and efficient algorithm. Finally, we evaluate our solution with multiple datasets and privacy models. Results This work presents the first thorough investigation of which properties of datasets can be predicted when data is anonymized with generalization and suppression. Our novel approach adopts existing optimization strategies to our context and combines different search methods. The experiments show that our method is able to efficiently solve a broad spectrum of anonymization problems. Conclusion Our work shows that implementing syntactic privacy models is challenging and that existing algorithms are not well suited for anonymizing data with transformation models which are more complex than generalization alone. As such models have been recommended for use in the biomedical domain, our results are of general relevance for de-identifying structured biomedical data. | The cost of quality: Implementing generalization and suppression for anonymizing biomedical data with minimal information loss
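As a rough illustration of the coding model discussed above (this is not the ARX algorithm itself), the following sketch generalizes two hypothetical quasi-identifiers (age and ZIP code) and then suppresses records in equivalence classes smaller than k; the attribute names and generalization hierarchy are assumptions made for the example.

```python
from collections import defaultdict

def generalize_age(age, level):
    """Generalize an age value into coarser bins as the level increases."""
    width = [1, 5, 10, 100][level]          # level 3 fully suppresses the value
    low = (age // width) * width
    return f"{low}-{low + width - 1}" if width < 100 else "*"

def anonymize(records, k, age_level):
    """Apply generalization, then suppress records in equivalence classes smaller than k."""
    groups = defaultdict(list)
    for rec in records:
        key = (generalize_age(rec["age"], age_level), rec["zip"][:3] + "**")
        groups[key].append(rec)
    kept, suppressed = [], 0
    for key, members in groups.items():
        if len(members) >= k:                # class satisfies k-anonymity
            kept.extend({"age": key[0], "zip": key[1]} for _ in members)
        else:
            suppressed += len(members)       # suppression instead of further generalization
    return kept, suppressed

records = [{"age": 34, "zip": "81675"}, {"age": 36, "zip": "81677"},
           {"age": 35, "zip": "81679"}, {"age": 70, "zip": "80331"}]
kept, suppressed = anonymize(records, k=2, age_level=1)
print(len(kept), "records kept,", suppressed, "suppressed")
```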
S1532046415002014 | Clinical trials are essential for determining whether new interventions are effective. In order to determine the eligibility of patients to enroll into these trials, clinical trial coordinators often perform a manual review of clinical notes in the electronic health record of patients. This is a very time-consuming and exhausting task. Efforts in this process can be expedited if these coordinators are directed toward specific parts of the text that are relevant for eligibility determination. In this study, we describe the creation of a dataset that can be used to evaluate automated methods capable of identifying sentences in a note that are relevant for screening a patient’s eligibility in clinical trials. Using this dataset, we also present results for four simple methods in natural language processing that can be used to automate this task. We found that this is a challenging task (maximum F-score=26.25), but it is a promising direction for further research. | Textual inference for eligibility criteria resolution in clinical trials |
S1532046415002075 | Electronic medical records (EMRs) for diabetic patients contain information about heart disease risk factors such as high blood pressure, cholesterol levels, and smoking status. Discovering the described risk factors and tracking their progression over time may support medical personnel in making clinical decisions, as well as facilitate data modeling and biomedical research. Such highly patient-specific knowledge is essential to driving the advancement of evidence-based practice, and can also help improve personalized medicine and care. One general approach for tracking the progression of diseases and their risk factors described in EMRs is to first recognize all temporal expressions, and then assign each of them to the nearest target medical concept. However, this method may not always provide the correct associations. In light of this, this work introduces a context-aware approach to assign the time attributes of the recognized risk factors by reconstructing contexts that contain more reliable temporal expressions. The evaluation results on the i2b2 test set demonstrate the efficacy of the proposed approach, which achieved an F-score of 0.897. To boost the approach’s ability to process unstructured clinical text and to allow for the reproduction of the demonstrated results, a set of developed .NET libraries used to develop the system is available at https://sites.google.com/site/hongjiedai/projects/nttmuclinicalnet. | A context-aware approach for progression tracking of medical concepts in electronic medical records |
S1532046415002129 | The 2014 i2b2/UTHealth Natural Language Processing (NLP) shared task featured a new longitudinal corpus of 1304 records representing 296 diabetic patients. The corpus contains three cohorts: patients who have a diagnosis of coronary artery disease (CAD) in their first record, and continue to have it in subsequent records; patients who do not have a diagnosis of CAD in the first record, but develop it by the last record; patients who do not have a diagnosis of CAD in any record. This paper details the process used to select records for this corpus and provides an overview of novel research uses for this corpus. This corpus is the only annotated corpus of longitudinal clinical narratives currently available for research to the general research community. | Creation of a new longitudinal corpus of clinical narratives |
S1532046415002233 | We present the Unsupervised Phenome Model (UPhenome), a probabilistic graphical model for large-scale discovery of computational models of disease, or phenotypes. We tackle this challenge through the joint modeling of a large set of diseases and a large set of clinical observations. The observations are drawn directly from heterogeneous patient record data (notes, laboratory tests, medications, and diagnosis codes), and the diseases are modeled in an unsupervised fashion. We apply UPhenome to two qualitatively different mixtures of patients and diseases: records of extremely sick patients in the intensive care unit with constant monitoring, and records of outpatients regularly followed by care providers over multiple years. We demonstrate that the UPhenome model can learn from these different care settings, without any additional adaptation. Our experiments show that (i) the learned phenotypes combine the heterogeneous data types more coherently than baseline LDA-based phenotypes; (ii) they each represent single diseases rather than a mix of diseases more often than the baseline ones; and (iii) when applied to unseen patient records, they are correlated with the patients’ ground-truth disorders. Code for training, inference, and quantitative evaluation is made available to the research community. | Learning probabilistic phenotypes from heterogeneous EHR data |
S1532046415002294 | i2b2 (Informatics for Integrating Biology and the Bedside) is a National Center for Biomedical Computing based at Partners HealthCare System. The i2b2 center was funded by the National Institutes of Health, from 2004 to 2014, to build a “scalable framework” and resources for enabling researchers to use existing data for investigation of diseases with genetic origins. The framework and resources built by i2b2 investigators have “enjoyed wide international adoption by the CTSA network, academic health centers, and industry”. Secondary use of electronic health records (EHRs) constituted one of the cornerstones of i2b2. Over the years, i2b2 investigators built various corpora in support of Natural Language Processing (NLP) research that aimed to extract meaningful information from narrative health records in order to enable their use for disease discovery and investigation. These corpora were developed with two goals in mind: 1. To enable progress in the field of clinical NLP by advancing the state of the art. 2. To enable clinical applications of NLP. | Practical applications for natural language processing in clinical research: The 2014 i2b2/UTHealth shared tasks
S1532046415002324 | The causal and interplay mechanisms of Single Nucleotide Polymorphisms (SNPs) associated with complex diseases (complex disease SNPs) investigated in genome-wide association studies (GWAS) at the transcriptional level (mRNA) are poorly understood despite recent advancements such as discoveries reported in the Encyclopedia of DNA Elements (ENCODE) and Genotype-Tissue Expression (GTEx). Protein interaction network analyses have successfully improved our understanding of both single gene diseases (Mendelian diseases) and complex diseases. Whether the mRNAs downstream of complex disease genes are central or peripheral in the genetic information flow relating DNA to mRNA remains unclear and may be disease-specific. Using expression Quantitative Trait Loci (eQTL) that provide DNA to mRNA associations and network centrality metrics, we hypothesize that we can unveil the systems properties of information flow between SNPs and the transcriptomes of complex diseases. We compare different conditions such as naïve SNP assignments and stringent linkage disequilibrium (LD) free assignments for transcripts to remove confounders from LD. Additionally, we compare the results from eQTL networks between lymphoblastoid cell lines and liver tissue. Empirical permutation resampling (p < 0.001) and theoretic Mann–Whitney U test (p < 10⁻³⁰) statistics indicate that mRNAs corresponding to complex disease SNPs via eQTL associations are likely to be regulated by a larger number of SNPs than expected. We name this novel property mRNA hubness in eQTL networks, and further term mRNAs with high hubness as master integrators. mRNA master integrators receive and coordinate the perturbation signals from large numbers of polymorphisms and respond to the personal genetic architecture integratively. This genetic signal integration contrasts with the mechanism underlying some Mendelian diseases, where a genetic polymorphism affecting a single protein hub produces a divergent signal that affects a large number of downstream proteins. Indeed, we verify that this property is independent of the hubness in protein networks for which these mRNAs are transcribed. Our findings provide novel insights into the pleiotropy of mRNAs targeted by complex disease polymorphisms and the architecture of the information flow between the genetic polymorphisms and transcriptomes of complex diseases. | eQTL networks unveil enriched mRNA master integrators downstream of complex disease-associated SNPs
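A minimal sketch of the hubness comparison, assuming a toy eQTL edge list and a hypothetical set of disease-associated mRNAs; the in-degree count stands in for the network centrality metric and scipy's Mann–Whitney U test replaces the full permutation analysis.

```python
from collections import Counter
from scipy.stats import mannwhitneyu

# Hypothetical eQTL associations: (SNP, regulated mRNA) pairs.
eqtl_edges = [("rs1", "GENE_A"), ("rs2", "GENE_A"), ("rs3", "GENE_A"),
              ("rs4", "GENE_B"), ("rs5", "GENE_C"), ("rs6", "GENE_C"),
              ("rs1", "GENE_D"), ("rs7", "GENE_D"), ("rs8", "GENE_E")]

# mRNAs reached by complex-disease SNPs via eQTLs (hypothetical labels).
disease_mrnas = {"GENE_A", "GENE_C"}

# Hubness = number of SNP associations per mRNA (in-degree in the eQTL network).
in_degree = Counter(mrna for _, mrna in eqtl_edges)

disease_deg = [deg for mrna, deg in in_degree.items() if mrna in disease_mrnas]
other_deg   = [deg for mrna, deg in in_degree.items() if mrna not in disease_mrnas]

# One-sided test: are disease-linked mRNAs regulated by more SNPs than the rest?
stat, p = mannwhitneyu(disease_deg, other_deg, alternative="greater")
print(f"U={stat}, p={p:.3f}")
```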
S1532046415002415 | Self-reported patient data has been shown to be a valuable knowledge source for post-market pharmacovigilance. In this paper we propose using the popular micro-blogging service Twitter to gather evidence about adverse drug reactions (ADRs) after firstly having identified micro-blog messages (also known as “tweets”) that report first-hand experience. In order to achieve this goal we explore machine learning with data crowdsourced from lay annotators. With the help of lay annotators recruited from CrowdFlower we manually annotated 1548 tweets containing keywords related to two kinds of drugs: SSRIs (e.g., Paroxetine) and cognitive enhancers (e.g., Ritalin). Our results show that inter-annotator agreement (Fleiss’ kappa) for crowdsourcing ranks in moderate agreement with a pair of experienced annotators (Spearman’s Rho=0.471). We utilized the gold standard annotations from CrowdFlower to automatically train a range of supervised machine learning models to recognize first-hand experience. F-Score values are reported for 6 of these techniques, with the Bayesian Generalized Linear Model being the best (F-Score=0.64 and Informedness=0.43) when combined with a selected set of features obtained by using information gain criteria. | Crowdsourcing Twitter annotations to identify first-hand experiences of prescription drug use
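For readers who want to reproduce the agreement statistic, the sketch below computes Fleiss' kappa from an items-by-categories count matrix; the tweet counts shown are made up, and every item is assumed to have been rated by the same number of annotators.

```python
import numpy as np

def fleiss_kappa(counts):
    """Fleiss' kappa for an (items x categories) matrix of rating counts,
    assuming every item was rated by the same number of annotators."""
    counts = np.asarray(counts, dtype=float)
    n_items, _ = counts.shape
    n_raters = counts.sum(axis=1)[0]
    p_j = counts.sum(axis=0) / (n_items * n_raters)            # category proportions
    P_i = (np.square(counts).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
    P_bar, P_e = P_i.mean(), np.square(p_j).sum()
    return (P_bar - P_e) / (1 - P_e)

# Hypothetical counts: 5 tweets, 3 lay annotators each, labels = (first-hand, not first-hand).
ratings = [[3, 0], [2, 1], [0, 3], [1, 2], [3, 0]]
print(f"Fleiss' kappa = {fleiss_kappa(ratings):.3f}")
```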
S1532046415002762 | Objective mHealth systems are becoming more common to aid patients in their diabetes self-management, but recent studies indicate a need for thorough evaluation of patients’ experienced usability. Current evaluations lack a multi-method design for data collection and structured methods for data analyses. The purpose of this study was to provide a feasibility test of a multi-method approach for both data collection and data analyses for patients’ experienced usability of a mHealth system for diabetes type 2 self-management. Materials and methods A random sample of 10 users was selected from a larger clinical trial. Data collection methods included user testing with eight representative tasks and Think Aloud protocol, a semi-structured interview and a questionnaire on patients’ experiences using the system. The Framework Analysis (FA) method and Usability Problem Taxonomy (UPT) were used to structure, code and analyze the results. A usability severity rating was assigned after classification. Results The combined methods resulted in a total of 117 problems condensed into 19 usability issues with an average severity rating of 2.47 or serious. The usability test detected 50% of the initial usability problems, followed by the post-interview at 29%. The usability test found 18 of 19 consolidated usability problems while the questionnaire uncovered one unique issue. Patients experienced most usability problems (8) in the Glucose Readings View when performing complex tasks such as adding, deleting, and exporting glucose measurements. The severity ratings were the highest for the Glucose Diary View, Glucose Readings View, and Blood Pressure View with an average severity rating of 3 (serious). Most of the issues were classified under the artifact component of the UPT and primary categories of Visualness (7) and Manipulation (6). In the UPT task component, most issues were in the primary category Task-mapping (12). Conclusions Multiple data collection methods yielded a more comprehensive set of usability issues. Usability testing uncovered the largest volume of usability issues, followed by interviewing and then the questionnaire. The interview did not surface any unique consolidated usability issues while the questionnaire surfaced one. The FA and UPT were valuable in structuring and classifying problems. The resulting descriptions serve as a communication tool in problem solving and programming. We recommend the usage of multiple methods in data collection and employing the FA and UPT in data analyses for future usability testing. | An evaluation of patients’ experienced usability of a diabetes mHealth system using a multi-method approach |
S153204641500283X | Background Drug repositioning is the process of finding new indications for existing drugs. Its importance has been dramatically increasing recently due to the enormous increase in new drug discovery cost. However, most of the previous molecular-centered drug repositioning work is not able to reflect the end-point physiological activities of drugs because of the inherent complexity of human physiological systems. Methods Here, we suggest a novel computational framework to make inferences for alternative indications of marketed drugs by using electronic clinical information which reflects the end-point physiological results of drug’s effects on the biological activities of humans. In this work, we use the concept of complementarity between clinical disease signatures and clinical drug effects. With this framework, we establish disease-related clinical variable vectors (clinical disease signature vectors) and drug-related clinical variable vectors (clinical drug effect vectors) by applying two methodologies (i.e., statistical analysis and literature mining). Finally, we assign a repositioning possibility score to each disease–drug pair by the calculation of complementarity (anti-correlation) and association between clinical states (“up” or “down”) of disease signatures and clinical effects (“up”, “down” or “association”) of drugs. A total of 717 clinical variables in the electronic clinical dataset (NHANES) are considered in this study. Results The statistical significance of our prediction results is supported through two benchmark datasets (Comparative Toxicogenomics Database and Clinical Trials). We discovered not only many known relationships between diseases and drugs, but also many hidden disease–drug relationships. For example, glutathione and edetic acid may be investigated as candidate drugs for asthma treatment. We examined prediction results by using statistical experiments (enrichment verification, hyper-geometric and permutation test P <0.009 in Comparative Toxicogenomics Database and Clinical Trials) and presented evidence for those with already published literature. Conclusion The results show that electronic clinical information is a feasible data resource and utilizing the complementarity (anti-correlated relationships) between clinical signatures of disease and clinical effects of drugs is a potentially predictive concept in drug repositioning research. It makes the proposed approach useful to identify novel relationships between diseases and drugs that have a high probability of being biologically valid. | Inferring new drug indications using the complementarity between clinical disease signatures and drug effects
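A minimal sketch of the complementarity idea, assuming disease signatures and drug effects are encoded as +1/-1 ("up"/"down") over a shared set of hypothetical clinical variables and scored by anti-correlation; the real framework additionally handles association labels and NHANES-derived vectors.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical clinical-variable states: +1 = "up", -1 = "down", 0 = unaffected.
clinical_vars = ["glucose", "CRP", "HDL", "blood_pressure", "eosinophils"]
disease_signature = np.array([+1, +1, -1, +1, +1])   # variable states in the disease
drug_effect       = np.array([-1, -1, +1, 0, -1])    # directions of the drug's effect

# Complementarity: a drug whose effects anti-correlate with the disease
# signature receives a high repositioning score.
rho, p = spearmanr(disease_signature, drug_effect)
complementarity = -rho
print(f"anti-correlation score = {complementarity:.2f} (p = {p:.3f})")
```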
S1532046415002932 | In recent years, an ever increasing range of technology-based applications have been developed with the goal of assisting in the delivery of more effective and efficient fall prevention interventions. Whilst there have been a number of studies that have surveyed technologies for a particular sub-domain of fall prevention, there is no existing research which surveys the full spectrum of falls prevention interventions and characterises the range of technologies that have augmented this landscape. This study presents a conceptual framework and survey of the state of the art of technology-based fall prevention systems which is derived from a systematic template analysis of studies presented in contemporary research literature. The framework proposes four broad categories of fall prevention intervention system: Pre-fall prevention; Post-fall prevention; Fall injury prevention; Cross-fall prevention. Other categories include, Application type, Technology deployment platform, Information sources, Deployment environment, User interface type, and Collaborative function. After presenting the conceptual framework, a detailed survey of the state of the art is presented as a function of the proposed framework. A number of research challenges emerge as a result of surveying the research literature, which include a need for: new systems that focus on overcoming extrinsic falls risk factors; systems that support the environmental risk assessment process; systems that enable patients and practitioners to develop more collaborative relationships and engage in shared decision making during falls risk assessment and prevention activities. In response to these challenges, recommendations and future research directions are proposed to overcome each respective challenge. | Fall prevention intervention technologies: A conceptual framework and survey of the state of the art |
S1532046416000137 | Understanding causal relationships among large numbers of variables is a fundamental goal of biomedical sciences and can be facilitated by Directed Acyclic Graphs (DAGs) where directed edges between nodes represent the influence of components of the system on each other. In an observational setting, some of the directions are often unidentifiable because of Markov equivalency. Additional exogenous information, such as expert knowledge or genotype data, can help establish directionality among the endogenous variables. In this study, we use principal component analysis to extract information across the genome in order to generate a robust statistical causal network among phenotypes, the variables of primary interest. The method is applied to 590,020 SNP genotypes measured on 1596 individuals to generate the statistical causal network of 13 cardiovascular disease risk factor phenotypes. First, principal component analysis was used to capture information across the genome. The principal components were then used to identify a robust causal network structure, GDAG, among the phenotypes. Analyzing a robust causal network over risk factors reveals the flow of information in direct and alternative paths, as well as determining predictors and good targets for intervention. For example, the analysis identified BMI as influencing multiple other risk factor phenotypes and a good target for intervention to lower disease risk. | Generating a robust statistical causal structure over 13 cardiovascular disease risk factors using genomics data
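The genome-wide summarization step could look roughly like the sketch below, which extracts principal components from a simulated 0/1/2 genotype matrix; the downstream GDAG construction over phenotypes is not shown, and the matrix dimensions are arbitrary.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Hypothetical genotype matrix: individuals x SNPs, coded as 0/1/2 minor-allele counts.
genotypes = rng.integers(0, 3, size=(200, 1000)).astype(float)

# Capture genome-wide variation in a handful of principal components,
# which can then serve as exogenous information when orienting edges among phenotypes.
pca = PCA(n_components=10)
components = pca.fit_transform(genotypes - genotypes.mean(axis=0))
print(components.shape, pca.explained_variance_ratio_[:3])
```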
S1532046416000198 | Objectives Today, hospitals and other health care-related institutions are accumulating a growing bulk of real world clinical data. Such data offer new possibilities for the generation of disease models for the health economic evaluation. In this article, we propose a new approach to leverage cancer registry data for the development of Markov models. Records of breast cancer patients from a clinical cancer registry were used to construct a real world data driven disease model. Methods We describe a model generation process which maps database structures to disease state definitions based on medical expert knowledge. Software was programmed in Java to automatically derive a model structure and transition probabilities. We illustrate our method with the reconstruction of a published breast cancer reference model derived primarily from clinical study data. In doing so, we exported longitudinal patient data from a clinical cancer registry covering eight years. The patient cohort (n =892) comprised HER2-positive and HER2-negative women treated with or without Trastuzumab. Results The models generated with this method for the respective patient cohorts were comparable to the reference model in their structure and treatment effects. However, our computed disease models reflect a more detailed picture of the transition probabilities, especially for disease free survival and recurrence. Conclusions Our work presents an approach to extract Markov models semi-automatically using real world data from a clinical cancer registry. Health care decision makers may benefit from more realistic disease models to improve health care-related planning and actions based on their own data. | A method for using real world data in breast cancer modeling |
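A minimal sketch of the core derivation step, assuming registry records have already been mapped to expert-defined disease states per follow-up cycle; transition probabilities are then simple maximum-likelihood estimates from observed state-to-state counts (the state names and sequences here are hypothetical).

```python
from collections import defaultdict

# Hypothetical longitudinal state sequences per patient (one state per follow-up cycle).
patients = {
    "p1": ["disease_free", "disease_free", "recurrence", "death"],
    "p2": ["disease_free", "recurrence", "recurrence", "death"],
    "p3": ["disease_free", "disease_free", "disease_free", "disease_free"],
}

counts = defaultdict(lambda: defaultdict(int))
for states in patients.values():
    for src, dst in zip(states, states[1:]):
        counts[src][dst] += 1                      # count observed cycle-to-cycle transitions

# Maximum-likelihood transition probabilities for a discrete-time Markov model.
transition_probs = {src: {dst: n / sum(dsts.values()) for dst, n in dsts.items()}
                    for src, dsts in counts.items()}
print(transition_probs)
```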
S1532046416300089 | Background Recent Cochrane reviews on falls and fall prevention have shown that it is possible to prevent falls in older adults living in the community and in care facilities. Technologies aimed at fall detection, assessment, prediction and prevention are emerging, yet there has been no consistency in describing or reporting on interventions using technologies. With the growth of eHealth and data driven interventions, a common language and classification is required. Objective The FARSEEING Taxonomy of Technologies was developed as a tool for those in the field of biomedical informatics to classify and characterise components of studies and interventions. Methods The Taxonomy Development Group (TDG) comprised experts from across Europe. Through face-to-face meetings and contributions via email, five domains were developed, modified and agreed: Approach; Base; Components of outcome measures; Descriptors of technologies; and Evaluation. Each domain included sub-domains and categories with accompanying definitions. The classification system was tested against published papers and further amendments undertaken, including development of an online tool. Six papers were classified by the TDG with levels of consensus recorded. Results Testing the taxonomy with papers highlighted difficulties in definitions across international healthcare systems, together with differences of TDG members’ backgrounds. Definitions were clarified and amended accordingly, but some difficulties remained. The taxonomy and manual were large documents leading to a lengthy classification process. The development of the online application enabled a much simpler classification process, as categories and definitions appeared only when relevant. Overall consensus for the classified papers was 70.66%. Consensus scores increased as modifications were made to the taxonomy. Conclusion The FARSEEING Taxonomy of Technologies presents a common language, which should now be adopted in the field of biomedical informatics. In developing the taxonomy as an online tool, it has become possible to continue to develop and modify the classification system to incorporate new technologies and interventions. | Developing the FARSEEING Taxonomy of Technologies: Classification and description of technology use (including ICT) in falls prevention studies |
S1532046416300235 | Objective The timely acknowledgement of critical patient clinical reports is vital for the delivery of safe patient care. With current EHR systems, critical reports reside on different screens. This leads to treatment delays and inefficient work flows. As a remedy, the R.A.P.I.D. (Root Aggregated Prioritized Information Display) system represents all data on a single screen, and its simple and intuitive “button” array structure allows triaged sign-off/sign-out of critical and non-critical reports. Materials and methods With 100 hematology and chemistry reports from each of two EHR systems Meditech (Westwood, MA) and Orchard Labs, Inc. (Carmel, IN), we generated files of the reports in their individual standard display formats (enhanced Meditech-EM and enhanced Orchard-EO). We also displayed the same 200 reports in the R.A.P.I.D. format. We then conducted a randomized trial to compare the time and accuracy of acknowledgement of critical and non-critical results. Results The sign-off times for reviewing the results for physician and non-physician providers, respectively, in seconds (with 95% confidence intervals) were for EM 1.78 (1.40–2.26) and 1.99 (1.72–2.30), for EO 2.69 (2.12–3.42) and 2.78 (2.40–3.21), and for R.A.P.I.D. 0.83 (0.70–0.98) and 1.58 (1.43–1.76). Non-physician providers reassigned system-defined non-critical results as critical with a frequency of 15.2% for EM, 18.4% for EO, and 7.83% for R.A.P.I.D., and critical results as non-critical with a frequency of 14.7%, 5.6%, and 5.8% respectively. Discussion The new display system was superior to two standard EHR systems that were significantly enhanced by first collecting the reports from their usual distributed locations and then by creating for each of the two standard EHRs a single file of reports for acknowledgement. Conclusions From a single screen display of all reports, the new display system enables timely acknowledgement of critical reports for patient safety and non-critical report triage for improved provider work flows. | R.A.P.I.D. (Root Aggregated Prioritized Information Display): A single screen display for efficient digital triaging of medical reports |
S1566253516300069 | We propose and advocate basic principles for the fusion of incomplete or uncertain information items, that should apply regardless of the formalism adopted for representing pieces of information coming from several sources. This formalism can be based on sets, logic, partial orders, possibility theory, belief functions or imprecise probabilities. We propose a general notion of information item representing incomplete or uncertain information about the values of an entity of interest. It is supposed to rank such values in terms of relative plausibility, and explicitly point out impossible values. Basic issues affecting the results of the fusion process, such as relative information content and consistency of information items, as well as their mutual consistency, are discussed. For each representation setting, we present fusion rules that obey our principles, and compare them to postulates specific to the representation proposed in the past. In the crudest (Boolean) representation setting (using a set of possible values), we show that the understanding of the set in terms of most plausible values, or in terms of non-impossible ones matters for choosing a relevant fusion rule. Especially, in the latter case our principles justify the method of maximal consistent subsets, while the former is related to the fusion of logical bases. Then we consider several formal settings for incomplete or uncertain information items, where our postulates are instantiated: plausibility orderings, qualitative and quantitative possibility distributions, belief functions and convex sets of probabilities. The aim of this paper is to provide a unified picture of fusion rules across various uncertainty representation settings. | The basic principles of uncertain information fusion. An organised review of merging rules in different representation frameworks |
S1566253516300252 | The field of wireless sensor networks (WSNs), embedded systems with sensing and networking capability, has now matured after a decade-long research effort and technological advances in electronics and networked systems. An important remaining challenge now is to extract meaningful information from the ever-increasing amount of sensor data collected by WSNs. In particular, there is strong interest in algorithms capable of automatic detection of patterns, events or other out-of-the order, anomalous system behavior. Data anomalies may indicate states of the system that require further analysis or prompt actions. Traditionally, anomaly detection techniques are executed in a central processing facility, which requires the collection of all measurement data at a central location, an obvious limitation for WSNs due to the high data communication costs involved. In this paper we explore the extent by which one may depart from this classical centralized paradigm, looking at decentralized anomaly detection based on unsupervised machine learning. Our aim is to detect anomalies at the sensor nodes, as opposed to centrally, to reduce energy and spectrum consumption. We study the information gain coming from aggregate neighborhood data, in comparison to performing simple, in-node anomaly detection. We evaluate the effects of neighborhood size and spatio-temporal correlation on the performance of our new neighborhood-based approach using a range of real-world network deployments and datasets. We find the conditions that make neighborhood data fusion advantageous, identifying also the cases in which this approach does not lead to detectable improvements. Improvements are linked to the diffusive properties of data (spatio-temporal correlations) but also to the type of sensors, anomalies and network topological features. Overall, when a dataset stems from a similar mixture of diffusive processes precision tends to benefit, particularly in terms of recall. Our work paves the way towards understanding how distributed data fusion methods may help managing the complexity of wireless sensor networks, for instance in massive Internet of Things scenarios. | Spatial anomaly detection in sensor networks using neighborhood information |
S1567422313000124 | E-Sourcing software has become an integral part of electronic commerce. Beyond the use of single-lot auction formats, there has been an emerging interest in using e-sourcing software for complex negotiations. Procurement markets typically exhibit scale economies leading to various types of volume discounts which are in wide-spread use in practice. The analysis of bids in such negotiations typically leads to computationally hard optimization problems. Scenario analysis describes a process, in which procurement managers compute different award allocations as a result of different allocation constraints and parameters that they put in place. This paper discusses an optimization model and computational methods which allow for effective scenario analysis with allocation problems in the presence of different types of discount policies and allocation constraints. The model reduces the number of parameter settings to explore considerably. The models are such that they can often not be solved exactly for realistic problem sizes in practically acceptable time frames. Therefore, we provide results of numerical experiments using exact algorithms and heuristics to solve the problem. We find that RINS and Variable Neighborhood Search can be effectively used in traditional branch-and-cut algorithms for this problem. Overall, new computational approaches allow procurement managers to evaluate offers even in markets with a complex set of volume discounts and multiple allocation constraints. | Optimization-based decision support for scenario analysis in electronic sourcing markets with volume discounts |
S156742231300015X | Online reviews, as one kind of quality indicator of products or service, are becoming increasingly important in influencing purchase decisions of prospective consumers on electronic commerce websites. With the fast growth of the Chinese e-commerce industry, it is thus indispensable to design effective online review systems for e-commerce websites in the Chinese context, by taking into account cultural factors. In this paper, we conduct two empirical studies on online reviews. Firstly, we study how culture differences across countries (i.e., China and the USA) impact the way in which consumers provide online reviews. Secondly, we investigate the impact of online reviews on product sales in the Chinese context, and show that directly copying the ideas of successful online review systems in the USA will deteriorate the effectiveness of the systems in China. Finally, we propose several suggestions for the development of effective online review systems in the Chinese context based on the results of our two empirical studies and the findings in previous studies. | Towards effective online review systems in the Chinese context: A cross-cultural empirical study |
S1567422313000173 | Drawing on information processing theory and the stimulus–organism–response model, we developed research hypotheses about consumers’ decision-making processes. Specifically, we examined the effects of online tie strength, perceived diagnosticity, and product-related risks on consumers’ purchase intentions. We conducted a field experiment on Facebook to test these hypotheses. We found that the product information and recommendations provided by friends with whom consumers have strong ties are perceived as having a high level of diagnosticity. The latter increases the probability that the consumers will purchase the product in question. Product-related risks moderate the effect of tie strength on perceived diagnosticity. For high-risk products, the information and recommendations provided by strong-tie contacts have a greater effect on purchase intentions than the information and recommendations provided by weak-tie contacts. However, we did not find this effect for low-risk products. We discuss the implications of our findings for both theory and practice. | How online social ties and product-related risks influence purchase intentions: A Facebook experiment |
S1567422313000185 | Since the advent and rapid diffusion of the Internet, the subject of consumer channel choice has attracted a large amount of research, mainly focused on the influence of channel, consumer and product category characteristics as its drivers. The interaction between channel choice and the purchase situation has been largely ignored, however. This paper is an attempt to fill this gap by identifying the key purchase situation variables and conducting an experiment to assess their impact on the choice between the traditional retail outlet and the online store. The results show that the key determinants of channel choice relate to time and distance. Distance-to-store and time pressures are among the factors affecting the probability of online purchase. Using a conceptual framework to explore differences in the impact of situational variables across product categories (high/low involvement, search/experience good), we show that distance-to-store has more influence on the likelihood of online purchase in situations involving search goods, while social variables are found to play a role only in the context of high-involvement goods. | Situational variables in online versus offline channel choice |
S1567422313000197 | This study investigates business-to-customer (B2C) electronic commerce services from a quality management perspective. We propose a novel quality management approach that is based on human factors engineering to manage e-commerce service quality and operate according to customer needs. First, we screen credible quality requirements and determine their weights by integrating Kano’s model with our previous work, including the model of B2C website service quality. Then, we can extract quality characteristics according to image words and credible customer requirements. The weights of the quality characteristics are deduced by refining characteristics through a house of quality reconstruction and mapping customer requirements to characteristics. Finally, based on related theories or viewpoints, the operating practices for quality characteristics entail quantitative output with importance and grades taken into consideration. Our experimental results demonstrate that operating practices can help managers understand e-commerce service quality and have useful implications for companies in the management of e-commerce service quality. | Quality management of B2C e-commerce service based on human factors engineering |
S1567422313000215 | User perceptions of website design (for Information Content, Information Design, Navigation Design, Visual Design), Website Trust, and Transaction Security are examined for differences in an eight country sample. Motivation for the investigation includes: (1) to test and compare user reactions to website design in countries with different degrees of uncertainty avoidance, (2) to consider user reactions based on country economic and technological conditions related to the theory of institutional trust and social capital, and (3) to extend clustering theory and the GLOBE cultural cluster model to determine if culturally similar countries group regarding user perceptions of websites. Overall and as predicted, users in low uncertainty avoidance, high institutional trust and social capital countries such as Canada and the USA have the most favorable perceptions of website design. An interesting finding is that while country economic and technological conditions may temper user perceptions in some instances, overall culture is a stronger predictor. Clustering theory is a useful determinant of user perceptions, and there is strong support that users within a given cultural cluster have similar requirements regarding website design. | Website design, trust and culture: An eight country investigation |
S156742231300032X | In e-commerce applications, vendors can construct detailed profiles about customers’ preferences, which is known as buyer profiling. These profiles can then be used by vendors in order to perform practices such as price discrimination, poor judgment, etc. The use of pseudonyms and, especially, changing pseudonyms from time to time are known to minimize profiling, minimizing the capacity of vendors to perform such practices in turn. Although there are some frameworks and tools that support pseudonym change, there are few proposals that suggest or directly change the pseudonym in an automated fashion. Instead, users are usually provided with the mechanisms to change pseudonyms but without any advice on when they should actually use these mechanisms. In this paper, we present an approach to control buyer profiling by means of automated pseudonym changes performed according to human privacy attitudes. We also present an application scenario and an evaluation of our proposal. | Automated buyer profiling control based on human privacy attitudes
S1567422313000422 | The profit of portal companies in Taiwan is generated by online advertising and e-commerce. Effective advertising requires predicting how users respond to advertisements and then targeting (presenting advertisements) the users in ways that reflect their preferences. The behavioral targeting leverages historical users’ behaviors in order to select for display the ads which are most related to the users’ preferences. Although we may not want to provide advertisements that repeat the same concept, we can exploit predicted return visit rates for the registered members at the specific category of a portal site. However, customers’ preferences change over time. In order to capture the ‘concept drift’, we propose a novel and simple time function to increase/decrease the weight of the old data in evaluating various members’ past behaviors. Then, we construct a member’s Click Preference Index with a Time factor (CPIT) model in order to effectively distinguish the different kinds of member behaviors and predict return visit rates. The marketers of a portal site can thus target the members with high return visit rates and design corresponding marketing strategies. Experimental results with a real dataset have demonstrated that this CPIT model can be practically implemented and provide satisfactory results. | Prediction of members’ return visit rates using a time factor |
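The paper's actual time function is not reproduced here; as one plausible instantiation, the sketch below down-weights older clicks with an exponential half-life decay and computes a member's time-weighted preference for a category (the half-life, category labels and click dates are assumptions).

```python
import math
from datetime import date

def cpit(click_history, category, today, half_life_days=30.0):
    """Time-weighted click preference toward one category.
    Older clicks are down-weighted with an exponential decay time function."""
    decay = lambda d: math.exp(-math.log(2) * (today - d).days / half_life_days)
    weighted_cat = sum(decay(d) for d, c in click_history if c == category)
    weighted_all = sum(decay(d) for d, _ in click_history)
    return weighted_cat / weighted_all if weighted_all else 0.0

history = [(date(2024, 1, 5), "finance"), (date(2024, 3, 1), "sports"),
           (date(2024, 3, 20), "finance"), (date(2024, 3, 28), "finance")]
print(round(cpit(history, "finance", today=date(2024, 4, 1)), 3))
```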
S1567422313000446 | A true e-business enabled firm needs the support from a well-tuned enterprise resource planning (ERP) system for providing real time data. However, many companies complain that after their huge investments in ERP systems, they found the ERP systems do not bring them new orders, new profits, or competitive advantage as ERP vendors claim. Academic studies also found mixed results regarding ERP’s payoff. In line with resource based view (RBV), the study proposes an integrated model to shed light on the ERP value paradox. We try to answer “With what organizational resources and by building what firm specific capabilities, the investment in ERP systems may bring firms competitive advantage”. Using a sample of 150 ERP and e-business adopters in the US, we found that (1) organizational resources such as managerial skills and organizational change management play a more important role than IT resources (ERP, e-Business technologies) in generating business integration capability. (2) However, neither IT resources nor organizational resources directly provide firms with competitive advantage. Instead, business integration capability built from the two resources plays a mediating role through which business achieves competitive advantage. | Commodity or competitive advantage? Analysis of the ERP value paradox |
S1567422313000471 | In this paper we present an XML-based multi-agent system, called Multi Agent System for Traders (MAST), that supports several Business-to-Customer e-Commerce activities, including advertisements and payments. MAST helps both customers and merchants in performing their tasks by using a personalized approach. MAST’s e-payment model avoids exchanging sensitive information, reinforcing trust between merchants and customers. A complete prototype of MAST has been implemented under the JADE framework, and it has been exploited for realizing some experiments, in order to evaluate its performances. | Multi-agent technology and ontologies to support personalization in B2C E-Commerce |
S1567422313000483 | The evaluation of online marketing activities using standalone metrics does not explain the development of consumer behavior over time, although it is of primary importance to allocate and optimize financial resources among multiple advertising channels. We develop a binary logit model with a Bayesian mixture approach to model consumer clickstreams across multiple online advertising channels. Therefore, a detailed user-level dataset from a large financial service provider is analyzed. We find both differences in the effects of repeated advertisement exposure across multiple types of display advertising as well as positive effects of interaction between display and paid search advertising influencing consumer click probabilities. We identify two consumer types with different levels of susceptibility to online advertising (resistant vs. susceptible consumers) and show that knowledge of consumers’ individual click probabilities can support companies in managing display advertising campaigns. | Modeling the clickstream across multiple online advertising channels using a binary logit with Bayesian mixture of normals
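Setting aside the Bayesian mixture of normals, a plain binary logit with an interaction term between display and paid-search exposure can be sketched as follows on simulated user-level data (all feature names and coefficients are invented for illustration).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000
# Hypothetical user-level features: display exposures, a paid-search click flag, and their interaction.
display = rng.poisson(2, n)
search = rng.binomial(1, 0.3, n)
X = np.column_stack([display, search, display * search])

# Simulate clicks from a known logit so the recovered coefficients can be inspected.
logit = 0.3 * display + 0.8 * search + 0.4 * display * search - 2.5
clicks = rng.binomial(1, 1 / (1 + np.exp(-logit)))

model = LogisticRegression().fit(X, clicks)
print(dict(zip(["display", "search", "display x search"], model.coef_[0].round(2))))
```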
S1567422313000653 | Sunk costs, which can cause inconsistencies between consumer behavior and economic theory, have been widely studied in different research areas and among various consumer groups. Nevertheless, the authors found that to date there has been no empirical research examining how sunk costs affect consumer behavior with regard to online shopping across different product types. Therefore, this study used the following 2×2×2 experimental design to study the effects of sunk costs on consumers’ online shopping decisions. The results show that (1) sunk costs have significant effects on consumer online shopping behavior. However, due to the different product characteristics, the impacts were weaker when consumers bought search goods than when they purchased experience goods; (2) the sunk costs (Membership fee vs. Deposit) of an initial choice and a new choice had interaction effects on consumer intention to purchase a new product; and (3) the sunk costs and service quality of an initial choice and new choice would affect the level of regret felt with regard to the consumer’s initial choice. In addition, some recommendations on market positioning and service quality design are made, which practitioners can refer to when formulating marketing strategies. | The role of sunk costs in online consumer decision-making
S1567422313000677 | The expansive connectivity of emerging information systems has set the stage for pervasive access to healthcare services via e-health systems for selecting the best possible healthcare services. Emerging systems are expected to be highly dynamic open environments connecting a diverse number of healthcare services and autonomous entities, that is, autonomous agents or software applications representing patients. Entities in such dynamic environments may have different security needs from e-health systems, raising the challenge of trust computations regarding security. In this research, we propose a trust assessment model of an e-health service from the viewpoint of an entity. The model contains a comprehensive architecture applicable to different types of entities and a novel set of trust assessment metrics that may be used to assess a specific property of a security system (i.e., partial metrics) or all properties (i.e., total metrics). The simulation-based evaluation of the proposed model in the context of a Hospital Online Appointment Service has shown that the proposed model provides better trust computation results than existing trust models for e-health systems. Furthermore, the entities are also able to assess the trust even with incomplete security information. | Trust assessment of security for e-health systems
S1567422313000756 | Online auction fraudsters constantly monitor the contextual situations of the auction and change their behavior strategies accordingly to distract the attention of their targets. This flipping of behavior makes it difficult to identify fraudsters. Thus, legitimate traders need appropriate countermeasures to avoid becoming victimized. To help online auction users detect fraudsters as early as possible, this study develops a systematic method to discover the fraudulent strategies from proven cases of online auction fraud. First, according to the results of cluster analysis on the proven fraudsters, four typical types of fraud are identified, which are Aggressive, Classical, Luxury and Low-profiled. To provide better insight, a strategy is further represented by a series of status transitions. Hidden statuses of latent fraudsters are discovered by applying X-means clustering to the phased profiles of their transaction histories. As a result, various strategies can be extracted by such a systematic method and interesting characteristics are found in these strategies. For example, about 80% of fraudsters in the Yahoo!Taiwan auction site flip their behavior no more than two times, which is not as complicated as originally expected. Based on these discovered fraudulent statuses, a high-resolution fraud detection method is performed to classify suspects into legitimate users or fraudsters in different statuses, potentially improving overall detection accuracy. A two-way monitoring procedure is then proposed to successively examine the statuses of a suspicious account. Analysis shows that the two-way monitoring method is promising for better detection of well-camouflaged fraudsters. | Analysis of fraudulent behavior strategies in online auctions for detecting latent fraudsters
S1567422313000768 | In electronic marketplaces, reputation systems and incentive mechanisms are prevalently employed to promote the honesty of sellers and buyers. In this article, we focus on the scenario in which the inventory is in short supply, i.e. an e-marketplace with limited inventory (EMLI). The challenges are twofold: (a) for sellers who aim to maximize their profit, they may intentionally conduct dishonest transactions since the limited products are likely to be sold out regardless of their reputation; (b) for buyers who intend to gain the limited products, they may provide untruthful ratings to mislead other buyers. To address these issues, we propose an incentive mechanism to promote buyer and seller honesty for this type of e-marketplace. Specifically, the mechanism models the honesty of buyers and sellers as scores and reputation, respectively. It then offers a higher price to the products of more honest sellers (with higher reputation) and allocates the products to more honest buyers (with higher scores). In this way, both sellers and buyers are well encouraged to be honest. Furthermore, we impose a proper membership fee on new sellers to cope with the whitewashing attack. We finally theoretically analyze and empirically demonstrate the efficacy of the proposed mechanism and its nice properties. | An incentive mechanism designed for e-marketplaces with limited inventory
S156742231300077X | Researchers have found that price dispersion and market inefficiency exist in electronic marketplaces. Little attention has been paid, however, to exploring differences in market efficiency between traditional and electronic marketplaces. This study integrates both product and channel preference factors to analyze differences in market efficiency between electronic and traditional shopping environments. Data Envelopment Analysis (DEA) is applied to calculate market efficiency for single-channel and multi-channel shoppers. Results show that market efficiencies vary across consumer segments and products. In summary, this paper enhances understanding of market efficiency by incorporating behavioral segment and product characteristics into the explanatory framework. | An empirical comparison of market efficiency: Electronic marketplaces vs. traditional retail formats
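One standard way to compute such efficiency scores is the input-oriented CCR envelopment model solved as a linear program per decision-making unit; the sketch below uses scipy and entirely hypothetical inputs and outputs for four consumer segments (it is not the paper's exact DEA specification).

```python
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(X, Y, unit):
    """Input-oriented CCR efficiency of one decision-making unit (DMU).
    X: (n_dmus x n_inputs), Y: (n_dmus x n_outputs)."""
    n, m = X.shape
    _, s = Y.shape
    # Decision variables: [theta, lambda_1, ..., lambda_n]; minimize theta.
    c = np.r_[1.0, np.zeros(n)]
    # Inputs:  sum_j lambda_j * x_ij - theta * x_i,unit <= 0
    A_in = np.c_[-X[unit].reshape(m, 1), X.T]
    b_in = np.zeros(m)
    # Outputs: -sum_j lambda_j * y_rj <= -y_r,unit  (i.e., outputs at least match the unit's)
    A_out = np.c_[np.zeros((s, 1)), -Y.T]
    b_out = -Y[unit]
    res = linprog(c, A_ub=np.r_[A_in, A_out], b_ub=np.r_[b_in, b_out],
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.fun

# Hypothetical DMUs = consumer segments; inputs = search time, price paid; output = utility score.
X = np.array([[2.0, 100.0], [3.0, 90.0], [1.5, 120.0], [4.0, 80.0]])
Y = np.array([[10.0], [12.0], [9.0], [11.0]])
for u in range(len(X)):
    print(f"segment {u}: efficiency = {dea_efficiency(X, Y, u):.3f}")
```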