FileName | Abstract | Title |
---|---|---|
S1524070314000241 | Computer Aided Design (CAD) software libraries rely on the tensor-product NURBS model as standard spline technology. However, in applications of industrial complexity, this mathematical model does not provide sufficient flexibility as an effective geometric modeling option. In particular, the multivariate tensor-product construction precludes the design of adaptive spline representations that support local refinements. Consequently, many patches and trimming operations are needed in challenging applications. The investigation of generalizations of tensor-product splines that support adaptive refinement has recently gained significant momentum due to the advent of Isogeometric Analysis (IgA) [2], where adaptivity is needed for performing local refinement in numerical simulations. Moreover, traditional CAD models containing many small (and possibly trimmed) patches are not directly usable for IgA. Truncated hierarchical B-splines (THB-splines) provide the possibility of introducing different levels of resolution in an adaptive framework, while simultaneously preserving the main properties of standard B-splines. We demonstrate that surface fitting schemes based on THB-spline representations may lead to significant improvements for the geometric (re-)construction of critical turbine blade parts. Furthermore, the local THB-spline evaluation in terms of B-spline patches can be properly combined with commercial geometric modeling kernels in order to convert the multilevel spline representation into an equivalent – namely, exact – CAD geometry. This software interface fully integrates the adaptive modeling tool into CAD systems that comply with the current NURBS standard. It also paves the way for the introduction of isogeometric simulations into complex real world applications. | Adaptive CAD model (re-)construction with THB-splines |
S1524070314000253 | In this paper we present a convenient building model synthesis method. It aims to obtain new user-defined building models by seamlessly stitching together the results of synthesizing each single building facade. During the optimization for the synthesis of each facade, we use a model structure analysis method to obtain the smallest structural units and the constraint graph among them, transforming the complicated three-dimensional (3D) synthesis problem into a two-dimensional (2D) constraint graph synthesis problem. We then construct a global energy function and minimize it through iterative optimization with an expectation-maximization algorithm, in order to obtain a new objective constraint graph. During the stitching process, in order to obtain a complete model synthesis result, we replace the objective constraint graph with the corresponding structural units to transform the synthesis back into 3D space, and achieve automatic stitching between neighboring construction units and neighboring facades using the connection point sets of the structural units in the original samples. The experimental results demonstrate that our method can quickly and efficiently generate building models of distinctly different styles based on single or multiple samples, while maintaining the continuity and visual integrity of the resulting models. | A model synthesis method based on single building facade |
S1524070314000265 | Monsters and strange creatures are frequently demanded in 3D games and movies. Modeling such objects calls for creativity and imagination. Especially in a scenario where a large number of monsters with various shapes and styles are required, the design and modeling process becomes even more challenging. We present a system to assist artists in the creative design of a large collection of various 3D monsters. Starting with a small set of shapes manually selected from different categories, our system iteratively generates sets of monster models serving as the artist’s reference and inspiration. The key component of our system is a so-called creature grammar, which is a shape grammar tailored for the generation of 3D monsters. The creature grammar governs the gradual evolution from creatures with regular structures into monsters with increasingly abnormal structures by evolving the arrangement and number of shape parts, while preserving the semantics prescribed as prior knowledge. Experiments show that even when starting with a small set of shapes from a few categories of common creatures (e.g., humanoids, bird-like creatures and quadrupeds), our system can produce a large set of unexpected monsters with both shape diversity and visual plausibility, thus providing great support for the user’s creative design. | Creature grammar for creative modeling of 3D monsters |
S1524070314000277 | Rational Bézier curves provide a curve fitting tool and are widely used in Computer Aided Geometric Design, Computer Aided Design and Geometric Modeling. The injectivity (one-to-one property) of a rational Bézier curve as a mapping function is equivalent to the curve having no self-intersections. We present a geometric condition on the control polygon which is equivalent to the injectivity of the rational Bézier curve with this control polygon for all possible choices of weights. The proof is based on degree elevation and toric degeneration of rational Bézier curves. | Self-intersections of rational Bézier curves |
S1524070314000289 | A quaternion rational surface is a surface generated from two rational space curves by quaternion multiplication. The goal of this paper is to demonstrate how to apply syzygies to analyze quaternion rational surfaces. We show that we can easily construct three special syzygies for a quaternion rational surface from a μ-basis for one of the generating rational space curves. The implicit equation of any quaternion rational surface can be computed from these three special syzygies and inversion formulas for the non-singular points on quaternion rational surfaces can be constructed. Quaternion rational ruled surfaces are generated from the quaternion product of a straight line and a rational space curve. We investigate special μ-bases for quaternion rational ruled surfaces and use these special μ-bases to provide implicitization and inversion formulas for quaternion rational ruled surfaces. Finally, we show how to determine if a real rational surface is also a quaternion rational surface. | Quaternion rational surfaces: Rational surfaces generated from the quaternion product of two rational space curves |
S1524070314000290 | In this paper we present a novel approach to registering multiple scans of a static object. We formulate the registration problem as an optimization of the maps from all other scans to one reference scan, where any map between two scans can be represented by the composition of these maps. In this way, all loop closures are automatically guaranteed, as the maps among all scans are globally consistent. Furthermore, to avoid incorrect correspondences between points in the scans, we employ a parametric bi-directional approach that generates invertible transformations in pairwise overlapping regions. With the parameter information in use and consistency taken into consideration, we are able to eliminate the drift that often occurs in the multi-view registration process. Our approach is fully automatic and, as various experimental results show, performs better than existing approaches. | Globally consistent rigid registration |
S1524070314000307 | We present a novel approach for non-rigid registration of partially overlapping surfaces acquired from a deforming object. To allow for large and general deformations our method employs a nonlinear physics-inspired deformation model, which has been designed with a particular focus on robustness and performance. We discretize the surface into a set of overlapping patches, for each of which an optimal rigid motion is found and interpolated faithfully using dual quaternion blending. Using this discretization we can formulate the two components of our objective function—a fitting and a regularization term—as a combined global shape matching problem, which can be solved through a very robust numerical approach. Interleaving the optimization with successive patch refinement results in an efficient hierarchical coarse-to-fine optimization. Compared to other approaches our as-rigid-as-possible deformation model is faster, causes less distortion, and gives more accurate fitting results. | Deformable registration using patch-wise shape matching |
S1524070314000344 | Ricci flow deforms the Riemannian metric proportionally to the curvature, such that the curvature evolves according to a heat diffusion process and eventually becomes constant everywhere. Ricci flow has demonstrated its great potential by solving various problems in many fields that can hardly be handled by alternative methods so far. This work introduces a unified theoretical framework for discrete surface Ricci flow, including all the common schemes: tangential circle packing, Thurston’s circle packing, inversive distance circle packing and discrete Yamabe flow. Furthermore, this work also introduces novel schemes, virtual radius circle packing and the mixed-type schemes, under the unified framework. This work gives explicit geometric interpretations of the discrete Ricci energies for all the schemes with all background geometries, and the corresponding Hessian matrices. The unified framework deepens our understanding of discrete surface Ricci flow theory, has inspired us to discover the new schemes, improves the flexibility and robustness of the algorithms, greatly simplifies the implementation and improves the efficiency. Experimental results show that the unified surface Ricci flow algorithms can handle general surfaces with different topologies, are robust to meshes of different qualities, and are effective for solving real problems. | The unified discrete surface Ricci flow |
S152407031400040X | We present algorithms for computing the differential geometry properties of lines of curvature of parametric surfaces. We derive the unit tangent vector, curvature vector, binormal vector and torsion of lines of curvature of parametric surfaces, together with algorithms to evaluate their higher-order derivatives. Among these quantities, it is shown that the curvature of the lines of curvature and its first derivative are useful for the forming of curved plates in shipbuilding. We also visualize the twist of lines of curvature, which enables us to observe how much the osculating plane of the line of curvature turns about the tangent vector. | Differential geometry properties of lines of curvature of parametric surfaces and their visualization |
S1524070314000447 | Minimum enclosing ball algorithms are studied extensively as a tool in approximation and classification of multidimensional data. We present pruning techniques that can accelerate several existing algorithms by continuously removing interior points from the input. By recognizing a key property shared by these algorithms, we derive tighter bounds than have previously been presented, resulting in twice the effect on performance. Furthermore, only minor modifications are required to incorporate the pruning procedure. The presented bounds are independent of the dimension, and empirical evidence shows that the pruning procedure remains effective in dimensions up to at least 200. In some cases, performance improvements of two orders of magnitude are observed for large data sets. | Improved pruning of large data sets for the minimum enclosing ball problem |
S1524070314000459 | In this paper we describe and test a pipeline for the extraction and semantic labelling of geometrically salient points on acquired human body models. Points of interest are extracted on the preprocessed scanned geometries as maxima of the autodiffusion function at different scales and annotated by an expert, where possible, with a corresponding semantic label related to a specific anatomical location. On the extracted points we computed several descriptors (e.g. Heat Kernel Signature, Wave Kernel Signature, Derivatives of Heat Kernel Signature) and used the labels and descriptors to train supervised classifiers, in order to understand whether it is possible to recognize the points on new models. Experimental results show that this approach can robustly detect and recognize at least a selection of landmarks on subjects with different body types, independently of pose, and could therefore be applied for automatic anthropometric analysis. | Automatic labelling of anatomical landmarks on 3D body scans |
S1524070314000460 | We present a new fairing method for planar curves, which is particularly well suited for the regularization of the medial axis of a planar domain. It is based on the concept of total variation regularization. The original boundary (given as a closed B-spline curve, or several such curves for multiply connected domains) is approximated by another curve that possesses a smaller number of curvature extrema. Consequently, the modified curve leads to a smaller number of branches of the medial axis. In order to compute the medial axis, we use the state-of-the-art algorithm from [1], which is based on arc spline approximation and a domain decomposition approach. We improve this algorithm by using a different decomposition strategy that allows us to reduce the number of base cases from 13 to only 5. Moreover, the algorithm reduces the number of conic arcs in the output by approximately 50%. | Total curvature variation fairing for medial axis regularization |
S1524070314000496 | A fast and exact algorithm to eliminate intersections from arbitrary triangle meshes is presented, without any strict requirements on the input. Differently from most recent approaches, our method does not rely on any intermediate representation of the solid. Instead, we directly cut and stitch mesh parts along the intersections, which allows surface attributes such as colors and textures to be easily inherited. We rely on standard floating-point arithmetic whenever possible, while switching to exact arithmetic only when the available fixed precision is insufficient to guarantee the topological correctness of the result. Our experiments show that the number of these switches is extremely low in practice, and this makes our algorithm outperform all the state-of-the-art methods that provide a guarantee of success. We show how our method can be exploited to quickly extract the so-called outer hull even if the resulting model is non-manifold by design and the single parts have boundaries, as long as the outer hull itself is unambiguously definable. | Direct repair of self-intersecting meshes |
S1524070314000502 | In this paper, we present an algorithm for efficient encoding of triangle meshes. The algorithm preserves the local relations between vertices by encoding their Laplacian coordinates, while at the same time, it uses a hierarchy of additional vertex constraints that provides global rigidity and low absolute error, even for large meshes. Our scheme outperforms traversal-based as well as Laplacian-based compression schemes in terms of both absolute and perceived distortion at a given data rate. | Hierarchical Laplacian-based compression of triangle meshes |
S1524070314000514 | In this paper, we present a structure-aligned approach for surface parameterization using eigenfunctions from the Laplace–Beltrami operator. Several methods are designed to combine multiple eigenfunctions using isocontours or characteristic values of the eigenfunctions. The combined gradient information of eigenfunctions is then used as a guidance for the cross field construction. Finally, a global parameterization is computed on the surface, with an anisotropy enabled by adapting the cross field to non-uniform parametric line spacings. By combining the gradient information from different eigenfunctions, the generated parametric lines are automatically aligned with the structural features at various scales, and they are insensitive to local detailed features on the surface when low-mode eigenfunctions are used. | Structure-aligned guidance estimation in surface parameterization using eigenfunction-based cross field |
S1524070314000538 | We present an approach to robustly track the geometry of an object that deforms over time from a set of input point clouds captured from a single viewpoint. The deformations we consider are caused by applying forces to known locations on the object’s surface. Our method combines the use of prior information on the geometry of the object modeled by a smooth template and the use of a linear finite element method to predict the deformation. This allows the accurate reconstruction of both the observed and the unobserved sides of the object. We present tracking results for noisy low-quality point clouds acquired by either a stereo camera or a depth camera, and simulations with point clouds corrupted by different error terms. We show that our method is also applicable to large non-linear deformations. | Finite element based tracking of deforming surfaces |
S1524070314000769 | Numerical dissipation acts as artificial viscosity and makes smoke appear viscous. Reducing numerical dissipation can recover visual details that would otherwise be smeared out by it. Great efforts have been devoted to suppressing numerical dissipation in smoke simulation over the past few years. In this paper we investigate methods for combating numerical dissipation. We describe the visual consequences of numerical dissipation and explore the sources that introduce it into the course of smoke simulation. Methods are investigated from various aspects, including grid variation, high-order advection, sub-grid compensation, invariant conservation, and particle-based improvement, followed by a discussion and comparison in terms of visual quality, computational overhead, ease of implementation, adaptivity, and scalability, which leads to their different applicability to various application scenarios. | Reducing numerical dissipation in smoke simulation |
S1524070315000028 | It has proved difficult to visualize the iridescent colors of objects coated with multilayer films in ray-based tracers. A physically based full-spectrum scattering model is proposed in this paper; this model applies the multi-beam interference equations to account for the multiple reflection and refraction of light inside the thin films in order to simulate the phase and amplitude variations of light related to iridescent colors. The Fresnel coefficients for the metallic and dielectric films are introduced respectively to photo-realistically render the wave properties of multilayer films. The solution is further extended to take into account the geometry of rough surfaces instead of a smooth averaging surface, where the anisotropic and isotropic scattering characteristics are well explained. We demonstrate the validity of the proposed model by visualizing wave effects for diverse metallic or dielectric multilayer film structures and a Morpho butterfly. | Microfacet-based interference simulation for multilayer films |
S152407031500003X | Given a set of symmetric/antisymmetric filter vectors containing only regular multiresolution filters, the method we present in this article can establish a balanced multiresolution scheme for images, allowing their balanced decomposition and subsequent perfect reconstruction without the use of any extraordinary boundary filters. We define balanced multiresolution such that it allows balanced decomposition i.e. decomposition of a high-resolution image into a low-resolution image and corresponding details of equal size. Such a balanced decomposition makes on-demand reconstruction of regions of interest efficient in both computational load and implementation aspects. We find this balanced decomposition and perfect reconstruction based on an appropriate combination of symmetric/antisymmetric extensions near the image and detail boundaries. In our method, exploiting such extensions correlates to performing sample (pixel/voxel) split operations. Our general approach is demonstrated for some commonly used symmetric/antisymmetric multiresolution filters. We also show the application of such a balanced multiresolution scheme in real-time focus+context visualization. | Balanced multiresolution for symmetric/antisymmetric filters |
S1524070315000181 | We propose a 3D symmetric homotopic thinning method based on the critical kernels framework. It may produce either curvilinear or surface skeletons, depending on the criterion that is used to prevent salient features of the object from deletion. In our new method, rather than detecting curve or surface extremities, we detect isthmuses, that is, parts of an object that are “locally like a curve or a surface”. This allows us to propose a natural extension of our method that copes with the robustness-to-noise issue; this extension is based on a notion of “isthmus persistence”. As far as we know, this is the first method that makes it possible to obtain symmetric and robust 3D curvilinear/surface skeletons of objects made of voxels. | Isthmus based parallel and symmetric 3D thinning algorithms |
S1524070315000193 | This paper presents a system for the design and simulation of tubular supporting structures. We model each freeform tube component as a swept surface, and employ boundary control and skeletal control to manipulate its cross-sections and its embedding, respectively. With the parametrization of the swept surface, a quadrilateral mesh consisting of nine-node general shell elements is automatically generated and the stress distribution of the structure is simulated using the finite element method. In order to accelerate the complex finite element simulation, we adopt a two-level subspace simulation strategy, which constructs a secondary complementary subspace to improve the subspace simulation accuracy. Together with the domain decomposition method, our system is able to provide interactive feedback for parametric freeform tube editing. Experiments show that our system is able to predict the structural character of the tube structure efficiently and accurately. | Interactive design and simulation of tubular supporting structure |
S152407031500020X | Continuous collision detection is a key technique for meeting non-penetration requirements in many applications. Even though it is possible to perform efficient culling operations, e.g., using bounding volume hierarchies, in the broad stage of a continuous collision detection algorithm, a huge number of potentially colliding triangles still survive and go to the succeeding narrow stage. This heavily burdens the elementary collision tests in a collision detection algorithm and affects the performance of the entire pipeline, especially for fast-moving or deforming objects. This paper presents a low-cost filtering algorithm using algebraic analysis techniques. It can significantly reduce the number of elementary collision tests that occur in the narrow stage. We analyze the existence of roots in the time interval [0, 1] for the standard cubic equation that defines an elementary collision test. We demonstrate the efficiency of the algebraic filter in our experiments. Cubic solvers augmented by our filtering algorithm are able to achieve filtering ratios of up to 99% and more than 10× performance improvement over the standard cubic solver without any filters. | A fast algebraic non-penetration filter for continuous collision detection |
S1524070315000211 | An important area of reverse engineering is to produce digital models of mechanical parts from measured data points. In this process inaccuracies may occur due to noise and the numerical nature of the algorithms used for, e.g., aligning point clouds, mesh processing, segmentation and surface fitting. As a consequence, faces will not be precisely parallel or orthogonal, smooth connections may be of poor quality, axes of concentric cylinders may be slightly tilted, and so on. In this paper we present algorithms to eliminate these inaccuracies and create “perfected” B-rep models suitable for downstream CAD/CAM applications. Using a segmented and classified set of smooth surface regions we enforce various constraints for automatically selected groups of surfaces. We extend a formerly published technology of Benkő et al. (2002). It is an essential element of our approach, however, that we do not know in advance the set of surfaces that will actually get involved in the final constrained fitting. We propose local methods to select and synchronize “likely” geometric constraints, detected between pairs of entities. We also propose global methods to determine constraints related to the whole object, although the best-fit coordinate systems, reference grids and symmetry planes will be determined only by surface entities qualified as relevant. Numerous examples illustrate how these constrained fitting algorithms improve the quality of reconstructed objects. | Applying geometric constraints for perfecting CAD models in reverse engineering |
S1524070315000223 | We introduce a novel approach to solving the stationary Stokes equations on very large voxel geometries. One main idea is to coarsen the voxel geometry in areas where the velocity does not vary much, while keeping the original resolution near the solid surfaces. For spatial partitioning, a simplified LIR-tree is used, which is a generalization of the Octree and KD-tree. The other main idea is to arrange variables in such a way that each cell is able to satisfy the Stokes equations independently. Pressure and velocity are discretized on staggered grids. However, instead of using one velocity variable on each cell surface, we introduce two variables. The discretization of momentum and mass conservation yields a small linear system (block) per cell, which allows the block Gauss–Seidel algorithm to be used as an iterative solver. We compare our method to other solvers and conclude that it offers superior performance in runtime and memory for high-porosity geometries. | The LIR space partitioning system applied to the Stokes equations |
S1524070315000235 | Smooth surface approximation is an important problem in many applications. We consider an implicit surface description, which has many well-known properties, such as being well suited to performing collision detection. We describe a method to smooth a triangle mesh by constructing an implicit convolution-based surface. Both the convolution kernel and the implicitization of the mesh are linearized. We employ the straight skeleton to linearize the latter. The resulting implicit function is globally C² continuous, even for non-surface points, and can be evaluated explicitly and analytically. This allows the function to be used in simulation systems requiring C² continuity, for which we give an example from industrial simulation, in contrast to methods which only locally smooth the surface itself. | Smooth convolution-based distance functions |
S1524070315000260 | Based on the computation of a superset of the implicit support, implicitization of a parametrically given hypersurface is reduced to computing the nullspace of a numeric matrix. Our approach predicts the Newton polytope of the implicit equation by exploiting the sparseness of the given parametric equations and of the implicit polynomial, without being affected by the presence of any base points. In this work, we study how this interpolation matrix expresses the implicit equation as a matrix determinant, which is useful for certain operations such as ray shooting, and how it can be used to reduce some key geometric predicates on the hypersurface, namely membership and sidedness for given query points, to simple numerical operations on the matrix, without need to develop the implicit equation. We illustrate our results with examples based on our Maple implementation. | Geometric operations using sparse interpolation matrices |
S1524070315000296 | We discuss bi-harmonic fields which approximate signed distance fields. We conclude that the bi-harmonic field approximation can be a powerful tool for mesh completion in general and complex cases. We present an adaptive, multigrid algorithm to extrapolate signed distance fields. By defining a volume mask in a closed region bounding the area that must be repaired, the algorithm computes a signed distance field in well-defined regions and uses it as an over-determined boundary condition constraint for the biharmonic field computation in the remaining regions. The algorithm operates locally, within an expanded bounding box of each hole, and therefore scales well with the number of holes in a single, complex model. We discuss this approximation in practical examples in the case of triangular meshes resulting from laser scan acquisitions which require massive hole repair. We conclude that the proposed algorithm is robust and general, and is able to deal with complex topological cases. | Biharmonic fields and mesh completion |
S1524070315000302 | PHT-splines (polynomial splines over hierarchical T-meshes) are a generalization of B-splines over hierarchical T-meshes which possess a very efficient local refinement property. This property makes PHT-splines preferable in geometric processing, adaptive finite elements and isogeometric analysis. In this paper, we first analyze the previously constructed basis functions of PHT-splines and observe a decay phenomenon of the basis functions under certain refinements of T-meshes, which is undesirable in applications. We then propose a new basis consisting of a set of local tensor product B-splines for PHT-splines which overcomes the decay phenomenon. Some examples are provided for solving numerical PDEs with the new basis, and a comparison is made between the new basis and the original basis. Experimental results suggest that the new basis provides better numerical stability in solving numerical PDEs. | A new basis for PHT-splines |
S1524070315000387 | The introduction of multi-material additive manufacturing makes it possible to fabricate objects with varying material properties, leading to new types of designs that exhibit interesting and complicated behaviours. However, computational design methods typically focus on the structure and geometry of designed objects, and do not incorporate material properties or behaviour. This paper explores how material properties can be included in computational design, by formally modelling them as weights in shape computations. Shape computations, such as shape grammars, formalise the description and manipulation of pictorial representations in creative design processes. The paper explores different ways that material properties can be formally modelled as weights, and presents examples in which multi-material surfaces are modelled as weighted planes, giving rise to flexible behaviours that can be considered in design exploration. | Exploration of multi-material surfaces as weighted shapes |
S1524070315000405 | This paper introduces a new approach for non-rigid registration which is able to align two surfaces under large deformation. At the first stage, the two surfaces are remeshed to create two sub-meshes. Then, correspondences between the two sub-meshes are obtained by employing a robust probabilistic method. Reliable correspondences are used to find a global rigid transformation to bring the two surfaces closer. Using these correspondences and taking one sub-mesh as a deformation graph, the second stage applies a deformation model to align the two surfaces together. This stage solves for more correct correspondences as well as for a local transformation at each node simultaneously. With these two stages, the method deals robustly with large motion and preserves small details during the alignment process of the two surfaces. The efficiency of the method is demonstrated by comparing the alignment results it achieves with the results obtained by state-of-the-art methods. | A two-stage approach to align two surfaces of deformable objects |
S1524070315000417 | Symmetries exist in many 3D models, and efficiently finding their symmetry planes is important and useful for many related applications. This paper presents a simple and efficient view-based reflection symmetry detection method based on the viewpoint entropy features of a set of sample views of a 3D model. Before symmetry detection, we align the 3D model using the Continuous Principal Component Analysis (CPCA) method. To avoid the high computational load resulting from a direct combinatorial matching among the sample views, we develop a fast symmetry plane detection method that first generates a candidate symmetry plane based on a matching pair of sample views and then verifies whether the number of remaining matching pairs reaches a minimum number. Experimental results and two related applications demonstrate that our algorithm offers better accuracy, efficiency, robustness and versatility than state-of-the-art approaches. | Efficient 3D reflection symmetry detection: A view-based approach |
S1524070316000175 | We address the problem of obtaining the optimal polygonal approximation of a digital planar curve. Given an ordered set of points on the Euclidean plane, an efficient method to obtain a polygonal approximation with the minimum number of segments, such that the distortion error does not exceed a threshold, is proposed. We present a novel algorithm to determine the optimal solution for the min-# polygonal approximation problem using the sum of squared deviations criterion on closed curves. Our proposal, which is based on Mixed Integer Programming, has been tested using a set of contours of real images, achieving significant reductions in the computation time needed in comparison to state-of-the-art methods. | Fast computation of optimal polygonal approximations of digital planar closed curves |
S1524070316000187 | In the Oil and Gas industry, processing and visualizing 3D models is of paramount importance for making exploratory and production decisions. Hydrocarbon reservoirs are entities buried deep in the earth’s crust, and a simplified 3D geological model that mimics this environment is generated to run simulations and help understand geological and physical concepts. For the task of visually inspecting these models, we advocate the use of Cutaways: an illustrative technique to emphasize important structures or parts of the model by selectively discarding occluding parts, while keeping the contextual information. However, the complexity of reservoir models imposes severe restrictions and limitations when using generic illustrative techniques previously proposed by the computer graphics community. To overcome this challenge, we propose an interactive Cutaway method, strongly relying on screen-space GPU techniques, specially designed for inspecting 3D reservoir models represented as corner-point grids, the industry’s standard. | Interactive cutaways of oil reservoirs |
S1524070316000199 | Posing objects in their upright orientations is the very first step of 3D shape analysis. However, 3D models in existing repositories may be far from their correct orientations for various reasons. In this paper, we present a data-driven method for 3D object upright orientation estimation using 3D Convolutional Networks (ConvNets); the method is designed in a divide-and-conquer style because of the interference effect. Thanks to public large-scale 3D datasets and the feature-learning ability of ConvNets, our method can handle not only man-made objects but also natural ones. Moreover, without any regularity assumptions, our method can deal with asymmetric objects and several other failure cases of existing approaches. Furthermore, a distance-based clustering technique is proposed to reduce the memory cost, and a test-time augmentation procedure is used to improve the accuracy. The method’s efficiency and effectiveness are demonstrated in the experimental results. | Upright orientation of 3D shapes with Convolutional Networks |
S1524070316000266 | This paper describes a new efficient algorithm for the rapid computation of exact shortest distances between a point cloud and another object (e.g. triangulated, point-based, etc.) in three dimensions. It extends the work presented in Eriksson and Shellshear (2014) where only approximate distances were computed on a simplification of a massive point cloud. Here, the fast computation of the exact shortest distance is achieved by pruning large subsets of the point cloud known not to be closest to the other object. The approach works for massive point clouds even with a small amount of RAM and is able to provide real time performance. Given a standard PC with only 8GB of RAM, this resulted in real-time shortest distance computations of 15 frames per second for a point cloud having 1 billion points in three dimensions. | Fast exact shortest distance queries for massive point clouds |
S1524070316300029 | The binary orientation tree (BOT), proposed in [Chen, Y.-L., Chen, B.-Y., Lai, S.-H., and Nishita, T. (2010). Binary orientation trees for volume and surface reconstruction from unoriented point clouds. Comput. Graph. Forum, 29(7), 2011–2019.], is a useful spatial hierarchical data structure for geometric processing such as 3D reconstruction and implicit surface approximation from an input point set. A BOT is an octree in which all the vertices of the leaf nodes in the tree are tagged with an ‘in/out’ label based on their spatial relationship to the underlying surface enclosed by the octree. Unfortunately, the data structure of Chen et al. (2010) is only valid for watertight surfaces, which restricts its application. In this paper, we extend the ‘in/out’ relationship to a ‘front/back/NA’ relationship applicable to either a closed or an open surface, and propose a new method to build such a spatial data structure from a given arbitrary point set. We first classify the edges of the leaf nodes into two categories based on whether their two end points lie on the same side of the surface or not, and attach respective labels to the edges accordingly. A global propagation process is then applied to obtain consistent labels for those end points that are on the same side of the surface. Experiments show that our BOT building method is much more robust, efficient and applicable to various inputs compared to existing methods; applications of the BOT to RBF reconstruction and envelope surface computation of given 3D objects are shown in the experimental part. | Building binary orientation octree for an arbitrary scattered point set |
S1524070316300054 | An index for measuring the variation on a surface, called the smooth shrink index (SSI), which is robust to noise and non-uniform sampling, is developed in this work. Based on it, a new algorithm for extracting feature lines is proposed. Firstly, points with an absolute SSI value greater than a given threshold are selected as potential feature points. Then, the SSI is applied as the growth condition to conduct region segmentation of the potential feature points. Finally, a bilateral filter algorithm is employed to obtain the final feature points by iteratively thinning the potential feature points. While thinning the potential feature points, the tendency of the feature lines is acquired using principal component analysis (PCA) to restrict the drift direction of the potential feature points, so as to prevent shrinking at the endpoints of the feature lines and breaking of the feature lines induced by non-uniform sampling. | Extracting feature lines from point clouds based on smooth shrink and iterative thinning |
S1532046413000324 | Early detection and accurate characterization of disease outbreaks are important tasks of public health. Infectious diseases that present symptomatically like influenza (SLI), including influenza itself, constitute an important class of diseases that are monitored by public-health epidemiologists. Monitoring emergency department (ED) visits for presentations of SLI could provide an early indication of the presence, extent, and dynamics of such disease in the population. We investigated the use of daily over-the-counter thermometer-sales data to estimate daily ED SLI counts in Allegheny County (AC), Pennsylvania. We found that a simple linear model fits the data well in predicting daily ED SLI counts from daily counts of thermometer sales in AC. These results raise the possibility that this model could be applied, perhaps with adaptation, in other regions of the country where thermometer sales data are commonly available but daily ED SLI counts are not. | A method for estimating from thermometer sales the incidence of diseases that are symptomatically similar to influenza |
S1532046413000373 | Introduction: Managing chronic disease through automated systems has the potential to both benefit the patient and reduce health-care costs. We have developed and evaluated a disease management system for patients with chronic obstructive pulmonary disease (COPD). Its aim is to predict and detect exacerbations and, through this, help patients self-manage their disease to prevent hospitalisation. Materials: The carefully crafted intelligent system consists of a mobile device that is able to collect case-specific, subjective and objective, physiological data, and to alert the patient based on a patient-specific interpretation of the data by means of probabilistic reasoning. Collected data are also sent to a central server for inspection by health-care professionals. Methods: We evaluated the probabilistic model using cross-validation and ROC analyses on data from an earlier study and on an independent data set. Furthermore, a pilot with actual COPD patients was conducted to test technical feasibility and to obtain user feedback. Results: Model evaluation results show that we can reliably detect exacerbations. Pilot study results suggest that an intervention based on this system could be successful. | An autonomous mobile system for the management of COPD |
S1532046413000385 | In this paper an approach for developing a temporal domain ontology for biomedical simulations is introduced. The ideas are presented in the context of simulations of blood flow in aneurysms using the Lattice Boltzmann Method. The advantages of using ontologies are manifold: on the one hand, ontologies have proven able to provide specialized medical knowledge, e.g., key parameters for simulations. On the other hand, based on a set of rules and the use of a reasoner, a system for checking the plausibility as well as tracking the outcome of medical simulations can be constructed. Likewise, results of simulations, including data derived from them, can be stored and communicated in a way that can be understood by computers. Later on, this set of results can be analyzed. At the same time, the ontologies provide a way to exchange knowledge between researchers. Lastly, this approach can be seen as a black-box abstraction of the internals of the simulation for the biomedical researcher as well. The approach is able to provide the complete parameter sets for simulations, part of the corresponding results and part of their analysis, as well as, e.g., geometry and boundary conditions. These inputs can be transferred to different simulation methods for comparison. Variations of the provided parameters can automatically be used to drive these simulations. Using a rule base, unphysical inputs or outputs of the simulation can be detected and communicated to the physician in a suitable and familiar way. An example of an instantiation of the blood flow simulation ontology and exemplary rules for plausibility checking are given. | A novel approach for connecting temporal-ontologies with blood flow simulations |
S1532046413000403 | Purpose: The goal of this work is to contribute to personalized clinical management in home-based telemonitoring scenarios by developing an ontology-driven solution that enables a wide range of remote chronic patients to be monitored at home. Methods: Through three stages, the challenges of integration and management were met through ontology development and evaluation. The first stage dealt with the ontology design and implementation. The second stage dealt with the ontology application study in order to specifically address personalization issues. For both stages, interviews and working sessions were planned with clinicians. Clinical guidelines and medical device (MD) interoperability were also taken into account during these stages. Finally, the third stage dealt with a software prototype implementation. Results: An ontology was developed as an outcome of the first stage. The structure, based on the autonomic computing paradigm, provides a clear and simple manner to automate and integrate the data management procedure. During the second stage, the application of the ontology was studied to monitor patients with different and multiple morbidities. After this task, the ontology design was successfully adjusted to provide useful personalized medical care. In the third and final stage, a proof-of-concept of the software required to remotely monitor patients by means of the ontology-based solution was developed and evaluated. Conclusions: Our proposed ontology provides an understandable and simple solution to address integration and personalized care challenges in home-based telemonitoring scenarios. Furthermore, our three-stage approach contributes to enhancing the understanding, re-usability and transferability of our solution. | A three stage ontology-driven solution to provide personalized care to chronic patients at home |
S1532046413000427 | We developed an EXpectation Propagation LOgistic REgRession (EXPLORER) model for distributed privacy-preserving online learning. The proposed framework provides a high level guarantee for protecting sensitive information, since the information exchanged between the server and the client is the encrypted posterior distribution of coefficients. Through experimental results, EXPLORER shows the same performance (e.g., discrimination, calibration, feature selection, etc.) as the traditional frequentist logistic regression model, but provides more flexibility in model updating. That is, EXPLORER can be updated one point at a time rather than having to retrain the entire data set when new observations are recorded. The proposed EXPLORER supports asynchronized communication, which relieves the participants from coordinating with one another, and prevents service breakdown from the absence of participants or interrupted communications. | EXpectation Propagation LOgistic REgRession (EXPLORER): Distributed privacy-preserving online model learning |
S1532046413000439 | Gene selection is an important task in bioinformatics studies, because the accuracy of cancer classification generally depends upon the genes that have biological relevance to the classifying problems. In this work, randomization test (RT) is used as a gene selection method for dealing with gene expression data. In the method, a statistic derived from the statistics of the regression coefficients in a series of partial least squares discriminant analysis (PLSDA) models is used to evaluate the significance of the genes. Informative genes are selected for classifying the four gene expression datasets of prostate cancer, lung cancer, leukemia and non-small cell lung cancer (NSCLC) and the rationality of the results is validated by multiple linear regression (MLR) modeling and principal component analysis (PCA). With the selected genes, satisfactory results can be obtained. | Selecting significant genes by randomization test for cancer classification using gene expression data |
S1532046413000440 | Personalized medicine is to deliver the right drug to the right patient in the right dose. Pharmacogenomics (PGx) is to identify genetic variants that may affect drug efficacy and toxicity. The availability of a comprehensive and accurate PGx-specific drug–gene relationship knowledge base is important for personalized medicine. However, building a large-scale PGx-specific drug–gene knowledge base is a difficult task. In this study, we developed a bootstrapping, semi-supervised learning approach to iteratively extract and rank drug–gene pairs according to their relevance to drug pharmacogenomics. Starting with a single PGx-specific seed pair and 20 million MEDLINE abstracts, the extraction algorithm achieved a precision of 0.219, recall of 0.368 and F1 of 0.274 after two iterations, a significant improvement over the results of using non-PGx-specific seeds (precision: 0.011, recall: 0.018, and F1: 0.014) or co-occurrence (precision: 0.015, recall: 1.000, and F1: 0.030). After the extraction step, the ranking algorithm further improved the precision from 0.219 to 0.561 for top ranked pairs. By comparing to a dictionary-based approach with PGx-specific gene lexicon as input, we showed that the bootstrapping approach has better performance in terms of both precision and F1 (precision: 0.251 vs. 0.152, recall: 0.396 vs. 0.856 and F1: 0.292 vs. 0.254). By integrative analysis using a large drug adverse event database, we have shown that the extracted drug–gene pairs strongly correlate with drug adverse events. In conclusion, we developed a novel semi-supervised bootstrapping approach for effective PGx-specific drug–gene pair extraction from large number of MEDLINE articles with minimal human input. | A semi-supervised approach to extract pharmacogenomics-specific drug–gene pairs from biomedical literature for personalized medicine |
S1532046413000464 | A major goal of Natural Language Processing in the public health informatics domain is the automatic extraction and encoding of data stored in free text patient records. This extracted data can then be utilized by computerized systems to perform syndromic surveillance. In particular, the chief complaint—a short string that describes a patient’s symptoms—has come to be a vital resource for syndromic surveillance in the North American context due to its near ubiquity. This paper reviews fifteen systems in North America—at the city, county, state and federal level—that use chief complaints for syndromic surveillance. | Using chief complaints for syndromic surveillance: A review of chief complaint based classifiers in North America |
S1532046413000476 | A new model of health care is emerging in which individuals can take charge of their health by connecting to online communities and social networks for personalized support and collective knowledge. Web 2.0 technologies expand the traditional notion of online support groups into a broad and evolving range of informational, emotional, as well as community-based concepts of support. In order to apply these technologies to patient-centered care, it is necessary to incorporate more inclusive conceptual frameworks of social support and community-based research methodologies. This paper introduces a conceptualization of online social support, reviews current challenges in online support research, and outlines six recommendations for the design, evaluation, and implementation of social support in online communities, networks, and groups. The six recommendations are illustrated by CanConnect, an online community for cancer survivors in middle Tennessee. These recommendations address the interdependencies between online and real-world support and emphasize an inclusive framework of interpersonal and community-based support. The applications of these six recommendations are illustrated through a discussion of online support for cancer survivors. | Recommendations for the design, implementation and evaluation of social support in online communities, networks, and groups |
S153204641300049X | Purpose: Despite years of effort and millions of dollars spent to create unified electronic communicable disease reporting systems, the goal remains elusive. A major barrier has been a lack of understanding by system designers of communicable disease (CD) work and the public health workers who perform this work. This study reports on the application of user-centered design representations, traditionally used for improving interface design, to translate the complex CD work identified through ethnographic studies in order to guide designers and developers of CD systems. The purpose of this work is to: (1) better understand public health practitioners and their information workflow with respect to CD monitoring and control at a local health agency, and (2) develop evidence-based design representations that model this CD work to inform the design of future disease surveillance systems. Methods: We performed extensive onsite semi-structured interviews, targeted work shadowing and a focus group to characterize local health agency CD workflow. Informed by principles of design ethnography and user-centered design, we created personas, scenarios and user stories to accurately represent the user to system designers. Results: We sought to convey to designers the key findings from the ethnographic studies: (1) public health CD work is mobile and episodic, in contrast to current CD reporting systems, which are stationary and fixed, (2) health agency efforts are focused on CD investigation and response rather than reporting, and (3) current CD information systems must conform to public health workflow to ensure their usefulness. To illustrate our findings to designers, we developed three contemporary design-support representations: persona, scenario, and user story. Conclusions: Through the application of user-centered design principles, we were able to create design representations that illustrate complex public health communicable disease workflow and key user characteristics to inform the design of CD information systems for public health. | Scenarios, personas and user stories: User-centered evidence-based design representations of communicable disease investigations |
S1532046413000506 | Usability testing is recognized as an effective means to improve the usability of medical devices and prevent harm for patients and users. Effectiveness of problem discovery in usability testing strongly depends on size and representativeness of the sample. We introduce the late control strategy, which is to continuously monitor effectiveness of a study towards a preset target. A statistical model, the LNBzt model, is presented, supporting the late control strategy. We report on a case study, where a prototype medical infusion pump underwent a usability test with 34 users. On the data obtained in this study, the LNBzt model is evaluated and compared against earlier prediction models. The LNBzt model fits the data much better than previously suggested approaches and improves prediction. We measure the effectiveness of problem identification, and observe that it is lower than is suggested by much of the literature. Larger sample sizes seem to be in order. In addition, the testing process showed high levels of uncertainty and volatility at small to moderate sample sizes, partly due to users’ individual differences. In reaction, we propose the idiosyncrasy score as a means to obtain representative samples. Statistical programs are provided to assist practitioners and researchers in applying the late control strategy. | With how many users should you test a medical infusion pump? Sampling strategies for usability tests on high-risk systems |
S1532046413000518 | The potential of plant-based remedies has been documented in both traditional and contemporary biomedical literature. Such text sources may thus serve to identify potential plant-based therapies (“phyto-therapies”). Concept-based analytic approaches have been shown to uncover knowledge embedded within biomedical literature. However, to date there has been limited attention to leveraging such techniques for the identification of potential phyto-therapies. This study presents concept-based analytic approaches for the retrieval and ranking of associations between plants and human diseases. Focusing on the identification of phyto-therapies described in MEDLINE, both MeSH descriptors used for indexing and MetaMap-inferred UMLS concepts are considered. Furthermore, the identification and ranking consider both direct (i.e., plant concepts directly correlated with disease concepts) and inferred (i.e., plant concepts associated with disease concepts based on shared signs and symptoms) relationships. Based on the two scoring methodologies used in this study, it was found that a Vector Space Model approach outperformed probabilistic reliability-based inferences. An evaluation of the approach is provided based on therapeutic interventions catalogued in both ClinicalTrials.gov and NDF-RT. The promising findings from this feasibility study highlight the challenges and applicability of concept-based analytic strategies for distilling phyto-therapeutic knowledge from text-based knowledge sources like MEDLINE. | Leveraging concept-based approaches to identify potential phyto-therapies |
S153204641300052X | In this paper we discuss the design and development of TRAK (Taxonomy for RehAbilitation of Knee conditions), an ontology that formally models information relevant for the rehabilitation of knee conditions. TRAK provides the framework that can be used to collect coded data in sufficient detail to support epidemiologic studies so that the most effective treatment components can be identified, new interventions developed and the quality of future randomized control trials improved to incorporate a control intervention that is well defined and reflects clinical practice. TRAK follows design principles recommended by the Open Biomedical Ontologies (OBO) Foundry. TRAK uses the Basic Formal Ontology (BFO) as the upper-level ontology and refers to other relevant ontologies such as Information Artifact Ontology (IAO), Ontology for General Medical Science (OGMS) and Phenotype And Trait Ontology (PATO). TRAK is orthogonal to other bio-ontologies and represents domain-specific knowledge about treatments and modalities used in rehabilitation of knee conditions. Definitions of typical exercises used as treatment modalities are supported with appropriate illustrations, which can be viewed in the OBO-Edit ontology editor. The vast majority of other classes in TRAK are cross-referenced to the Unified Medical Language System (UMLS) to facilitate future integration with other terminological sources. TRAK is implemented in OBO, a format widely used by the OBO community. TRAK is available for download from http://www.cs.cf.ac.uk/trak. In addition, its public release can be accessed through BioPortal, where it can be browsed, searched and visualized. | TRAK ontology: Defining standard care for the rehabilitation of knee conditions |
S1532046413000531 | We describe a clinical research visit scheduling system that can potentially coordinate clinical research visits with patient care visits and increase efficiency at clinical sites where clinical and research activities occur simultaneously. Participatory Design methods were applied to support requirements engineering and to create this software called Integrated Model for Patient Care and Clinical Trials (IMPACT). Using a multi-user constraint satisfaction and resource optimization algorithm, IMPACT automatically synthesizes temporal availability of various research resources and recommends the optimal dates and times for pending research visits. We conducted scenario-based evaluations with 10 clinical research coordinators (CRCs) from diverse clinical research settings to assess the usefulness, feasibility, and user acceptance of IMPACT. We obtained qualitative feedback using semi-structured interviews with the CRCs. Most CRCs acknowledged the usefulness of IMPACT features. Support for collaboration within research teams and interoperability with electronic health records and clinical trial management systems were highly requested features. Overall, IMPACT received satisfactory user acceptance and proves to be potentially useful for a variety of clinical research settings. Our future work includes comparing the effectiveness of IMPACT with that of existing scheduling solutions on the market and conducting field tests to formally assess user adoption. | An Integrated Model for Patient Care and Clinical Trials (IMPACT) to support clinical research visit scheduling workflow for future learning health systems |
S1532046413000671 | Whilst the future of social media in chronic disease management appears promising, there is limited concrete evidence indicating whether and how social media use significantly improves patient outcomes. This review examines the health outcomes and related effects of using social media, while also exploring the unique affordances underpinning these effects. Few studies have investigated social media’s potential in chronic disease, but those we found indicate that the impact on health status and other effects are positive, with none indicating adverse events. Benefits have been reported for psychosocial management via the ability to foster support and share information; however, there is less evidence of benefits for physical condition management. We found that studies covered a very limited range of social media platforms and that there is an ongoing propensity towards reporting investigations of earlier social platforms, such as online support groups (OSG), discussion forums and message boards. Finally, it is hypothesized that for social media to form a more meaningful part of effective chronic disease management, interventions need to be tailored to the individualized needs of sufferers. The particular affordances of social media that appear salient in this regard from analysis of the literature include: identity, flexibility, structure, narration and adaptation. This review suggests that further research of high methodological quality is required to investigate the affordances of social media and how these can best serve chronic disease sufferers. Evidence-based practice (EBP) using social media may then be considered. | Health outcomes and related effects of using social media in chronic disease management: A literature review and analysis of affordances
S1532046413000683 | This paper proposes an encoding system for 1D biomedical signals that allows embedding metadata and provides security and privacy. The design is based on the analysis of requirements for secure and efficient storage, transmission and access to medical tests in an e-health environment. This approach uses the 1D SPIHT algorithm to compress 1D biomedical signals with clinical quality, metadata embedding in the compressed domain to avoid extra distortion, digital signature to implement security and attribute-level encryption to support Role-Based Access Control. The implementation has been extensively tested using standard electrocardiogram and electroencephalogram databases (MIT-BIH Arrhythmia, MIT-BIH Compression and SCCN-EEG), demonstrating high embedding capacity (e.g. 3KB in resting ECGs, 200KB in stress tests, 30MB in ambulatory ECGs), short delays (2–3.3s in real-time transmission) and compression of the signal (by ≃3 in real-time transmission, by ≃5 in offline operation) despite the embedding of security elements and metadata to enable e-health services. | Secure information embedding into 1D biomedical signals based on SPIHT
S1532046413000695 | Although technological or organizational systems that enforce systematic procedures and best practices can lead to improvements in quality, these systems must also be designed to allow users to adapt to the inherent uncertainty, complexity, and variations in healthcare. We present a framework, called Systematic Yet Flexible Systems Analysis (SYFSA) that supports the design and analysis of Systematic Yet Flexible (SYF) systems (whether organizational or technical) by formally considering the tradeoffs between systematicity and flexibility. SYFSA is based on analyzing a task using three related problem spaces: the idealized space, the natural space, and the system space. The idealized space represents the best practice—how the task is to be accomplished under ideal conditions. The natural space captures the task actions and constraints on how the task is currently done. The system space specifies how the task is done in a redesigned system, including how it may deviate from the idealized space, and how the system supports or enforces task constraints. The goal of the framework is to support the design of systems that allow graceful degradation from the idealized space to the natural space. We demonstrate the application of SYFSA for the analysis of a simplified central line insertion task. We also describe several information-theoretic measures of flexibility that can be used to compare alternative designs, and to measure how efficiently a system supports a given task, the relative cognitive workload, and learnability. | SYFSA: A framework for Systematic Yet Flexible Systems Analysis |
S1532046413000701 | Clinical decision-support systems (CDSSs) comprise systems as diverse as sophisticated platforms to store and manage clinical data, tools to alert clinicians of problematic situations, or decision-making tools to assist clinicians. Irrespective of the kind of decision-support task CDSSs should be smoothly integrated within the clinical information system, interacting with other components, in particular with the electronic health record (EHR). However, despite decades of developments, most CDSSs lack interoperability features. We deal with the interoperability problem of CDSSs and EHRs by exploiting the dual-model methodology. This methodology distinguishes a reference model and archetypes. A reference model is represented by a stable and small object-oriented model that describes the generic properties of health record information. For their part, archetypes are reusable and domain-specific definitions of clinical concepts in the form of structured and constrained combinations of the entities of the reference model. We rely on archetypes to make the CDSS compatible with EHRs from different institutions. Concretely, we use archetypes for modelling the clinical concepts that the CDSS requires, in conjunction with a series of knowledge-intensive mappings relating the archetypes to the data sources (EHR and/or other archetypes) they depend on. We introduce a comprehensive approach, including a set of tools as well as methodological guidelines, to deal with the interoperability of CDSSs and EHRs based on archetypes. Archetypes are used to build a conceptual layer of the kind of a virtual health record (VHR) over the EHR whose contents need to be integrated and used in the CDSS, associating them with structural and terminology-based semantics. Subsequently, the archetypes are mapped to the EHR by means of an expressive mapping language and specific-purpose tools. We also describe a case study where the tools and methodology have been employed in a CDSS to support patient recruitment in the framework of a clinical trial for colorectal cancer screening. The utilisation of archetypes not only has proved satisfactory to achieve interoperability between CDSSs and EHRs but also offers various advantages, in particular from a data model perspective. First, the VHR/data models we work with are of a high level of abstraction and can incorporate semantic descriptions. Second, archetypes can potentially deal with different EHR architectures, due to their deliberate independence of the reference model. Third, the archetype instances we obtain are valid instances of the underlying reference model, which would enable e.g. feeding back the EHR with data derived by abstraction mechanisms. Lastly, the medical and technical validity of archetype models would be assured, since in principle clinicians should be the main actors in their development. | Interoperability of clinical decision-support systems and electronic health records using archetypes: A case study in clinical trial eligibility |
S1532046413000713 | PharmGKB is a leading resource of high quality pharmacogenomics data that provides information about how genetic variations modulate an individual’s response to drugs. PharmGKB contains information about genetic variations, pharmacokinetic and pharmacodynamic pathways, and the effect of variations on drug-related phenotypes. These relationships are represented using very general terms, however, and the precise semantic relationships among drugs and diseases are not often captured. In this paper we develop a protocol to detect and disambiguate general clinical associations between drugs and diseases using more precise annotation terms from other data sources. PharmGKB provides very detailed clinical associations between genetic variants and drug response, including genotype-specific drug dosing guidelines, and this procedure will enrich PharmGKB with comparably precise drug–disease annotations. The availability of more detailed data will help investigators to conduct more precise queries, such as finding particular diseases caused or treated by a specific drug. We first mapped drugs extracted from PharmGKB drug–disease relationships to those in the National Drug File Reference Terminology (NDF-RT) and to Structured Product Labels (SPLs). Specifically, we retrieved drug and disease role relationships describing and defining concepts according to their relationships with other concepts from NDF-RT. We also used the NCBO (National Center for Biomedical Ontology) annotator to annotate disease terms from the free text extracted from five SPL sections (indication, contraindication, ADE, precaution, and warning). Finally, we used the detailed drug and disease relationship information from NDF-RT and the SPLs to annotate and disambiguate the more general PharmGKB drug and disease associations. | Disambiguation of PharmGKB drug–disease relations with NDF-RT and SPL
S1532046413000725 | Objectives Drug safety surveillance using observational data requires valid adverse event, or health outcome of interest (HOI) measurement. The objectives of this study were to develop a method to review HOI definitions in claims databases using (1) web-based digital tools to present de-identified patient data, (2) a systematic expert panel review process, and (3) a data collection process enabling analysis of concepts-of-interest that influence panelists’ determination of HOI. Methods De-identified patient data were presented via an interactive web-based dashboard to enable case review and determine if specific HOIs were present or absent. Criteria for determining HOIs and their severity were provided to each panelist. Using a modified Delphi method, six panelist pairs independently reviewed approximately 200 cases across each of three HOIs (acute liver injury, acute kidney injury, and acute myocardial infarction) such that panelist pairs independently reviewed the same cases. Panelists completed an assessment within the dashboard for each case that included their assessment of the presence or absence of the HOI, HOI severity (if present), and data contributing to their decision. Discrepancies within panelist pairs were resolved during a consensus process. Results Dashboard development was iterative, focusing on data presentation and recording panelists’ assessments. Panelists reported quickly learning how to use the dashboard. The assessment module was used consistently. The dashboard was reliable, enabling an efficient review process for panelists. Modifications were made to the dashboard and review process when necessary to facilitate case review. Our methods should be applied to other health outcomes of interest to further refine the dashboard and case review process. Conclusion The expert review process was effective and was supported by the web-based dashboard. Our methods for case review and classification can be applied to future methods for case identification in observational data sources. | Developing an expert panel process to refine health outcome definitions in observational data |
S1532046413000737 | As information technology permeates healthcare (particularly provider-facing systems), maximizing system effectiveness requires the ability to document and analyze tricky or troublesome usage scenarios. However, real-world health IT systems are typically replete with privacy-sensitive data regarding patients, diagnoses, clinicians, and EMR user interface details; instrumentation for screen capture (capturing and recording the scenario depicted on the screen) needs to respect these privacy constraints. Furthermore, real-world health IT systems are typically composed of modules from many sources, mission-critical and often closed-source; any instrumentation for screen capture can rely neither on access to structured output nor access to software internals. In this paper, we present a tool to help solve this problem: a system that combines keyboard video mouse (KVM) capture with automatic text redaction (and interactively selectable unredaction) to produce precise technical content that can enrich stakeholder communications and improve end-user influence on system evolution. KVM-based capture makes our system both application-independent and OS-independent because it eliminates software-interface dependencies on capture targets. Using a corpus of EMR screenshots, we present empirical measurements of redaction effectiveness and processing latency to demonstrate system performances. We discuss how these techniques can translate into instrumentation systems that improve real-world health IT deployments. | Privacy-preserving screen capture: Towards closing the loop for health IT usability |
S1532046413000749 | Our main interest in supervised classification of gene expression data is to infer whether the expressions can discriminate biological characteristics of samples. With thousands of gene expressions to consider, gene selection has been advocated to improve classification by including only the discriminating genes. We propose to make the gene selection based on partial least squares and logistic regression random-effects (RE) estimates before the selected genes are evaluated in classification models. We compare the selection with that based on the two-sample t-statistics, a current practice, and modified t-statistics. The results indicate that gene selection based on logistic regression RE estimates is recommended in a general situation, while the selection based on the PLS estimates is recommended when the number of samples is low. Gene selection based on the modified t-statistics performs well when the genes exhibit moderate-to-high variability with moderate group separation. Respecting the characteristics of the data is a key aspect to consider in gene selection. | Partial least squares and logistic regression random-effects estimates for gene selection in supervised classification of gene expression data
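As a concrete reference point, the two-sample t-statistic baseline mentioned in the abstract can be sketched as follows on a simulated expression matrix; the PLS and logistic regression random-effects estimators themselves are not reproduced here.

```python
# Illustrative gene ranking by two-sample t-statistics on simulated data.
import numpy as np

rng = np.random.default_rng(0)
n_genes, n_per_group = 500, 20
X0 = rng.normal(0.0, 1.0, size=(n_per_group, n_genes))   # class 0 samples
X1 = rng.normal(0.0, 1.0, size=(n_per_group, n_genes))   # class 1 samples
X1[:, :10] += 1.5                                         # 10 truly discriminating genes

def two_sample_t(x0, x1):
    m0, m1 = x0.mean(axis=0), x1.mean(axis=0)
    v0, v1 = x0.var(axis=0, ddof=1), x1.var(axis=0, ddof=1)
    se = np.sqrt(v0 / len(x0) + v1 / len(x1))
    return (m1 - m0) / se

t = two_sample_t(X0, X1)
top = np.argsort(-np.abs(t))[:10]   # keep the 10 most discriminating genes
print("selected gene indices:", sorted(top))
```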
S1532046413000750 | In order to enable secondary use of Electronic Health Records (EHRs) by bridging the interoperability gap between clinical care and research domains, in this paper a unified methodology and the supporting framework are introduced which bring together the power of metadata registries (MDR) and semantic web technologies. We introduce a federated semantic metadata registry framework by extending the ISO/IEC 11179 standard, and enable integration of data element registries through Linked Open Data (LOD) principles, where each Common Data Element (CDE) can be uniquely referenced, queried and processed to enable syntactic and semantic interoperability. Each CDE and its components are maintained as LOD resources enabling semantic links with other CDEs, terminology systems and implementation-dependent content models; hence facilitating semantic search, more effective reuse and semantic interoperability across different application domains. There are several important efforts addressing semantic interoperability in the healthcare domain, such as the IHE DEX profile proposal, CDISC SHARE and CDISC2RDF. Our architecture complements these by providing a framework to interlink existing data element registries and repositories, multiplying their potential for semantic interoperability. The open source implementation of the federated semantic MDR framework presented in this paper is the core of the semantic interoperability layer of the SALUS project, which enables the execution of post-marketing safety analysis studies on top of existing EHR systems. | A federated semantic metadata registry framework for enabling interoperability across clinical research and care domains
S1532046413000762 | Previous research on standardization of eligibility criteria and its feasibility has traditionally been conducted on clinical trial protocols from ClinicalTrials.gov (CT). The portability and use of such standardization for full-text industry-standard protocols has not been studied in-depth. Towards this end, in this study we first compare the representation characteristics and textual complexity of a set of Pfizer’s internal full-text protocols to their corresponding entries in CT. Next, we identify clusters of similar criteria sentences from both full-text and CT protocols and outline methods for standardized representation of eligibility criteria. We also study the distribution of eligibility criteria in full-text and CT protocols with respect to pre-defined semantic classes used for eligibility criteria classification. We find that in comparison to full-text protocols, CT protocols are not only more condensed but also convey less information. We also find no correlation between the variations in word-counts of the ClinicalTrials.gov and full-text protocols. While we identify 65 and 103 clusters of inclusion and exclusion criteria from full text protocols, our methods found only 36 and 63 corresponding clusters from CT protocols. For both the full-text and CT protocols we are able to identify ‘templates’ for standardized representations with full-text standardization being more challenging of the two. In our exploration of the semantic class distributions we find that the majority of the inclusion criteria from both full-text and CT protocols belong to the semantic class “Diagnostic and Lab Results” while “Disease, Sign or Symptom” forms the majority for exclusion criteria. Overall, we show that developing a template set of eligibility criteria for clinical trials, specifically in their full-text form, is feasible and could lead to more efficient clinical trial protocol design. | Analysis of eligibility criteria representation in industry-standard clinical trial protocols |
S1532046413000774 | Objective Pediatric dose rounding is a unique and complex process whose complexity is rarely supported by e-prescribing systems, though amenable to automation and deployment from a central service provider. The goal of this project was to validate an automated dose-rounding algorithm for pediatric dose rounding. Methods We developed a dose-rounding algorithm, STEPSTools, based on expert consensus about the rounding process and knowledge about the therapeutic/toxic window for each medication. We then used a 60% subsample of electronically-generated prescriptions from one academic medical center to further refine the web services. Once all issues were resolved, we used the remaining 40% of the prescriptions as a test sample and assessed the degree of concordance between automatically calculated optimal doses and the doses in the test sample. Cases with discrepant doses were compiled in a survey and assessed by pediatricians from two academic centers. The response rate for the survey was 25%. Results Seventy-nine test cases were tested for concordance. For 20 cases, STEPSTools was unable to provide a recommended dose. The dose recommendation provided by STEPSTools was identical to that of the test prescription for 31 cases. For 14 out of the 24 discrepant cases included in the survey, respondents significantly preferred STEPSTools recommendations (p <0.05, binomial test). Overall, when combined with the data from all test cases, STEPSTools either matched or exceeded the performance of the test cases in 45/59 (76%) of the cases. The majority of other cases were challenged by the need to provide an extremely small dose. We estimated that with the addition of two dose-selection rules, STEPSTools would achieve an overall performance of 82% or higher. Conclusions Results of this pilot study suggest that automated dose rounding is a feasible mechanism for providing guidance to e-prescribing systems. These results also demonstrate the need for validating decision-support systems to support targeted and iterative improvement in performance. | Assessing the reliability of an automated dose-rounding algorithm |
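For illustration only, a toy dose-rounding helper in the spirit of the task is sketched below; the increment, tolerance and fallback behaviour are assumptions and bear no relation to the actual STEPSTools rule base.

```python
# Purely illustrative dose rounding: round a weight-based dose to a measurable
# increment and defer to the prescriber if the rounded dose strays too far from
# the calculated optimal dose. Parameters are hypothetical.
def round_dose(weight_kg, mg_per_kg, increment_mg, tolerance=0.10):
    optimal = weight_kg * mg_per_kg
    rounded = round(optimal / increment_mg) * increment_mg
    if rounded == 0 or abs(rounded - optimal) / optimal > tolerance:
        return None   # no safe rounded dose within tolerance
    return rounded

print(round_dose(weight_kg=14.2, mg_per_kg=10, increment_mg=25))  # 150 (within 10% of 142)
print(round_dose(weight_kg=1.5, mg_per_kg=10, increment_mg=25))   # None (25 is too far from 15)
```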
S1532046413000786 | The use of family information is a key issue in dealing with inherited illnesses. This kind of information usually comes in the form of pedigree files, which contain structured information, as trees or graphs, explaining the family relationships. Knowledge-based systems should incorporate the information gathered by pedigree tools to support medical decision making. In this paper, we propose a method to achieve such a goal, which consists of the definition of new indicators, together with methods and rules to compute them from family trees. The method is illustrated with several case studies. We provide information about its implementation and integration in a case-based reasoning tool. The method has been experimentally tested with breast cancer diagnosis data. The results show the feasibility of our methodology. | Enabling the use of hereditary information from pedigree tools in medical knowledge-based systems
S1532046413000798 | Natural language processing (NLP) is crucial for advancing healthcare because it is needed to transform relevant information locked in text into structured data that can be used by computer processes aimed at improving patient care and advancing medicine. In light of the importance of NLP to health, the National Library of Medicine (NLM) recently sponsored a workshop to review the state of the art in NLP focusing on text in English, both in biomedicine and in the general language domain. Specific goals of the NLM-sponsored workshop were to identify the current state of the art, grand challenges and specific roadblocks, and to identify effective use and best practices. This paper reports on the main outcomes of the workshop, including an overview of the state of the art, strategies for advancing the field, and obstacles that need to be addressed, resulting in recommendations for a research agenda intended to advance the field. | Natural language processing: State of the art and prospects for significant progress, a workshop sponsored by the National Library of Medicine |
S1532046413000816 | Surgical Process Modelling (SPM) was introduced to improve understanding the different parameters that influence the performance of a Surgical Process (SP). Data acquired from SPM methodology is enormous and complex. Several analysis methods based on comparison or classification of Surgical Process Models (SPMs) have previously been proposed. Such methods compare a set of SPMs to highlight specific parameters explaining differences between populations of patients, surgeons or systems. In this study, procedures performed at three different international University hospitals were compared using SPM methodology based on a similarity metric focusing on the sequence of activities occurring during surgery. The proposed approach is based on Dynamic Time Warping (DTW) algorithm combined with a clustering algorithm. SPMs of 41 Anterior Cervical Discectomy (ACD) surgeries were acquired at three Neurosurgical departments; in France, Germany, and Canada. The proposed approach distinguished the different surgical behaviors according to the location where surgery was performed as well as between the categorized surgical experience of individual surgeons. We also propose the use of Multidimensional Scaling to induce a new space of representation of the sequences of activities. The approach was compared to a time-based approach (e.g. duration of surgeries) and has been shown to be more precise. We also discuss the integration of other criteria in order to better understand what influences the way the surgeries are performed. This first multi-site study represents an important step towards the creation of robust analysis tools for processing SPMs. It opens new perspectives for the assessment of surgical approaches, tools or systems as well as objective assessment and comparison of surgeon’s expertise. | Multi-site study of surgical practice in neurosurgery based on surgical process models |
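A minimal version of the sequence-comparison step can be sketched with a plain Dynamic Time Warping recursion over symbolic activity lists, assuming a simple 0/1 substitution cost; the published approach pairs such distances with clustering and multidimensional scaling, and its actual cost model may differ.

```python
# DTW distance between two symbolic activity sequences (toy activity codes).
import numpy as np

def dtw_distance(seq_a, seq_b, cost=lambda x, y: 0.0 if x == y else 1.0):
    n, m = len(seq_a), len(seq_b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            c = cost(seq_a[i - 1], seq_b[j - 1])
            D[i, j] = c + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Hypothetical activity sequences for two recorded ACD procedures.
spm_site_a = ["incision", "retraction", "discectomy", "irrigation", "closure"]
spm_site_b = ["incision", "retraction", "hemostasis", "discectomy", "closure"]
print(dtw_distance(spm_site_a, spm_site_b))  # pairwise distance fed to clustering
```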
S1532046413000828 | Introduction Social networks applied through Web 2.0 tools have gained importance in the health domain because they improve communication and coordination capabilities among health professionals. This is highly relevant for multimorbidity patient care because a large number of health professionals are in charge of patient care, and this requires reaching clinical consensus in their decisions. Our objective is to develop a tool for collaborative work among health professionals for multimorbidity patient care. We describe the architecture to incorporate decision support functionalities in a social network tool to enable the adoption of shared decisions among health professionals from different care levels. As part of the first stage of the project, this paper describes the results obtained in a pilot study about acceptance and use of the social network component in our healthcare setting. Methods At Virgen del Rocío University Hospital we have designed and developed the Shared Care Platform (SCP) to provide support in the continuity of care for multimorbidity patients. The SCP has two consecutively developed components: a social network component, called the Clinical Wall, and a Clinical Decision Support (CDS) system. The Clinical Wall contains a record where health professionals are able to debate and define shared decisions. We conducted a pilot study to assess the use and acceptance of the SCP by healthcare professionals through a questionnaire based on the Technology Acceptance Model. Results In March 2012 we released and deployed the SCP, but only with the social network component. The pilot project lasted 6 months in the hospital and 2 primary care centers. From March to September 2012 we created 16 records in the Clinical Wall, all with a high priority. A total of 10 professionals took part in the exchange of messages: 3 internists and 7 general practitioners generated 33 messages. 12 of the 16 records (75%) were answered by the destination health professionals. The professionals rated all the items in the questionnaire positively. As part of the SCP, open-source tools for CDS will be incorporated to provide recommendations for medication and problem interactions, as well as to calculate indexes or scales from validated questionnaires. They will receive the patient summary information provided by the regional Electronic Health Record system through a web service, with the information defined according to the virtual Medical Record specification. Conclusions The Clinical Wall has been developed to allow communication and coordination between the healthcare professionals involved in multimorbidity patient care. Agreed decisions concerned coordination of appointment changes, patient conditions, diagnostic tests, and prescription changes and renewals. The application of interoperability standards and open-source software can bridge the gap between knowledge and clinical practice, while enabling interoperability and scalability. Open-source software combined with the social network encourages adoption and facilitates collaboration. Although the results obtained for the use indicators are still not as high as expected, based on the promising results of the SCP acceptance questionnaire we expect that the new CDS tools will increase use by health professionals. | Sharing clinical decisions for multimorbidity case management using social network and open-source tools
S153204641300083X | Objective To report on the results of a review concerning the use of mobile phones for health with older adults. Methods PubMed and CINAHL were searched for articles using “older adults” and “mobile phones” along with related terms and synonyms between 1965 and June 2012. Identified articles were filtered by the following inclusion criteria: the study was an original research project utilizing a mobile phone as an intervention, involved or targeted adults 60 years of age or older, and had an aim emphasizing the mobile phone’s use in health. Results Twenty-one different articles were found and categorized into ten different clinical domains, including diabetes, activities of daily life, and dementia care, among others. The largest group of articles focused on diabetes care (4 articles), followed by COPD (3 articles), Alzheimer’s/dementia care (3 articles) and osteoarthritis (3 articles). Areas of interest studied included feasibility, acceptability, and effectiveness. While there were many different clinical domains, the majority of studies were pilot studies that needed more work to establish a stronger base of evidence. Conclusions Current work on using mobile phones with older adults is spread across a variety of clinical domains. While this work is promising, current studies are generally smaller feasibility studies, and thus future work is needed to establish a more generalizable, stronger base of evidence for the effectiveness of these interventions. | Older adults and mobile phones for health: A review
S1532046413000841 | Clinical practice guidelines (CPGs) aim to improve the quality of care, reduce unjustified practice variations and reduce healthcare costs. In order for them to be effective, clinical guidelines need to be integrated with the care flow and provide patient-specific advice when and where needed. Hence, their formalization as computer-interpretable guidelines (CIGs) makes it possible to develop CIG-based decision-support systems (DSSs), which have a better chance of impacting clinician behavior than narrative guidelines. This paper reviews the literature on CIG-related methodologies since the inception of CIGs, while focusing and drawing themes for classifying CIG research from CIG-related publications in the Journal of Biomedical Informatics (JBI). The themes span the entire life-cycle of CIG development and include: knowledge acquisition and specification for improved CIG design, including (1) CIG modeling languages and (2) CIG acquisition and specification methodologies, (3) integration of CIGs with electronic health records (EHRs) and organizational workflow, (4) CIG validation and verification, (5) CIG execution engines and supportive tools, (6) exception handling in CIGs, (7) CIG maintenance, including analyzing clinician’s compliance to CIG recommendations and CIG versioning and evolution, and finally (8) CIG sharing. I examine the temporal trends in CIG-related research and discuss additional themes that were not identified in JBI papers, including existing themes such as overcoming implementation barriers, modeling clinical goals, and temporal expressions, as well as futuristic themes, such as patient-centric CIGs and distributed CIGs. | Computer-interpretable clinical guidelines: A methodological review |
S1532046413000853 | We demonstrate the importance of explicit definitions of electronic health record (EHR) data completeness and how different conceptualizations of completeness may impact findings from EHR-derived datasets. This study has important repercussions for researchers and clinicians engaged in the secondary use of EHR data. We describe four prototypical definitions of EHR completeness: documentation, breadth, density, and predictive completeness. Each definition dictates a different approach to the measurement of completeness. These measures were applied to representative data from NewYork–Presbyterian Hospital’s clinical data warehouse. We found that according to any definition, the number of complete records in our clinical database is far lower than the nominal total. The proportion that meets criteria for completeness is heavily dependent on the definition of completeness used, and the different definitions generate different subsets of records. We conclude that the concept of completeness in EHR is contextual. We urge data consumers to be explicit in how they define a complete record and transparent about the limitations of their data. | Defining and measuring completeness of electronic health records for secondary use |
S1532046413000865 | Patient condition is a key element in communication between clinicians. However, there is no generally accepted definition of patient condition that is independent of diagnosis and that spans acuity levels. We report the development and validation of a continuous measure of general patient condition that is independent of diagnosis, and that can be used for medical-surgical as well as critical care patients. A survey of Electronic Medical Record data identified common, frequently collected non-static candidate variables as the basis for a general, continuously updated patient condition score. We used a new methodology to estimate in-hospital risk associated with each of these variables. A risk function for each candidate input was computed by comparing the final pre-discharge measurements with 1-year post-discharge mortality. Step-wise logistic regression of the variables against 1-year mortality was used to determine the importance of each variable. The final set of selected variables consisted of 26 clinical measurements from four categories: nursing assessments, vital signs, laboratory results and cardiac rhythms. We then constructed a heuristic model quantifying patient condition (overall risk) by summing the single-variable risks. The model’s validity was assessed against outcomes from 170,000 medical-surgical and critical care patients, using data from three US hospitals. Outcome validation across hospitals yields an area under the receiver operating characteristic curve (AUC) of ⩾0.92 when separating hospice/deceased from all other discharge categories, an AUC of ⩾0.93 when predicting 24-h mortality and an AUC of 0.62 when predicting 30-day readmissions. Correspondence with outcomes reflective of patient condition across the acuity spectrum indicates utility in both medical-surgical units and critical care units. The model output, which we call the Rothman Index, may provide clinicians with a longitudinal view of patient condition to help address known challenges in caregiver communication, continuity of care, and earlier detection of acuity trends. | Development and validation of a continuous measure of patient condition using the Electronic Medical Record
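The core idea of summing single-variable excess risks into one score, and checking its discrimination with an AUC, can be illustrated as below; the risk tables, baseline value and patient records are invented, whereas the published Rothman Index uses empirically estimated risk functions over 26 inputs.

```python
# Hedged sketch: a heuristic condition score as a sum of per-variable excess
# risks, with discrimination checked via ROC AUC on toy records.
from sklearn.metrics import roc_auc_score

RISK = {  # hypothetical 1-year mortality risk by variable level
    "heart_rate":   {"normal": 0.05, "tachycardic": 0.20},
    "creatinine":   {"normal": 0.05, "elevated": 0.25},
    "nursing_food": {"meets_standard": 0.04, "does_not_meet": 0.18},
}
BASELINE = 0.05

def condition_score(obs: dict) -> float:
    """Higher score means higher summed excess risk (worse condition)."""
    return sum(RISK[v][level] - BASELINE for v, level in obs.items())

patients = [  # (observations, observed adverse outcome)
    ({"heart_rate": "normal", "creatinine": "normal", "nursing_food": "meets_standard"}, 0),
    ({"heart_rate": "tachycardic", "creatinine": "elevated", "nursing_food": "does_not_meet"}, 1),
    ({"heart_rate": "normal", "creatinine": "elevated", "nursing_food": "meets_standard"}, 0),
    ({"heart_rate": "tachycardic", "creatinine": "normal", "nursing_food": "does_not_meet"}, 1),
]
scores = [condition_score(p) for p, _ in patients]
labels = [y for _, y in patients]
print("AUC:", roc_auc_score(labels, scores))
```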
S1532046413000877 | The Gene Ontology (GO), a set of three sub-ontologies, is one of the most popular bio-ontologies used for describing gene product characteristics. GO annotation data containing terms from multiple sub-ontologies and at different levels in the ontologies is an important source of implicit relationships between terms from the three sub-ontologies. Data mining techniques such as association rule mining that are tailored to mine from multiple ontologies at multiple levels of abstraction are required for effective knowledge discovery from GO annotation data. We present a data mining approach, Multi-ontology data mining at All Levels (MOAL) that uses the structure and relationships of the GO to mine multi-ontology multi-level association rules. We introduce two interestingness measures: Multi-ontology Support (MOSupport) and Multi-ontology Confidence (MOConfidence) customized to evaluate multi-ontology multi-level association rules. We also describe a variety of post-processing strategies for pruning uninteresting rules. We use publicly available GO annotation data to demonstrate our methods with respect to two applications (1) the discovery of co-annotation suggestions and (2) the discovery of new cross-ontology relationships. | Interestingness measures and strategies for mining multi-ontology multi-level association rules from gene ontology annotations for the discovery of new GO relationships |
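For orientation, a generic association-rule computation over co-annotated GO terms is sketched below with plain support and confidence; the gene-to-term annotations are hypothetical, and the paper's MOSupport and MOConfidence measures additionally exploit ontology structure and levels, which this toy code does not.

```python
# Plain support/confidence over pairs of co-annotated GO terms (toy data).
from itertools import combinations
from collections import Counter

annotations = {  # hypothetical gene -> GO term annotations across sub-ontologies
    "geneA": {"GO:0006915", "GO:0005524", "GO:0005739"},
    "geneB": {"GO:0006915", "GO:0005524"},
    "geneC": {"GO:0005524", "GO:0005739"},
    "geneD": {"GO:0006915", "GO:0005739"},
}
n = len(annotations)
pair_counts, term_counts = Counter(), Counter()
for terms in annotations.values():
    term_counts.update(terms)
    pair_counts.update(frozenset(p) for p in combinations(sorted(terms), 2))

for pair, c in pair_counts.most_common(3):
    a, b = tuple(pair)
    support = c / n                    # fraction of genes annotated with both terms
    confidence = c / term_counts[a]    # confidence of the rule a -> b
    print(f"{a} -> {b}: support={support:.2f}, confidence={confidence:.2f}")
```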
S1532046413000889 | Background Determining similarity between two individual concepts or two sets of concepts extracted from a free text document is important for various aspects of biomedicine, for instance, to find prior clinical reports for a patient that are relevant to the current clinical context. Using simple concept matching techniques, such as lexicon based comparisons, is typically not sufficient to determine an accurate measure of similarity. Methods In this study, we tested an enhancement to the standard document vector cosine similarity model in which ontological parent–child (is-a) relationships are exploited. For a given concept, we define a semantic vector consisting of all parent concepts and their corresponding weights as determined by the shortest distance between the concept and parent after accounting for all possible paths. Similarity between the two concepts is then determined by taking the cosine angle between the two corresponding vectors. To test the improvement over the non-semantic document vector cosine similarity model, we measured the similarity between groups of reports arising from similar clinical contexts, including anatomy and imaging procedure. We further applied the similarity metrics within a k-nearest-neighbor (k-NN) algorithm to classify reports based on their anatomical and procedure based groups. 2150 production CT radiology reports (952 abdomen reports and 1128 neuro reports) were used in testing with SNOMED CT, restricted to Body structure, Clinical finding and Procedure branches, as the reference ontology. Results The semantic algorithm preferentially increased the intra-class similarity over the inter-class similarity, with a 0.07 and 0.08 mean increase in the neuro–neuro and abdomen–abdomen pairs versus a 0.04 mean increase in the neuro–abdomen pairs. Using leave-one-out cross-validation in which each document was iteratively used as a test sample while excluding it from the training data, the k-NN based classification accuracy was shown in all cases to be consistently higher with the semantics based measure compared with the non-semantic case. Moreover, the accuracy remained steady even as k value was increased – for the two anatomy related classes accuracy for k =41 was 93.1% with semantics compared to 86.7% without semantics. Similarly, for the eight imaging procedures related classes, accuracy (for k =41) with semantics was 63.8% compared to 60.2% without semantics. At the same k, accuracy improved significantly to 82.8% and 77.4% respectively when procedures were logically grouped together into four classes (such as ignoring contrast information in the imaging procedure description). Similar results were seen at other k-values. Conclusions The addition of semantic context into the document vector space model improves the ability of the cosine similarity to differentiate between radiology reports of different anatomical and image procedure-based classes. This effect can be leveraged for document classification tasks, which suggests its potential applicability for biomedical information retrieval. | An ontology-based similarity measure for biomedical data – Application to radiology reports |
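The ancestor-weighted vector construction can be sketched in a few lines, assuming a toy is-a fragment and a 1/(1+distance) weighting; the paper's exact weighting over all shortest paths in SNOMED CT may differ.

```python
# Semantic vectors from is-a ancestors, weighted by shortest-path distance,
# compared with cosine similarity (toy hierarchy, illustrative weighting).
import math
from collections import deque

IS_A = {  # child -> parents (toy fragment)
    "bacterial pneumonia": ["pneumonia"],
    "viral pneumonia": ["pneumonia"],
    "pneumonia": ["lung disease"],
    "lung disease": ["disease"],
}

def ancestor_weights(concept):
    """Weight each concept and its ancestors by 1/(1 + shortest distance)."""
    dist = {concept: 0}
    queue = deque([concept])
    while queue:
        node = queue.popleft()
        for parent in IS_A.get(node, []):
            if parent not in dist or dist[node] + 1 < dist[parent]:
                dist[parent] = dist[node] + 1
                queue.append(parent)
    return {c: 1.0 / (1 + d) for c, d in dist.items()}

def cosine(u, v):
    dot = sum(u[k] * v[k] for k in set(u) & set(v))
    norm = math.sqrt(sum(x * x for x in u.values())) * math.sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

a, b = ancestor_weights("bacterial pneumonia"), ancestor_weights("viral pneumonia")
print(round(cosine(a, b), 3))  # positive because the two concepts share ancestors
```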
S1532046413000890 | Objective To compare linear and Laplacian SVMs on a clinical text classification task; to evaluate the effect of unlabeled training data on Laplacian SVM performance. Background The development of machine-learning based clinical text classifiers requires the creation of labeled training data, obtained via manual review by clinicians. Due to the effort and expense involved in labeling data, training data sets in the clinical domain are of limited size. In contrast, electronic medical record (EMR) systems contain hundreds of thousands of unlabeled notes that are not used by supervised machine learning approaches. Semi-supervised learning algorithms use both labeled and unlabeled data to train classifiers, and can outperform their supervised counterparts. Methods We trained support vector machines (SVMs) and Laplacian SVMs on a training reference standard of 820 abdominal CT, MRI, and ultrasound reports labeled for the presence of potentially malignant liver lesions that require follow up (positive class prevalence 77%). The Laplacian SVM used 19,845 randomly sampled unlabeled notes in addition to the training reference standard. We evaluated SVMs and Laplacian SVMs on a test set of 520 labeled reports. Results The Laplacian SVM trained on labeled and unlabeled radiology reports significantly outperformed supervised SVMs (Macro-F1 0.773 vs. 0.741, Sensitivity 0.943 vs. 0.911, Positive Predictive value 0.877 vs. 0.883). Performance improved with the number of labeled and unlabeled notes used to train the Laplacian SVM (pearson’s ρ =0.529 for correlation between number of unlabeled notes and macro-F1 score). These results suggest that practical semi-supervised methods such as the Laplacian SVM can leverage the large, unlabeled corpora that reside within EMRs to improve clinical text classification. | Semi-supervised clinical text classification with Laplacian SVMs: An application to cancer case management |
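The distinguishing ingredient of the Laplacian SVM is a manifold-regularization term f^T L f built from a graph over labeled and unlabeled documents. The sketch below constructs only that ingredient, a k-NN graph and its Laplacian over toy TF-IDF vectors, rather than a full Laplacian SVM solver; the report texts are invented.

```python
# Graph Laplacian over labeled + unlabeled toy radiology snippets.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import kneighbors_graph

docs = [
    "hypodense liver lesion, follow up recommended",   # labeled positive
    "no focal liver lesion identified",                # labeled negative
    "indeterminate hepatic lesion, MRI suggested",     # unlabeled
    "normal abdominal ultrasound",                     # unlabeled
]
X = TfidfVectorizer().fit_transform(docs)

W = kneighbors_graph(X, n_neighbors=2, mode="connectivity", include_self=False)
W = (0.5 * (W + W.T)).toarray()          # symmetrized adjacency
L = np.diag(W.sum(axis=1)) - W           # unnormalized graph Laplacian

f = np.array([1.0, -1.0, 0.5, -0.5])     # hypothetical decision values
print("smoothness penalty f^T L f =", f @ L @ f)
```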
S1532046413000907 | With the growing understanding of complex diseases, the focus of drug discovery has shifted from the well-accepted “one target, one drug” model to a new “multi-target, multi-drug” model, aimed at systemically modulating multiple targets. In this context, polypharmacology has emerged as a new paradigm to overcome the recent decline in productivity of pharmaceutical research. However, finding methods to evaluate multicomponent therapeutics and to rank synergistic agent combinations is still a demanding task. At the same time, the data gathered on complex diseases has been progressively collected in public data and knowledge repositories, such as protein–protein interaction (PPI) databases. The PPI networks are increasingly used as universal platforms for data integration and analysis. A novel computational network-based approach for feasible and efficient identification of multicomponent synergistic agents is proposed in this paper. Given a complex disease, the method exploits the topological features of the related PPI network to identify possible combinations of hit targets. The best ranked combinations are subsequently computed on the basis of a synergistic score. We illustrate the potential of the method through a study on Type 2 Diabetes Mellitus. The results highlight its ability to retrieve novel target candidates, whose role is also confirmed by the analysis of the related literature. | Network-based target ranking for polypharmacological therapies
S1532046413001007 | Since the completion of the Human Genome project at the turn of the Century, there has been an unprecedented proliferation of genomic sequence data. A consequence of this is that the medical discoveries of the future will largely depend on our ability to process and analyse large genomic data sets, which continue to expand as the cost of sequencing decreases. Herein, we provide an overview of cloud computing and big data technologies, and discuss how such expertise can be used to deal with biology’s big data sets. In particular, big data technologies such as the Apache Hadoop project, which provides distributed and parallelised data processing and analysis of petabyte (PB) scale data sets will be discussed, together with an overview of the current usage of Hadoop within the bioinformatics community. | ‘Big data’, Hadoop and cloud computing in genomics |
S1532046413001019 | Motivation The inference, or ‘reverse-engineering’, of gene regulatory networks from expression data and the description of the complex dependency structures among genes are open issues in modern molecular biology. Results In this paper we compared three regularized methods of covariance selection for the inference of gene regulatory networks, developed to circumvent the problems arising when the number of observations n is smaller than the number of genes p. The examined approaches provided three alternative estimates of the inverse covariance matrix: (a) the ‘PINV’ method is based on the Moore–Penrose pseudoinverse, (b) the ‘RCM’ method performs correlation between regression residuals and (c) the ‘ℓ2C’ method maximizes a properly regularized log-likelihood function. Our extensive simulation studies showed that ℓ2C outperformed the other two methods, having the most predictive partial correlation estimates and the highest sensitivity for inferring conditional dependencies between genes, even when only a few observations were available. The application of this method to inferring gene networks of the isoprenoid biosynthesis pathways in Arabidopsis thaliana highlighted a negative partial correlation coefficient between the two hubs in the two isoprenoid pathways and, more importantly, provided evidence of cross-talk between genes in the plastidial and cytosolic pathways. When applied to gene expression data relative to a signature of the HRAS oncogene in human cell cultures, the method revealed 9 genes (p-value<0.0005) directly interacting with HRAS, sharing the same Ras-responsive binding site for the transcription factor RREB1. This result suggests that the transcriptional activation of these genes is mediated by a common transcription factor downstream of Ras signaling. Availability Software implementing the methods in the form of Matlab scripts is available at: http://users.ba.cnr.it/issia/iesina18/CovSelModelsCodes.zip. | A comparative study of covariance selection models for the inference of gene regulatory networks
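The 'PINV' baseline named above is easy to sketch: take the Moore–Penrose pseudoinverse of the (singular) sample covariance and convert it to partial correlations. The data are simulated and the regularized ℓ2C estimator is not reproduced.

```python
# Partial correlations from the pseudoinverse of a singular sample covariance.
import numpy as np

rng = np.random.default_rng(1)
n, p = 20, 50                              # fewer samples than genes
X = rng.normal(size=(n, p))

S = np.cov(X, rowvar=False)                # p x p sample covariance (singular)
Theta = np.linalg.pinv(S)                  # Moore-Penrose pseudoinverse

d = np.sqrt(np.diag(Theta))
partial_corr = -Theta / np.outer(d, d)     # r_ij = -theta_ij / sqrt(theta_ii * theta_jj)
np.fill_diagonal(partial_corr, 1.0)

# Candidate network edges: largest absolute off-diagonal partial correlations.
i, j = np.triu_indices(p, k=1)
top = np.argsort(-np.abs(partial_corr[i, j]))[:5]
print(list(zip(i[top], j[top], np.round(partial_corr[i, j][top], 2))))
```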
S1532046413001020 | More than 80% of biomedical data is embedded in plain text. The unstructured nature of these text-based documents makes it challenging to easily browse and query the data of interest in them. One approach to facilitate browsing and querying biomedical text is to convert the plain text to a linked web of data, i.e., converting data originally in free text to structured formats with defined meta-level semantics. In this paper, we introduce Semantator (Semantic Annotator), a semantic-web-based environment for annotating data of interest in biomedical documents, browsing and querying the annotated data, and interactively refining annotation results if needed. Through Semantator, information of interest can be either annotated manually or semi-automatically using plug-in information extraction tools. The annotated results will be stored in RDF and can be queried using the SPARQL query language. In addition, semantic reasoners can be directly applied to the annotated data for consistency checking and knowledge inference. Semantator has been released online and was used by the biomedical ontology community, who provided positive feedback. Our evaluation results indicated that (1) Semantator can perform the annotation functionalities as designed; (2) Semantator can be adopted in real applications in clinical and translational research; and (3) the annotated results using Semantator can be easily used in Semantic-web-based reasoning tools for further inference. | Semantator: Semantic annotator for converting biomedical text to linked data
S1532046413001032 | Temporal information in clinical narratives plays an important role in patients’ diagnosis, treatment and prognosis. In order to represent narrative information accurately, medical natural language processing (MLP) systems need to correctly identify and interpret temporal information. To promote research in this area, the Informatics for Integrating Biology and the Bedside (i2b2) project developed a temporally annotated corpus of clinical narratives. This corpus contains 310 de-identified discharge summaries, with annotations of clinical events, temporal expressions and temporal relations. This paper describes the process followed for the development of this corpus and discusses annotation guideline development, annotation methodology, and corpus quality. | Annotating temporal information in clinical narratives |
S1532046413001044 | Integration of clinical decision support services (CDSS) into electronic health records (EHRs) may be integral to widespread dissemination and use of clinical prediction rules in the emergency department (ED). However, the best way to design such services to maximize their usefulness in such a complex setting is poorly understood. We conducted a multi-site cross-sectional qualitative study whose aim was to describe the sociotechnical environment in the ED to inform the design of a CDSS intervention to implement the Pediatric Emergency Care Applied Research Network (PECARN) clinical prediction rules for children with minor blunt head trauma. Informed by a sociotechnical model consisting of eight dimensions, we conducted focus groups, individual interviews and workflow observations in 11 EDs, of which 5 were located in academic medical centers and 6 were in community hospitals. A total of 126 ED clinicians, information technology specialists, and administrators participated. We clustered data into 19 categories of sociotechnical factors through a process of thematic analysis and subsequently organized the categories into a sociotechnical matrix consisting of three high-level sociotechnical dimensions (workflow and communication, organizational factors, human factors) and three themes (interdisciplinary assessment processes, clinical practices related to prediction rules, EHR as a decision support tool). Design challenges that emerged from the analysis included the need to use structured data fields to support data capture and re-use while maintaining efficient care processes, supporting interdisciplinary communication, and facilitating family-clinician interaction for decision-making. | Informing the design of clinical decision support services for evaluation of children with minor blunt head trauma in the emergency department: A sociotechnical analysis |
S153204641300107X | Although biomedical information available in articles and patents is increasing exponentially, we continue to rely on the same information retrieval methods and use very few keywords to search millions of documents. We are developing a fundamentally different approach for finding much more precise and complete information with a single query using predicates instead of keywords for both query and document representation. Predicates are triples that are more complex data structures than keywords and contain more structured information. To make optimal use of them, we developed a new predicate-based vector space model and query-document similarity function with adjusted tf-idf weighting and a boost function. Using a test bed of 107,367 PubMed abstracts, we evaluated the first essential function: retrieving information. Cancer researchers provided 20 realistic queries, for which the top 15 abstracts were retrieved using a predicate-based (new) and keyword-based (baseline) approach. Each abstract was evaluated, double-blind, by cancer researchers on a 0–5 point scale to calculate precision (0 versus higher) and relevance (0–5 score). Precision was significantly higher (p < .001) for the predicate-based (80%) than for the keyword-based (71%) approach. Relevance was almost doubled with the predicate-based approach: 2.1 versus 1.6 without rank order adjustment (p < .001) and 1.34 versus 0.98 with rank order adjustment (p < .001) for the predicate-based versus keyword-based approach, respectively. Predicates can support more precise searching than keywords, laying the foundation for rich and sophisticated information search. | Development and evaluation of a biomedical search engine using a predicate-based vector space model
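The basic indexing idea, treating each predication triple as a vocabulary term in a tf-idf vector space and ranking documents by cosine similarity, can be sketched as below; the triples, tokenization and the engine's adjusted weighting and boost function are assumptions.

```python
# Triples serialized as single tokens, indexed with TF-IDF, ranked by cosine.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def to_terms(triples):
    """Serialize each (subject, relation, object) triple as one token."""
    return " ".join("|".join(t).replace(" ", "_") for t in triples)

docs = [  # hypothetical predications extracted from three abstracts
    [("tamoxifen", "TREATS", "breast cancer"), ("tamoxifen", "INHIBITS", "ESR1")],
    [("aspirin", "TREATS", "headache")],
    [("trastuzumab", "TREATS", "breast cancer"), ("trastuzumab", "INHIBITS", "ERBB2")],
]
query = [("tamoxifen", "TREATS", "breast cancer")]

vec = TfidfVectorizer(token_pattern=r"\S+")
D = vec.fit_transform([to_terms(d) for d in docs])
q = vec.transform([to_terms(query)])
print(cosine_similarity(q, D).ravel())   # rank abstracts by predicate overlap
```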
S1532046413001081 | Objectives The role of social media in biomedical knowledge mining, including clinical, medical and healthcare informatics, prescription drug abuse epidemiology and drug pharmacology, has become increasingly significant in recent years. Social media offers opportunities for people to share opinions and experiences freely in online communities, which may contribute information beyond the knowledge of domain professionals. This paper describes the development of a novel semantic web platform called PREDOSE (PREscription Drug abuse Online Surveillance and Epidemiology), which is designed to facilitate the epidemiologic study of prescription (and related) drug abuse practices using social media. PREDOSE uses web forum posts and domain knowledge, modeled in a manually created Drug Abuse Ontology (DAO – pronounced dow), to facilitate the extraction of semantic information from User Generated Content (UGC), through a combination of lexical, pattern-based and semantics-based techniques. In a previous study, PREDOSE was used to obtain the datasets from which new knowledge in drug abuse research was derived. Here, we report on various platform enhancements, including an updated DAO, new components for relationship and triple extraction, and tools for content analysis, trend detection and emerging patterns exploration, which enhance the capabilities of the PREDOSE platform. Given these enhancements, PREDOSE is now better equipped to impact drug abuse research by alleviating traditional labor-intensive content analysis tasks. Methods Using custom web crawlers that scrape UGC from publicly available web forums, PREDOSE first automates the collection of web-based social media content for subsequent semantic annotation. The annotation scheme is modeled in the DAO, and includes domain-specific knowledge such as prescription (and related) drugs, methods of preparation, side effects, and routes of administration. The DAO is also used to help recognize three types of data, namely: (1) entities, (2) relationships and (3) triples. PREDOSE then uses a combination of lexical and semantic-based techniques to extract entities and relationships from the scraped content, and a top-down approach for triple extraction that uses patterns expressed in the DAO. In addition, PREDOSE uses publicly available lexicons to identify initial sentiment expressions in text, and then a probabilistic optimization algorithm (from related research) to extract the final sentiment expressions. Together, these techniques enable the capture of fine-grained semantic information, which facilitates search, trend analysis and overall content analysis using social media on prescription drug abuse. Moreover, extracted data are also made available to domain experts for the creation of training and test sets for use in evaluating and refining the information extraction techniques. Results A recent evaluation of the information extraction techniques applied in the PREDOSE platform indicates 85% precision and 72% recall in entity identification, on a manually created gold standard dataset. In another study, PREDOSE achieved 36% precision in relationship identification and 33% precision in triple extraction, through manual evaluation by domain experts. Given the complexity of the relationship and triple extraction tasks and the abstruse nature of social media texts, we interpret these as favorable initial results. Extracted semantic information is currently in use in an online discovery support system by prescription drug abuse researchers at the Center for Interventions, Treatment and Addictions Research (CITAR) at Wright State University. Conclusion A comprehensive platform for entity, relationship, triple and sentiment extraction from such abstruse texts has never been developed for drug abuse research. PREDOSE has already demonstrated the importance of mining social media by providing data from which new findings in drug abuse research were uncovered. Given the recent platform enhancements, including the refined DAO, components for relationship and triple extraction, and tools for content, trend and emerging pattern analysis, it is expected that PREDOSE will play a significant role in advancing drug abuse epidemiology in the future. | PREDOSE: A semantic web platform for drug abuse epidemiology using social media
S153204641300110X | Efficient identification of patient, intervention, comparison, and outcome (PICO) components in medical articles is helpful in evidence-based medicine. The purpose of this study is to clarify whether first sentences of these components are good enough to train naive Bayes classifiers for sentence-level PICO element detection. We extracted 19,854 structured abstracts of randomized controlled trials with any P/I/O label from PubMed for training naive Bayes classifiers. The performance of classifiers trained on first sentences of each section (CF) and of those trained on all sentences (CA) was compared on all sentences by ten-fold cross-validation. The results measured by recall, precision, and F-measures show that there are no significant differences in performance between CF and CA for detection of O-elements (F-measure=0.731±0.009 vs. 0.738±0.010, p =0.123). However, CA perform better for I-elements, in terms of recall (0.752±0.012 vs. 0.620±0.007, p <0.001) and F-measures (0.728±0.006 vs. 0.662±0.007, p <0.001). For P-elements, CF have higher precision (0.714±0.009 vs. 0.665±0.010, p <0.001), but lower recall (0.766±0.013 vs. 0.811±0.012, p <0.001). CF are not always better than CA in sentence-level PICO element detection. Their performance varies in detecting different elements. | PICO element detection in medical text without metadata: Are first sentences enough?
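For readers unfamiliar with the setup in the entry above, the sketch below shows what sentence-level PICO detection with a naive Bayes classifier can look like in practice. It uses scikit-learn on a handful of invented sentences and a single binary label (outcome sentence or not); the actual study trained on structured PubMed abstracts and evaluated P, I, and O elements separately.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny illustrative training set: does a sentence state an Outcome (O) element?
sentences = [
    "The primary outcome was overall survival at five years.",
    "Patients were randomly assigned to receive metformin or placebo.",
    "Secondary outcomes included quality of life and adverse events.",
    "Eligible participants were adults with type 2 diabetes.",
]
labels = ["O", "not-O", "O", "not-O"]

# Bag-of-words (unigrams and bigrams) features feeding a multinomial naive Bayes model.
clf = make_pipeline(CountVectorizer(ngram_range=(1, 2)), MultinomialNB())
clf.fit(sentences, labels)
print(clf.predict(["The main outcome measure was 30-day mortality."]))
```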
S1532046413001111 | Background The association of genotyping information with common traits has not been satisfactorily solved. One of the most complex traits is pain, and association studies have so far failed to provide reproducible predictions of pain phenotypes from genotypes in the general population despite a well-established genetic basis of pain. We therefore aimed at developing a method able to prospectively and highly accurately predict pain phenotype from the underlying genotype. Methods Complex phenotypes and genotypes were obtained from experimental pain data including four different pain stimuli and genotypes with respect to 30 reportedly pain-relevant variants in 10 genes. The training data set was obtained in 125 healthy volunteers and the independent prospective test data set was obtained in 89 subjects. The approach involved supervised machine learning. Results The phenotype–genotype association was established in three major steps. First, the pain phenotype data was projected and clustered by means of emergent self-organizing map (ESOM) analysis and subsequent U-matrix visualization. Second, pain sub-phenotypes were identified by interpreting the cluster structure using classification and regression tree classifiers. Third, a supervised machine learning algorithm (Unweighted Label Rule generation) was applied to genetic markers reportedly modulating pain to obtain a complex genotype underlying the identified subgroups of subjects with homogeneous pain response. This procedure correctly identified 80% of the subjects as belonging to an extreme pain phenotype in an independently and prospectively assessed cohort. Conclusion The developed methodology is a suitable basis for complex genotype–phenotype associations in pain. It may provide personalized treatments of complex traits. Due to its generality, this new method should also be applicable to association tasks beyond pain. | A machine-learned knowledge discovery method for associating complex phenotypes with complex genotypes. Application to pain
S1532046413001123 | The management of drug–drug interactions (DDIs) is a critical issue resulting from the overwhelming amount of information available on them. Natural Language Processing (NLP) techniques can provide an interesting way to reduce the time spent by healthcare professionals on reviewing biomedical literature. However, NLP techniques rely mostly on the availability of annotated corpora. While there are several annotated corpora with biological entities and their relationships, there is a lack of corpora annotated with pharmacological substances and DDIs. Moreover, other works in this field have focused on pharmacokinetic (PK) DDIs only, not on pharmacodynamic (PD) DDIs. To address this problem, we have created a manually annotated corpus consisting of 792 texts selected from the DrugBank database and another 233 MEDLINE abstracts. This fine-grained corpus has been annotated with a total of 18,502 pharmacological substances and 5028 DDIs, including both PK and PD interactions. The quality and consistency of the annotation process has been ensured through the creation of annotation guidelines and has been evaluated by the measurement of the inter-annotator agreement between two annotators. The agreement was almost perfect (Kappa up to 0.96 and generally over 0.80), except for the DDIs in the MEDLINE abstracts (0.55–0.72). The DDI corpus has been used in the SemEval 2013 DDIExtraction challenge as a gold standard for the evaluation of information extraction techniques applied to the recognition of pharmacological substances and the detection of DDIs from biomedical texts. DDIExtraction 2013 has attracted wide attention with a total of 14 teams from 7 different countries. For the task of recognition and classification of pharmacological names, the best system achieved an F1 of 71.5%, while, for the detection and classification of DDIs, the best result was an F1 of 65.1%. These results show that the corpus has enough quality to be used for training and testing NLP techniques applied to the field of pharmacovigilance. The DDI corpus and the annotation guidelines are freely available for academic research at http://labda.inf.uc3m.es/ddicorpus. | The DDI corpus: An annotated corpus with pharmacological substances and drug–drug interactions
S1532046413001135 | Temporal information extraction from clinical narratives is of critical importance to many clinical applications. We participated in the EVENT/TIMEX3 track of the 2012 i2b2 clinical temporal relations challenge, and presented our temporal information extraction system, MedTime. MedTime comprises a cascade of rule-based and machine-learning pattern recognition procedures. It achieved a micro-averaged f-measure of 0.88 in the recognition of both clinical events and temporal expressions. We proposed and evaluated three time normalization strategies to normalize relative time expressions in clinical texts. The accuracy was 0.68 in normalizing temporal expressions of dates, times, durations, and frequencies. This study demonstrates and evaluates the integration of rule-based and machine-learning-based approaches for high-performance temporal information extraction from clinical narratives. | MedTime: A temporal information extraction system for clinical narratives
S1532046413001147 | The systems biology approach to investigating biological phenomena is promising because it captures one of the fundamental properties of living organisms, namely their inherent complexity. It allows biological entities to be analyzed as complex systems of interacting objects. The first and necessary step of such an analysis is building a precise model of the studied biological system. This model is expressed in the language of some branch of mathematics, for example differential equations. During the last two decades the theory of Petri nets has proved to be very well suited for building models of biological systems. The structure of these nets reflects the structure of interacting biological molecules and processes. Moreover, on the one hand, Petri nets have an intuitive graphical representation that is very helpful in understanding the structure of the system, and on the other hand, there are many mathematical methods and software tools supporting an analysis of the properties of the nets. In this paper a Petri net based model of the hemojuvelin–hepcidin axis involved in the maintenance of iron homeostasis in the human body is presented. An analysis of the model's properties, based mainly on T-invariants, has been carried out and some biological conclusions have been drawn. | Hemojuvelin–hepcidin axis modeled and analyzed using Petri nets
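The T-invariant analysis mentioned in the entry above amounts to finding non-negative integer vectors x with C·x = 0, where C is the place-by-transition incidence matrix. The following sketch uses a toy two-transition cycle rather than the hemojuvelin–hepcidin model and shows only the basic computation with sympy; enumerating minimal-support invariants of realistic nets normally relies on dedicated algorithms and tools.

```python
from sympy import Matrix, ilcm

# Incidence matrix C (rows = places, columns = transitions) of a toy net:
# transition t1 moves a token p1 -> p2, transition t2 moves it back p2 -> p1.
C = Matrix([[-1,  1],
            [ 1, -1]])

# T-invariants are non-negative integer solutions of C * x = 0.
# nullspace() returns a rational basis; scale each vector to integers and keep
# only those with a consistent sign.
invariants = []
for v in C.nullspace():
    denominators = [entry.q for entry in v]          # .q is the denominator
    scaled = v * ilcm(*denominators)
    if all(x >= 0 for x in scaled) or all(x <= 0 for x in scaled):
        invariants.append([abs(int(x)) for x in scaled])

print(invariants)  # [[1, 1]]: firing t1 and t2 once each reproduces the marking
```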
S1532046413001159 | Objective Information overload is a significant problem facing online clinical trial searchers. We present eTACTS, a novel interactive retrieval framework using common eligibility tags to dynamically filter clinical trial search results. Materials and methods eTACTS mines frequent eligibility tags from free-text clinical trial eligibility criteria and uses these tags for trial indexing. After an initial search, eTACTS presents to the user a tag cloud representing the current results. When the user selects a tag, eTACTS retains only those trials containing that tag in their eligibility criteria and generates a new cloud based on tag frequency and co-occurrences in the remaining trials. The user can then select a new tag or unselect a previous tag. The process iterates until a manageable number of trials is returned. We evaluated eTACTS in terms of filtering efficiency, diversity of the search results, and user eligibility to the filtered trials using both qualitative and quantitative methods. Results eTACTS (1) rapidly reduced search results from over a thousand trials to ten; (2) highlighted trials that are generally not top-ranked by conventional search engines; and (3) retrieved a greater number of suitable trials than existing search engines. Discussion eTACTS enables intuitive clinical trial searches by indexing eligibility criteria with effective tags. User evaluation was limited to one case study and a small group of evaluators due to the long duration of the experiment. Although a larger-scale evaluation could be conducted, this feasibility study demonstrated significant advantages of eTACTS over existing clinical trial search engines. Conclusion A dynamic eligibility tag cloud can potentially enhance state-of-the-art clinical trial search engines by allowing intuitive and efficient filtering of the search result space. | eTACTS: A method for dynamically filtering clinical trial search results |
S1532046413001160 | Background Over two decades of research has been conducted using mobile devices for health related behaviors yet many of these studies lack rigor. There are few evaluation frameworks for assessing the usability of mHealth, which is critical as the use of this technology proliferates. As the development of interventions using mobile technology increase, future work in this domain necessitates the use of a rigorous usability evaluation framework. Methods We used two exemplars to assess the appropriateness of the Health IT Usability Evaluation Model (Health-ITUEM) for evaluating the usability of mHealth technology. In the first exemplar, we conducted 6 focus group sessions to explore adolescents’ use of mobile technology for meeting their health Information needs. In the second exemplar, we conducted 4 focus group sessions following an Ecological Momentary Assessment study in which 60 adolescents were given a smartphone with pre-installed health-related applications (apps). Data analysis We coded the focus group data using the 9 concepts of the Health-ITUEM: Error prevention, Completeness, Memorability, Information needs, Flexibility/Customizability, Learnability, Performance speed, Competency, Other outcomes. To develop a finer granularity of analysis, the nine concepts were broken into positive, negative, and neutral codes. A total of 27 codes were created. Two raters (R1 and R2) initially coded all text and a third rater (R3) reconciled coding discordance between raters R1 and R2. Results A total of 133 codes were applied to Exemplar 1. In Exemplar 2 there were a total of 286 codes applied to 195 excerpts. Performance speed, Other outcomes, and Information needs were among the most frequently occurring codes. Conclusion Our two exemplars demonstrated the appropriateness and usefulness of the Health-ITUEM in evaluating mobile health technology. Further assessment of this framework with other study populations should consider whether Memorability and Error prevention are necessary to include when evaluating mHealth technology. | Assessment of the Health IT Usability Evaluation Model (Health-ITUEM) for evaluating mobile health (mHealth) technology |
S1532046413001172 | As hospital departments continue to introduce electronic whiteboards in real clinical settings, a range of human factors issues has emerged and it has become clear that there is a need for improved methods for designing and testing these systems. In this study, we employed a longitudinal and naturalistic method in the usability evaluation of an electronic whiteboard system. The goal of the evaluation was to explore the extent to which usability issues experienced by users change as they gain more experience with the system. In addition, the paper explores the use of a new approach to collection and analysis of continuous digital video recordings of naturalistic “live” user interactions. The method developed and employed in the study included recording the users’ interactions with the system during actual use using screen-capturing software and analyzing these recordings for usability issues. In this paper we describe and discuss both the method and the results of the evaluation. We found that the electronic whiteboard system contains system-related usability issues that did not change over time as the clinicians collectively gained more experience with the system. Furthermore, we also found user-related issues that seemed to change as the users gained more experience and we discuss the underlying reasons for these changes. We also found that the method used in the study has certain advantages over traditional usability evaluation methods, including the ability to collect and analyze live user data over time. However, challenges and drawbacks to using the method (including the time taken for analysis and logistical issues in doing live recordings) should be considered before utilizing a similar approach. In conclusion we summarize our findings and call for an increased focus on longitudinal and naturalistic evaluations of health information systems and encourage others to apply and refine the method utilized in this study. | Digital video analysis of health professionals’ interactions with an electronic whiteboard: A longitudinal, naturalistic study of changes to user interactions
S1532046413001184 | We address the TLINK track of the 2012 i2b2 challenge on temporal relations. Unlike other approaches to this task, we (1) employ sophisticated linguistic knowledge derived from semantic and discourse relations, rather than focus on morpho-syntactic knowledge; and (2) leverage a novel combination of rule-based and learning-based approaches, rather than rely solely on one or the other. Experiments show that our knowledge-rich, hybrid approach yields an F-score of 69.3, which is the best result reported to date on this dataset. | Classifying temporal relations in clinical data: A hybrid, knowledge-rich approach |
S1532046413001196 | Named entity recognition is a crucial component of biomedical natural language processing, enabling information extraction and ultimately reasoning over and knowledge discovery from text. Much progress has been made in the design of rule-based and supervised tools, but they are often genre and task dependent. As such, adapting them to different genres of text or identifying new types of entities requires major effort in re-annotation or rule development. In this paper, we propose an unsupervised approach to extracting named entities from biomedical text. We describe a stepwise solution to tackle the challenges of entity boundary detection and entity type classification without relying on any handcrafted rules, heuristics, or annotated data. A noun phrase chunker followed by a filter based on inverse document frequency extracts candidate entities from free text. Classification of candidate entities into categories of interest is carried out by leveraging principles from distributional semantics. Experiments show that our system, especially the entity classification step, yields competitive results on two popular biomedical datasets of clinical notes and biological literature, and outperforms a baseline dictionary match approach. Detailed error analysis provides a road map for future work. | Unsupervised biomedical named entity recognition: Experiments with clinical and biological texts |
S1532046413001202 | We describe a domain-independent methodology to extend SemRep coverage beyond the biomedical domain. SemRep, a natural language processing application originally designed for biomedical texts, uses the knowledge sources provided by the Unified Medical Language System (UMLS©). Ontological and terminological extensions to the system are needed in order to support other areas of knowledge. We extended SemRep’s application by developing a semantic representation of a previously unsupported domain. This was achieved by adapting well-known ontology engineering phases and integrating them with the UMLS knowledge sources on which SemRep crucially depends. While the process to extend SemRep coverage has been successfully applied in earlier projects, this paper presents in detail the step-wise approach we followed and the mechanisms implemented. A case study in the field of medical informatics illustrates how the ontology engineering phases have been adapted for optimal integration with the UMLS. We provide qualitative and quantitative results, which indicate the validity and usefulness of our methodology. | A methodology for extending domain coverage in SemRep |
S1532046413001214 | Advances in “omics” hardware and software technologies are bringing rare diseases research back from the sidelines. Whereas in the past these disorders were seldom considered relevant, in the era of whole genome sequencing the direct connections between rare phenotypes and a reduced set of genes are of vital relevance. This increased interest in rare genetic diseases research is pushing forward investment and effort towards the creation of software in the field, and leveraging the wealth of available life sciences data. Alas, most of these tools target one or more rare diseases, are focused solely on a single type of user, or are limited to the most relevant scientific breakthroughs for a specific niche. Furthermore, despite some high-quality efforts, the ever-growing number of resources, databases, services and applications is still a burden to this area. Hence, there is a clear interest in new strategies to deliver a holistic perspective over the entire rare genetic diseases research domain. This is the rationale behind Diseasecard: to build a lightweight knowledge base covering rare genetic diseases. Developed with the latest semantic web technologies, this portal delivers unified access to a comprehensive network for researchers, clinicians, patients and bioinformatics developers. With in-context access covering over 20 distinct heterogeneous resources, Diseasecard’s workspace provides access to the most relevant scientific knowledge regarding a given disorder, whether through direct common identifiers or through full-text search over all connected resources. In addition to its user-oriented features, Diseasecard’s semantic knowledge base is also available for direct querying, enabling everyone to include rare genetic diseases knowledge in new or existing information systems. Diseasecard is publicly available at http://bioinformatics.ua.pt/diseasecard/. | An innovative portal for rare genetic diseases research: The semantic Diseasecard
S1532046413001226 | Building classification models from clinical data using machine learning methods often relies on labeling of patient examples by human experts. The standard machine learning framework assumes the labels are assigned by a homogeneous process. However, in reality the labels may come from multiple experts and it may be difficult to obtain a set of class labels everybody agrees on; it is not uncommon that different experts have different subjective opinions on how a specific patient example should be classified. In this work we propose and study a new multi-expert learning framework that assumes the class labels are provided by multiple experts and that these experts may differ in their class label assessments. The framework explicitly models different sources of disagreements and lets us naturally combine labels from different human experts to obtain: (1) a consensus classification model representing the model the group of experts converges to, as well as (2) individual expert models. We test the proposed framework by building a model for the problem of detecting Heparin Induced Thrombocytopenia (HIT) where examples are labeled by three experts. We show that our framework is superior to multiple baselines (including the standard machine learning framework in which expert differences are ignored) and that our framework leads to both improved consensus and individual expert models. | Learning classification models from multiple experts
S1532046413001287 | Background Time is a measurable and critical resource that affects the quality of services provided in clinical practice. There is limited insight into the effects of time restrictions on clinicians’ cognitive processes with the electronic health record (EHR) in providing ambulatory care. Objective To understand the impact of time constraints on clinicians’ synthesis of text-based EHR clinical notes. Methods We used an established clinician cognitive framework based on a think-aloud protocol. We studied interns’ thought processes as they accomplished a set of four preformed ambulatory care clinical scenarios with and without time restrictions in a controlled setting. Results Interns most often synthesized details relevant to patients’ problems and treatment, regardless of whether or not the time available for task performance was restricted. In contrast to previous findings, the information clinicians synthesized next related most commonly to the chronology of clinical events for the unrestricted time observations and to investigative procedures for the time-restricted sessions. There was no significant difference in the mean number of omission errors and incorrect deductions when interns synthesized the EHR clinical notes with and without time restrictions (3.5±0.5 vs. 2.3±0.5, p =0.14). Conclusion Our results suggest that the incidence of errors during clinicians’ synthesis of EHR clinical notes is not increased with modest time restrictions, possibly due to effective adjustments of information processing strategies learned from the usual time-constrained nature of patient visits. Further research is required to investigate the effects of similar or more extreme time variations on cognitive processes employed with different levels of expertise, specialty, and with different care settings. | Effects of time constraints on clinician–computer interaction: A study on information synthesis from EHR clinical notes
S1532046413001299 | This paper addresses an important task of event and timex extraction from clinical narratives in context of the i2b2 2012 challenge. State-of-the-art approaches for event extraction use a multi-class classifier for finding the event types. However, such approaches consider each event in isolation. In this paper, we present a sentence-level inference strategy which enforces consistency constraints on attributes of those events which appear close to one another. Our approach is general and can be used for other tasks as well. We also design novel features like clinical descriptors (from medical ontologies) which encode a lot of useful information about the concepts. For timex extraction, we adapt a state-of-the-art system, HeidelTime, for use in clinical narratives and also develop several rules which complement HeidelTime. We also give a robust algorithm for date extraction. For the event extraction task, we achieved an overall F1 score of 0.71 for determining span of the events along with their attributes. For the timex extraction task, we achieved an F1 score of 0.79 for determining span of the temporal expressions. We present detailed error analysis of our system and also point out some factors which can help to improve its accuracy. | Extraction of events and temporal expressions from clinical narratives |
S1532046413001391 | Objectives Patients increasingly visit online health communities to get help with managing their health. The large scale of these online communities makes it impossible for the moderators to engage in all conversations; yet, some conversations need their expertise. Our work explores low-cost text classification methods for this new domain of determining whether a thread in an online health forum needs moderators’ help. Methods We employed a binary classifier on WebMD’s online diabetes community data. To train the classifier, we considered three feature types: (1) word unigrams, (2) sentiment analysis features, and (3) thread length. We applied feature selection methods based on χ2 statistics and undersampling to account for unbalanced data. We then performed a qualitative error analysis to investigate the appropriateness of the gold standard. Results Using sentiment analysis features, feature selection methods, and balanced training data increased the AUC value up to 0.75 and the F1-score up to 0.54 compared to the baseline of using word unigrams with no feature selection methods on unbalanced data (0.65 AUC and 0.40 F1-score). The error analysis uncovered additional reasons why moderators respond to patients’ posts. Discussion We showed how feature selection methods and balanced training data can improve the overall classification performance. We present implications of weighing precision versus recall for assisting moderators of online health communities. Our error analysis uncovered social, legal, and ethical issues around addressing community members’ needs. We also note challenges in producing a gold standard, and discuss potential solutions for addressing these challenges. Conclusion Social media environments provide popular venues in which patients gain health-related information. Our work contributes to understanding scalable solutions for providing moderators’ expertise in these large-scale, social media environments. | Text classification for assisting moderators in online health communities
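As a concrete (and heavily simplified) illustration of the pipeline described in the entry above, the sketch below combines χ2 feature selection with random undersampling of the majority class on a few invented forum posts. The classifier, features, and data are placeholders, not the study's actual WebMD corpus or model.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy, unbalanced example: does a thread need a moderator's reply?
threads = [
    "My blood sugar reading was 400 this morning and I feel dizzy, what should I do?",
    "Sharing my favorite low-carb breakfast recipes with everyone.",
    "Anyone else walk after dinner? It really helps me.",
    "I ran out of insulin and cannot reach my doctor, is it safe to skip doses?",
    "Loving this community, thanks for all the support!",
    "Just checking in to say hello to the group.",
]
needs_moderator = np.array([1, 0, 0, 1, 0, 0])

# Undersample the majority class so both classes are equally represented.
rng = np.random.default_rng(0)
pos = np.where(needs_moderator == 1)[0]
neg = rng.choice(np.where(needs_moderator == 0)[0], size=len(pos), replace=False)
keep = np.concatenate([pos, neg])

clf = make_pipeline(
    CountVectorizer(),
    SelectKBest(chi2, k=10),   # keep the 10 features with the highest chi-square score
    LogisticRegression(),
)
clf.fit([threads[i] for i in keep], needs_moderator[keep])
print(clf.predict(["I think I took too much insulin, my hands are shaking."]))
```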
S1532046413001408 | Clinical text, such as clinical trial eligibility criteria, is largely underused in state-of-the-art medical search engines due to difficulties of accurate parsing. This paper proposes a novel methodology to derive a semantic index for clinical eligibility documents based on a controlled vocabulary of frequent tags, which are automatically mined from the text. We applied this method to eligibility criteria on ClinicalTrials.gov and report that frequent tags (1) define an effective and efficient index of clinical trials and (2) are unlikely to grow radically when the repository increases. We proposed to apply the semantic index to filter clinical trial search results and we concluded that frequent tags reduce the result space more efficiently than an uncontrolled set of UMLS concepts. Overall, unsupervised mining of frequent tags from clinical text leads to an effective semantic index for the clinical eligibility documents and promotes their computational reuse. | Unsupervised mining of frequent tags for clinical eligibility text indexing |
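A minimal sketch of the frequent-tag idea in the entry above: count how many eligibility documents each candidate tag occurs in and keep only the tags above a document-frequency threshold as index terms. Here the candidate tags come from a small hand-made dictionary and the threshold is arbitrary; the actual system mines tags automatically from ClinicalTrials.gov text.

```python
from collections import Counter

# Toy eligibility criteria; candidate tags come from a tiny hand-made dictionary
# rather than automatic concept extraction.
criteria = [
    "Inclusion: adults aged 18-65 with type 2 diabetes. Exclusion: pregnancy, renal failure.",
    "Inclusion: type 2 diabetes, HbA1c > 7%. Exclusion: pregnancy, active cancer.",
    "Inclusion: adults with hypertension. Exclusion: renal failure, prior stroke.",
]
dictionary = ["type 2 diabetes", "pregnancy", "renal failure", "hypertension",
              "active cancer", "prior stroke", "adults"]

# Document frequency of each candidate tag across the trials.
df = Counter()
for text in criteria:
    lowered = text.lower()
    df.update({tag for tag in dictionary if tag in lowered})

# Keep tags frequent enough to be useful as index terms (threshold chosen arbitrarily here).
min_df = 2
frequent_tags = {tag for tag, count in df.items() if count >= min_df}

# Semantic index: trial id -> frequent tags found in its eligibility text.
index = {i: sorted(t for t in frequent_tags if t in criteria[i].lower())
         for i in range(len(criteria))}
print(frequent_tags)
print(index)
```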
S153204641300141X | Objective In medical information retrieval research, semantic resources have been mostly used by expanding the original query terms or estimating the concept importance weight. However, implicit term-dependency information contained in semantic concept terms has been overlooked or at least underused in most previous studies. In this study, we incorporate a semantic concept-based term-dependence feature into a formal retrieval model to improve its ranking performance. Design Standardized medical concept terms used by medical professionals were assumed to have implicit dependency within the same concept. We hypothesized that, by elaborately revising the ranking algorithms to favor documents that preserve those implicit dependencies, the ranking performance could be improved. The implicit dependence features are harvested from the original query using MetaMap. These semantic concept-based dependence features were incorporated into a semantic concept-enriched dependence model (SCDM). We designed four different variants of the model, with each variant having distinct characteristics in the feature formulation method. Measurements We performed leave-one-out cross validations on both a clinical document corpus (TREC Medical records track) and a medical literature corpus (OHSUMED), which are representative test collections in medical information retrieval research. Results Our semantic concept-enriched dependence model consistently outperformed other state-of-the-art retrieval methods. Analysis shows that the performance gain has occurred independently of the concept’s explicit importance in the query. Conclusion By capturing implicit knowledge with regard to the query term relationships and incorporating them into a ranking model, we could build a more robust and effective retrieval model, independent of the concept importance. | Semantic concept-enriched dependence model for medical information retrieval |
S1532046413001421 | Objectives This research is concerned with the study of a new social-network platform, which (1) provides people with disabilities of neurological origin, their relatives, health professionals, therapists, carers and institutions with an interoperable platform that supports standard indicators, (2) promotes knowledge democratization and user empowerment, and (3) allows making decisions with a more informed opinion. Methods A new social network, Circles of Health, has been designed, developed and tested by end-users. To allow monitoring the evolution of people’s health status and comparing it with other users and with their cohort, anonymized data of 2675 people from comprehensive and multidimensional medical evaluations, carried out yearly from 2006 to 2010, have been standardized to the International Classification of Functioning, Disability and Health, integrated into the corresponding medical health records and then used to automatically generate and graphically represent multidimensional indicators. These indicators have been integrated into Circles of Health’s social environment, which has been then evaluated via expert and user-experience analyses. Results Patients used Circles of Health to exchange bio-psycho-social information (medical and otherwise) about their everyday lives. Health professionals remarked that the use of color-coding in graphical representations is useful to quickly diagnose deficiencies, difficulties or barriers in rehabilitation. Most people with disabilities complained about the excessive amount of information and the difficulty in interpreting graphical representations. Conclusions Health professionals found Circles of Health useful to generate a more integrative understanding of health based on a comprehensive profile of individuals instead of being focused on patient’s diseases and injuries. People with disabilities found enriching personal knowledge with the experiences of other users helpful. The number of descriptors used at the same time in the graphical interface should be reduced in future versions of the social-network platform. | Circles of Health: Towards an advanced social network about disabilities of neurological origin |