FileName
Abstract
Title
S1361841514000358
Determining corresponding regions between an MRI and an X-ray mammogram is a clinically useful task that is challenging for radiologists due to the large deformation that the breast undergoes between the two image acquisitions. In this work we propose an intensity-based image registration framework, where the biomechanical transformation model parameters and the rigid-body transformation parameters are optimised simultaneously. Patient-specific biomechanical modelling of the breast derived from diagnostic, prone MRI has been previously used for this task. However, the high computational time associated with breast compression simulation using commercial packages did not allow the optimisation of both pose and FEM parameters in the same framework. We use a fast explicit Finite Element (FE) solver that runs on a graphics card, enabling the FEM-based transformation model to be fully integrated into the optimisation scheme. The transformation model has seven degrees of freedom, which include parameters for both the initial rigid-body pose of the breast prior to mammographic compression, and those of the biomechanical model. The framework was tested on ten clinical cases and the results were compared against an affine transformation model, previously proposed for the same task. The mean registration error was 11.6 ± 3.8 mm for the CC and 11 ± 5.4 mm for the MLO view registrations, indicating that this could be a useful clinical tool.
MRI to X-ray mammography intensity-based registration with simultaneous optimisation of pose and biomechanical transformation parameters
S1361841514000668
The Magnetic Resonance Imaging (MRI) signal can be made sensitive to functional parameters that provide information about tissues. In dynamic contrast enhanced (DCE) MRI these functional parameters are related to the microvasculature environment and the concentration changes that occur rapidly after the injection of a contrast agent. Typically DCE images are reconstructed individually and kinetic parameters are estimated by fitting a pharmacokinetic model to the time-enhancement response; these methods can be denoted as “indirect”. If undersampling is present to accelerate the acquisition, techniques such as kt-FOCUSS can be employed in the reconstruction step to avoid image degradation. This paper suggests a Bayesian inference framework to estimate functional parameters directly from the measurements at high temporal resolution. The current implementation estimates pharmacokinetic parameters (related to the extended Tofts model) from undersampled (k, t)-space DCE MRI. The proposed scheme is evaluated on a simulated abdominal DCE phantom and prostate DCE data, for fully sampled, 4 and 8-fold undersampled (k, t)-space data. Direct kinetic parameters demonstrate better correspondence (up to 70% higher mutual information) to the ground truth kinetic parameters (of the simulated abdominal DCE phantom) than the ones derived from the indirect methods. For the prostate DCE data, direct kinetic parameters depict the morphology of the tumour better. To examine the impact on cancer diagnosis, a peripheral zone prostate cancer diagnostic model was employed to calculate a probability map for each method.
Direct parametric reconstruction from undersampled (k, t)-space data in dynamic contrast enhanced MRI
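For readers unfamiliar with the kinetic model mentioned above, the following is a minimal Python sketch of the extended Tofts model that the direct reconstruction fits; the arterial input function, time grid and parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def extended_tofts(t, Ktrans, ve, vp, cp):
    """Tissue concentration under the extended Tofts model:
    Ct(t) = vp*Cp(t) + Ktrans * integral_0^t Cp(tau) * exp(-kep*(t - tau)) dtau,
    with kep = Ktrans / ve. Evaluated by discrete convolution on a uniform time grid."""
    kep = Ktrans / ve
    dt = t[1] - t[0]
    kernel = np.exp(-kep * t)                      # impulse response of the EES compartment
    return vp * cp + Ktrans * np.convolve(cp, kernel)[:len(t)] * dt

# Illustrative (assumed) arterial input function and parameter values -- not from the paper.
t = np.arange(0, 300, 1.0)                         # seconds
cp = 5.0 * (t / 60.0) * np.exp(-t / 60.0)          # toy gamma-variate arterial input function
ct = extended_tofts(t, Ktrans=0.25 / 60, ve=0.3, vp=0.05, cp=cp)
print(ct.max())
```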
S1361841514000929
We first propose a large deformation diffeomorphic metric mapping algorithm to align multiple b-value diffusion weighted imaging (mDWI) data, specifically acquired via hybrid diffusion imaging (HYDI). We denote this algorithm as LDDMM-HYDI. We then propose a Bayesian probabilistic model for estimating the white matter atlas from HYDIs. We adopt the work given in Hosseinbor et al. (2013) and represent the q-space diffusion signal with the Bessel Fourier orientation reconstruction (BFOR) signal basis. The BFOR framework provides the representation of mDWI in the q-space and the analytic form of the ensemble average propagator (EAP) reconstruction, and reduces memory requirements. In addition, since the BFOR signal basis is orthonormal, the L2 norm that quantifies the differences in the q-space signals of any two mDWI datasets can be easily computed as the sum of the squared differences in the BFOR expansion coefficients. In this work, we show that the reorientation of the q-space signal due to spatial transformation can be easily defined on the BFOR signal basis. We incorporate the BFOR signal basis into the LDDMM framework and derive the gradient descent algorithm for LDDMM-HYDI with explicit orientation optimization. Additionally, we extend the previous Bayesian atlas estimation framework for scalar-valued images to HYDIs and derive the expectation–maximization algorithm for solving the HYDI atlas estimation problem. Using real HYDI datasets, we show that the Bayesian model generates the white matter atlas with anatomical details. Moreover, we show that it is important to consider the variation of mDWI reorientation due to a small change in diffeomorphic transformation in the LDDMM-HYDI optimization and to incorporate the full information of HYDI for aligning mDWI. Finally, we show that the LDDMM-HYDI outperforms the LDDMM algorithm with diffusion tensors generated from each shell of HYDI.
Diffeomorphic metric mapping and probabilistic atlas generation of hybrid diffusion imaging based on BFOR signal basis
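A minimal sketch of the coefficient-space distance property noted above: because the BFOR basis is orthonormal, the L2 distance between two q-space signals reduces to the Euclidean distance between their expansion coefficient vectors. The coefficient values below are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
c1 = rng.standard_normal(50)   # BFOR expansion coefficients of signal 1 (placeholder values)
c2 = rng.standard_normal(50)   # BFOR expansion coefficients of signal 2 (placeholder values)

# With an orthonormal basis, ||S1 - S2||_L2^2 = sum_i (c1_i - c2_i)^2.
l2_squared = np.sum((c1 - c2) ** 2)
print(l2_squared)
```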
S1361841514001029
Echo Planar Imaging (EPI) is routinely used in diffusion and functional MR imaging due to its rapid acquisition time. However, the long readout period makes it prone to susceptibility artefacts, which result in geometric and intensity distortions of the acquired image. The use of these distorted images for neuronavigation hampers the effectiveness of image-guided surgery systems as critical white matter tracts and functionally eloquent brain areas cannot be accurately localised. In this paper, we present a novel method for correction of distortions arising from susceptibility artefacts in EPI images. The proposed method combines fieldmap and image registration based correction techniques in a unified framework. A phase unwrapping algorithm is presented that can efficiently compute the B0 magnetic field inhomogeneity map as well as the uncertainty associated with the estimated solution through the use of dynamic graph cuts. This information is fed to a subsequent image registration step to further refine the results in areas with high uncertainty. This work has been integrated into the surgical workflow at the National Hospital for Neurology and Neurosurgery and its effectiveness in correcting for geometric distortions due to susceptibility artefacts is demonstrated on EPI images acquired with an interventional MRI scanner during neurosurgery.
Susceptibility artefact correction using dynamic graph cuts: Application to neurosurgery
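The correction above builds on a B0 fieldmap estimated from phase images; shown below is a hedged sketch of the standard dual-echo relation (field in Hz equals the unwrapped phase difference divided by 2π·ΔTE), using scikit-image phase unwrapping rather than the paper's dynamic graph-cut algorithm, and synthetic data.

```python
import numpy as np
from skimage.restoration import unwrap_phase

delta_te = 2.46e-3                                   # echo-time difference in seconds (assumed)

# Synthetic wrapped phase difference between two echoes (placeholder data, smooth field).
y, x = np.mgrid[-64:64, -64:64]
true_b0_hz = 50.0 * np.exp(-(x**2 + y**2) / (2 * 30.0**2))
wrapped = np.angle(np.exp(1j * 2 * np.pi * true_b0_hz * delta_te))

# Unwrap the phase difference, then convert to a field inhomogeneity map in Hz.
unwrapped = unwrap_phase(wrapped)
b0_map_hz = unwrapped / (2 * np.pi * delta_te)
print(float(np.abs(b0_map_hz - true_b0_hz).max()))   # should be small for this smooth field
```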
S1361841514001091
Contrast agent enhanced magnetic resonance (MR) perfusion imaging provides an early, non-invasive indication of defects in the coronary circulation. However, the large variation of contrast agent properties, physiological state and imaging protocols means that optimisation of image acquisition is difficult to achieve. This situation motivates the development of a computational framework that, in turn, enables the efficient mapping of this parameter space to provide valuable information for optimisation of perfusion imaging in the clinical context. For this purpose, a single-compartment porous medium model of capillary blood flow is developed and coupled with a scalar transport model to characterise the behaviour of both blood-pool contrast agents and freely-diffusive contrast agents, the latter able to diffuse through the capillary wall into the extra-cellular space. A parameter space study is performed on the nondimensionalised equations using a 2D model for both healthy and diseased myocardium, examining the sensitivity of system behaviour to Peclet number, Damköhler number (Da), diffusivity ratio and fluid porosity. Assuming a linear MR signal response model, sample concentration time series data are calculated, and the sensitivity of clinically-relevant properties of these signals to the model parameters is quantified. Both upslope and peak values display significant non-monotonic behaviour with regard to the Damköhler number, with these properties showing a high degree of sensitivity in the parameter range relevant to contrast agents currently in use. However, the results suggest that signal upslope is the more robust and discerning metric for perfusion quantification, in particular for correlating with perfusion defect size. Finally, the results were examined in the context of nonlinear signal response, flow quantification via Fermi deconvolution and perfusion reserve index, which demonstrated that there is no single best set of contrast agent parameters; instead, the contrast agents should be tailored to the specific imaging protocol and post-processing method to be used.
A spatially-distributed computational model to quantify behaviour of contrast agents in MR perfusion imaging
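The parameter-space study above is framed in terms of nondimensional numbers; as a hedged illustration, the common textbook definitions of the Peclet and Damköhler numbers are shown below. The paper's own nondimensionalisation may group the quantities differently, and all magnitudes are assumed.

```python
# Common textbook definitions, used here as assumptions; the paper's own
# nondimensionalisation may differ in how the scales are chosen.
def peclet(velocity, length, diffusivity):
    """Pe: ratio of advective to diffusive transport of the contrast agent."""
    return velocity * length / diffusivity

def damkohler(exchange_rate, length, velocity):
    """Da: ratio of transcapillary exchange rate to advective transport rate."""
    return exchange_rate * length / velocity

# Illustrative magnitudes only (not taken from the paper).
print(peclet(1e-3, 1e-3, 1e-9), damkohler(0.5, 1e-3, 1e-3))
```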
S1361841514001819
Segmentation of anatomical structures, from modalities like computed tomography (CT), magnetic resonance imaging (MRI) and ultrasound, is a key enabling technology for medical applications such as diagnostics, planning and guidance. More efficient implementations are necessary, as most segmentation methods are computationally expensive, and the amount of medical imaging data is growing. The increased programmability of graphics processing units (GPUs) in recent years has enabled their use in several areas. GPUs can solve large data parallel problems at a higher speed than the traditional CPU, while being more affordable and energy efficient than distributed systems. Furthermore, using a GPU enables concurrent visualization and interactive segmentation, where the user can help the algorithm to achieve a satisfactory result. This review investigates the use of GPUs to accelerate medical image segmentation methods. A set of criteria for efficient use of GPUs is defined and each segmentation method is rated accordingly. In addition, references to relevant GPU implementations and insight into GPU optimization are provided and discussed. The review concludes that most segmentation methods may benefit from GPU processing due to the methods’ data parallel structure and high thread count. However, factors such as synchronization, branch divergence and memory usage can limit the speedup.
Medical image segmentation on GPUs – A comprehensive review
S136184151400187X
We propose a framework for the robust and fully-automatic segmentation of magnetic resonance (MR) brain images called “Multi-Atlas Label Propagation with Expectation–Maximisation based refinement” (MALP-EM). The presented approach is based on a robust registration approach (MAPER), highly performant label fusion (joint label fusion) and intensity-based label refinement using EM. We further adapt this framework to be applicable for the segmentation of brain images with gross changes in anatomy. We propose to account for consistent registration errors by relaxing anatomical priors obtained by multi-atlas propagation and by using a weighting scheme to locally combine anatomical atlas priors and intensity-refined posterior probabilities. The method is evaluated on a benchmark dataset used in a recent MICCAI segmentation challenge. In this context we show that MALP-EM is competitive for the segmentation of MR brain scans of healthy adults when compared to state-of-the-art automatic labelling techniques. To demonstrate the versatility of the proposed approach, we employed MALP-EM to segment 125 MR brain images into 134 regions from subjects who had sustained traumatic brain injury (TBI). We employ a protocol to assess segmentation quality if no manual reference labels are available. Based on this protocol, three independent, blinded raters confirmed on 13 MR brain scans with pathology that MALP-EM is superior to established label fusion techniques. We visually confirm the robustness of our segmentation approach on the full cohort and investigate the potential of derived symmetry-based imaging biomarkers that correlate with and predict clinically relevant variables in TBI such as the Marshall Classification (MC) or Glasgow Outcome Score (GOS). Specifically, we show that we are able to stratify TBI patients with favourable outcomes from non-favourable outcomes with 64.7% accuracy using acute-phase MR images and 66.8% accuracy using follow-up MR images. Furthermore, we are able to differentiate subjects with the presence of a mass lesion or midline shift from those with diffuse brain injury with 76.0% accuracy. The thalamus, putamen, pallidum and hippocampus are particularly affected. Their involvement predicts TBI disease progression.
Robust whole-brain segmentation: Application to traumatic brain injury
S1361841514001881
Intensity variations in image texture can provide powerful quantitative information about physical properties of biological tissue. However, tissue patterns can vary according to the utilized imaging system and are intrinsically correlated to the scale of analysis. In the case of ultrasound, the Nakagami distribution is a general model of the ultrasonic backscattering envelope under various scattering conditions and densities where it can be employed for characterizing image texture, but the subtle intra-heterogeneities within a given mass are difficult to capture via this model as it works at a single spatial scale. This paper proposes a locally adaptive 3D multi-resolution Nakagami-based fractal feature descriptor that extends Nakagami-based texture analysis to accommodate subtle speckle spatial frequency tissue intensity variability in volumetric scans. Local textural fractal descriptors – which are invariant to affine intensity changes – are extracted from volumetric patches at different spatial resolutions from voxel lattice-based generated shape and scale Nakagami parameters. Using ultrasound radio-frequency datasets, we found that after applying an adaptive fractal decomposition label transfer approach on top of the generated Nakagami voxels, tissue characterization results were superior to the state of the art. Experimental results on real 3D ultrasonic pre-clinical and clinical datasets suggest that describing tumor intra-heterogeneity via this descriptor may facilitate improved prediction of therapy response and disease characterization.
Quantification of ultrasonic texture intra-heterogeneity via volumetric stochastic modeling for tissue characterization
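A small sketch of the moment-based Nakagami parameter estimation that descriptors of this kind build on; the voxel-lattice generation, multi-resolution fractal analysis and label transfer of the paper are not reproduced, and the envelope data below are synthetic.

```python
import numpy as np

def nakagami_moments(envelope):
    """Moment-based Nakagami parameter estimates for an ultrasound envelope patch:
    scale  omega = E[x^2]
    shape  m     = (E[x^2])^2 / Var(x^2)   (inverse normalised variance estimator)."""
    x2 = np.asarray(envelope, dtype=float) ** 2
    omega = x2.mean()
    m = omega ** 2 / x2.var()
    return m, omega

# Synthetic Rayleigh-distributed envelope (a special case of Nakagami with m = 1).
rng = np.random.default_rng(1)
patch = rng.rayleigh(scale=1.0, size=4096)
print(nakagami_moments(patch))   # shape estimate should be close to 1
```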
S136184151400190X
We propose an automated framework for predicting gestational age (GA) and neurodevelopmental maturation of a fetus based on 3D ultrasound (US) brain image appearance. Our method capitalizes on age-related sonographic image patterns in conjunction with clinical measurements to develop, for the first time, a predictive age model which improves on the GA-prediction potential of US images. The framework benefits from a manifold surface representation of the fetal head which delineates the inner skull boundary and serves as a common coordinate system based on cranial position. This allows for fast and efficient sampling of anatomically-corresponding brain regions to achieve like-for-like structural comparison of different developmental stages. We develop bespoke features which capture neurosonographic patterns in 3D images, and using a regression forest classifier, we characterize structural brain development both spatially and temporally to capture the natural variation existing in a healthy population (N = 447) over an age range of active brain maturation (18–34 weeks). On a routine clinical dataset (N = 187) our age prediction results strongly correlate with true GA (r = 0.98, accurate within ±6.10 days), confirming the link between maturational progression and neurosonographic activity observable across gestation. Our model also outperforms current clinical methods by ±4.57 days in the third trimester—a period complicated by biological variations in the fetal population. Through feature selection, the model successfully identified the most age-discriminating anatomies over this age range as being the Sylvian fissure, cingulate, and callosal sulci.
Learning-based prediction of gestational age from ultrasound images of the fetal brain
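As a hedged illustration of the regression-forest idea described above, the sketch below trains a generic random forest regressor on placeholder feature vectors; the paper's bespoke neurosonographic features and manifold-based sampling are not reproduced.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)
n_subjects, n_features = 400, 64

# Placeholder image-derived feature vectors and gestational ages (weeks); purely synthetic.
features = rng.standard_normal((n_subjects, n_features))
ga_weeks = 18 + 16 * rng.random(n_subjects)
ga_weeks += 0.5 * features[:, 0]          # inject a weak age-related signal into one feature

forest = RandomForestRegressor(n_estimators=200, random_state=0)
forest.fit(features[:300], ga_weeks[:300])
predicted = forest.predict(features[300:])
print(np.corrcoef(predicted, ga_weeks[300:])[0, 1])
```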
S1361841515000316
We present new pulmonary nodule segmentation algorithms for computed tomography (CT). These include a fully-automated (FA) system, a semi-automated (SA) system, and a hybrid system. Like most traditional systems, the new FA system requires only a single user-supplied cue point. On the other hand, the SA system represents a new algorithm class requiring 8 user-supplied control points. This does increase the burden on the user, but we show that the resulting system is highly robust and can handle a variety of challenging cases. The proposed hybrid system starts with the FA system. If improved segmentation results are needed, the SA system is then deployed. The FA segmentation engine has 2 free parameters, and the SA system has 3. These parameters are adaptively determined for each nodule in a search process guided by a regression neural network (RNN). The RNN uses a number of features computed for each candidate segmentation. We train and test our systems using the new Lung Image Database Consortium and Image Database Resource Initiative (LIDC–IDRI) data. To the best of our knowledge, this is one of the first nodule-specific performance benchmarks using the new LIDC–IDRI dataset. We also compare the performance of the proposed methods with several previously reported results on the same data used by those other methods. Our results suggest that the proposed FA system improves upon the state-of-the-art, and the SA system offers a considerable boost over the FA system.
Segmentation of pulmonary nodules in computed tomography using a regression neural network approach and its application to the Lung Image Database Consortium and Image Database Resource Initiative dataset
S1361841515000651
An automated segmentation method is presented for multi-organ segmentation in abdominal CT images. Dictionary learning and sparse coding techniques are used in the proposed method to generate target specific priors for segmentation. The method simultaneously learns dictionaries which have reconstructive power and classifiers which have discriminative ability from a set of selected atlases. Based on the learnt dictionaries and classifiers, probabilistic atlases are then generated to provide priors for the segmentation of unseen target images. The final segmentation is obtained by applying a post-processing step based on a graph-cuts method. In addition, this paper proposes a voxel-wise local atlas selection strategy to deal with high inter-subject variation in abdominal CT images. The segmentation performance of the proposed method with different atlas selection strategies is also compared. Our proposed method has been evaluated on a database of 150 abdominal CT images and achieves a promising segmentation performance with Dice overlap values of 94.9%, 93.6%, 71.1%, and 92.5% for liver, kidneys, pancreas, and spleen, respectively.
Discriminative dictionary learning for abdominal multi-organ segmentation
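A hedged sketch of the dictionary-learning and sparse-coding step described above, using scikit-learn; the discriminative classifiers, probabilistic atlas generation and graph-cuts post-processing are omitted, and the patch data are synthetic.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(3)
patches = rng.standard_normal((1000, 125))    # e.g. flattened 5x5x5 CT patches (synthetic)

# Learn a reconstructive dictionary and obtain sparse codes for new patches.
dico = MiniBatchDictionaryLearning(n_components=64, alpha=1.0, random_state=0)
codes_train = dico.fit_transform(patches)
codes_new = dico.transform(rng.standard_normal((10, 125)))
print(codes_new.shape, (codes_new != 0).mean())   # sparse coefficient matrix
```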
S1361841515000742
The electrical activation of the heart is a complex physiological process that is essential for the understanding of several cardiac dysfunctions, such as ventricular tachycardia (VT). Nowadays, patient-specific activation times on ventricular chambers can be estimated from electro-anatomical maps, providing crucial information to clinicians for guiding cardiac radio-frequency ablation treatment. However, some relevant electrical pathways such as those of the Purkinje system are very difficult to interpret from these maps due to sparsity of data and the limited spatial resolution of the system. We present here a novel method to estimate these fast electrical pathways from the local activation time (LAT) maps obtained from electro-anatomical maps. The location of Purkinje-myocardial junctions (PMJs) is estimated considering them as critical points of a distance map defined by the activation maps, and then minimal cost geodesic paths are computed on the ventricular surface between the detected junctions. Experiments to validate the proposed method have been carried out on simplified and realistic simulated data, showing good performance on recovering the main characteristics of simulated Purkinje networks (e.g. PMJs). A feasibility study with real cases of fascicular VT was also performed, showing promising results.
Estimation of Purkinje trees from electro-anatomical mapping of the left ventricle using minimal cost geodesics
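Minimal cost geodesics on a surface can be approximated by shortest paths along mesh edges; the sketch below uses Dijkstra's algorithm from SciPy on a toy mesh. This is only a graph-based approximation under assumed edge costs, not the paper's exact formulation.

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import dijkstra

# Toy surface mesh: vertices and triangles (placeholder geometry).
verts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0], [0.5, 0.5, 1]], float)
tris = np.array([[0, 1, 4], [1, 2, 4], [2, 3, 4], [3, 0, 4]])

# Build an edge-weighted graph whose weights are Euclidean edge lengths.
edges = np.vstack([tris[:, [0, 1]], tris[:, [1, 2]], tris[:, [2, 0]]])
weights = np.linalg.norm(verts[edges[:, 0]] - verts[edges[:, 1]], axis=1)
n = len(verts)
graph = coo_matrix((weights, (edges[:, 0], edges[:, 1])), shape=(n, n))

# Approximate geodesic distance from vertex 0 (e.g. a detected PMJ) to all other vertices.
dist, pred = dijkstra(graph, directed=False, indices=0, return_predecessors=True)
print(dist)
```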
S1361841515001012
Medical ultrasound (US) image segmentation and quantification can be challenging due to signal dropouts, missing boundaries, and presence of speckle, which gives images of similar objects quite different appearance. Typically, purely intensity-based methods do not lead to a good segmentation of the structures of interest. Prior work has shown that local phase and feature asymmetry, derived from the monogenic signal, extract structural information from US images. This paper proposes a new US segmentation approach based on the fuzzy connectedness framework. The approach uses local phase and feature asymmetry to define a novel affinity function, which drives the segmentation algorithm, incorporates a shape-based object completion step, and regularises the result by mean curvature flow. To appreciate the accuracy and robustness of the methodology across clinical data of varying appearance and quality, a novel entropy-based quantitative image quality assessment of the different regions of interest is introduced. The new method is applied to 81 US images of the fetal arm acquired at multiple gestational ages, as a means to define a new automated image-based biomarker of fetal nutrition. Quantitative and qualitative evaluation shows that the segmentation method is comparable to manual delineations and robust across image qualities that are typical of clinical practice.
Feature-based fuzzy connectedness segmentation of ultrasound images with an object completion step
S1361841515001267
We present a framework for simulating cross-sectional or longitudinal biomarker data sets from neurodegenerative disease cohorts that reflect the temporal evolution of the disease and population diversity. The simulation system provides a mechanism for evaluating the performance of data-driven models of disease progression, which bring together biomarker measurements from large cross-sectional (or short term longitudinal) cohorts to recover the average population-wide dynamics. We demonstrate the use of the simulation framework in two different ways. First, to evaluate the performance of the Event Based Model (EBM) for recovering biomarker abnormality orderings from cross-sectional datasets. Second, to evaluate the performance of a differential equation model (DEM) for recovering biomarker abnormality trajectories from short-term longitudinal datasets. Results highlight several important considerations when applying data-driven models to sporadic disease datasets as well as key areas for future work. The system reveals several important insights into the behaviour of each model. For example, the EBM is robust to noise on the underlying biomarker trajectory parameters, under-sampling of the underlying disease time course and outliers who follow alternative event sequences. However, the EBM is sensitive to accurate estimation of the distribution of normal and abnormal biomarker measurements. In contrast, we find that the DEM is sensitive to noise on the biomarker trajectory parameters, resulting in an overestimation of the time taken for biomarker trajectories to go from normal to abnormal. This overestimate is approximately twice as long as the actual transition time of the trajectory for the expected noise level in neurodegenerative disease datasets. This simulation framework is equally applicable to a range of other models and longitudinal analysis techniques.
A simulation system for biomarker evolution in neurodegenerative disease
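A hedged sketch, in the spirit of the simulation framework above, of generating cross-sectional biomarker data as noisy sigmoid functions of an unobserved disease time; all trajectory parameters and noise levels are placeholders rather than values from the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

def sigmoid_trajectory(t, onset, slope, floor=0.0, ceiling=1.0):
    """Biomarker value as a sigmoid function of disease time (normal -> abnormal)."""
    return floor + (ceiling - floor) / (1.0 + np.exp(-slope * (t - onset)))

n_subjects, n_biomarkers = 200, 4
disease_time = rng.uniform(-10, 10, n_subjects)          # unobserved disease stage per subject
onsets = np.array([-5.0, -1.0, 2.0, 6.0])                # staggered abnormality onsets
slopes = np.array([0.8, 1.0, 0.6, 0.9])

data = np.stack([
    sigmoid_trajectory(disease_time, o, s) + 0.05 * rng.standard_normal(n_subjects)
    for o, s in zip(onsets, slopes)
], axis=1)
print(data.shape)   # (subjects, biomarkers) cross-sectional dataset
```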
S1361841515001310
This paper introduces a novel method for inferring spatially varying regularisation in non-linear registration. This is achieved through full Bayesian inference on a probabilistic registration model, where the prior on the transformation parameters is parameterised as a weighted mixture of spatially localised components. Such an approach has the advantage of allowing the registration to be more flexibly driven by the data than a traditional globally defined regularisation penalty, such as bending energy. The proposed method adaptively determines the influence of the prior in a local region. The strength of the prior may be reduced in areas where the data better support deformations, or can enforce a stronger constraint in less informative areas. Consequently, the use of such a spatially adaptive prior may reduce unwanted impacts of regularisation on the inferred transformation. This is especially important for applications where the deformation field itself is of interest, such as tensor based morphometry. The proposed approach is demonstrated using synthetic images, and with application to tensor based morphometry analysis of subjects with Alzheimer’s disease and healthy controls. The results indicate that using the proposed spatially adaptive prior leads to sparser deformations, which provide better localisation of regional volume change. Additionally, the proposed regularisation model leads to more data driven and localised maps of registration uncertainty. This paper also demonstrates for the first time the use of Bayesian model comparison for selecting different types of regularisation.
Probabilistic non-linear registration with spatially adaptive regularisation
S1361841515001371
Pressure difference is an accepted clinical biomarker for cardiovascular disease conditions such as aortic coarctation. Currently, measurements of pressure differences in the clinic rely on invasive techniques (catheterization), prompting development of non-invasive estimates based on blood flow. In this work, we propose a non-invasive estimation procedure deriving pressure difference from the work-energy equation for a Newtonian fluid. Spatial and temporal convergence is demonstrated on in silico Phase Contrast Magnetic Resonance Image (PC-MRI) phantoms with steady and transient flow fields. The method is also tested on an image dataset generated in silico from a 3D patient-specific Computational Fluid Dynamics (CFD) simulation and finally evaluated on a cohort of 9 subjects. The performance is compared to existing approaches based on steady and unsteady Bernoulli formulations as well as the pressure Poisson equation. The new technique shows good accuracy, robustness to noise, and robustness to the image segmentation process, illustrating the potential of this approach for non-invasive pressure difference estimation.
Non-invasive pressure difference estimation from PC-MRI using the work-energy equation
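For context on the Bernoulli baselines mentioned above, the sketch below computes the steady Bernoulli pressure drop and the clinical simplified Bernoulli estimate; velocities and constants are assumed values, and this is not the work-energy method itself.

```python
RHO_BLOOD = 1060.0                 # kg/m^3 (assumed blood density)
MMHG_PER_PA = 1.0 / 133.322

def steady_bernoulli(v1, v2):
    """Steady Bernoulli pressure drop (Pa) between two points on a streamline,
    neglecting gravity and viscous losses: dp = 0.5 * rho * (v2^2 - v1^2)."""
    return 0.5 * RHO_BLOOD * (v2**2 - v1**2)

def simplified_bernoulli_mmhg(v_peak):
    """Clinical simplified Bernoulli estimate: dP [mmHg] ~= 4 * v^2 with v in m/s."""
    return 4.0 * v_peak**2

# Illustrative velocities (m/s) proximal and distal to a coarctation (assumed values).
print(steady_bernoulli(1.0, 3.0) * MMHG_PER_PA)   # ~31.8 mmHg
print(simplified_bernoulli_mmhg(3.0))             # 36 mmHg
```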
S1361841515001383
Atlas-based analysis methods rely on the morphological similarity between the atlas and target images, and on the availability of labelled images. Problems can arise when the deformations introduced by pathologies affect the similarity between the atlas and a patient’s image. The aim of this work is to exploit the morphological dissimilarities between atlas databases and pathological images to diagnose the underlying clinical condition, while avoiding the dependence on labelled images. We propose a voxelwise atlas rating approach (VoxAR) relying on multiple atlas databases, each representing a particular condition. Using a local image similarity measure to assess the morphological similarity between the atlas and target images, a rating map displaying for each voxel the condition of the atlases most similar to the target is defined. The final diagnosis is established by assigning the condition of the database most represented in the rating map. We applied the method to diagnose three different conditions associated with dextro-transposition of the great arteries, a congenital heart disease. The proposed approach outperforms other state-of-the-art methods using annotated images, with an accuracy of 97.3% when evaluated on a set of 60 whole heart MR images containing healthy and pathological subjects using cross validation.
Voxelwise atlas rating for computer assisted diagnosis: Application to congenital heart diseases of the great arteries
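A toy version of the voxelwise rating idea: for each voxel, the most similar atlas (judged here by a local sum of squared differences, standing in for the paper's similarity measure) votes its condition, and the final diagnosis is the most represented condition. Atlases, conditions and images are synthetic placeholders.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_ssd(a, b, size=5):
    """Local sum of squared differences in a cubic neighbourhood (lower = more similar)."""
    return uniform_filter((a - b) ** 2, size=size)

rng = np.random.default_rng(5)
target = rng.standard_normal((32, 32, 32))
# Placeholder registered atlases: 3 for condition 0, 3 for condition 1.
atlases = rng.standard_normal((6, 32, 32, 32)) * 0.5 + target     # all loosely similar to target
conditions = np.array([0, 0, 0, 1, 1, 1])

similarity = np.stack([-local_ssd(target, a) for a in atlases])   # higher = more similar
best_atlas = similarity.argmax(axis=0)                            # per-voxel most similar atlas
rating_map = conditions[best_atlas]                               # per-voxel condition vote
diagnosis = np.bincount(rating_map.ravel()).argmax()              # most represented condition
print(diagnosis)
```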
S1361841515001486
Statistical shape models of soft-tissue organ motion provide a useful means of imposing physical constraints on the displacements allowed during non-rigid image registration, and can be especially useful when registering sparse and/or noisy image data. In this paper, we describe a method for generating a subject-specific statistical shape model that captures prostate deformation for a new subject given independent population data on organ shape and deformation obtained from magnetic resonance (MR) images and biomechanical modelling of tissue deformation due to transrectal ultrasound (TRUS) probe pressure. The characteristics of the models generated using this method are compared with corresponding models based on training data generated directly from subject-specific biomechanical simulations using a leave-one-out cross validation. The accuracy of registering MR and TRUS images of the prostate using the new prostate models was then estimated and compared with published results obtained in our earlier research. No statistically significant difference was found between the specificity and generalisation ability of prostate shape models generated using the two approaches. Furthermore, no statistically significant difference was found between the landmark-based target registration errors (TREs) following registration using different models, with a median (95th percentile) TRE of 2.40 (6.19) mm versus 2.42 (7.15) mm using models generated with the new method versus a model built directly from patient-specific biomechanical simulation data, respectively (N = 800; 8 patient datasets; 100 registrations per patient). We conclude that the proposed method provides a computationally efficient and clinically practical alternative to existing complex methods for modelling and predicting subject-specific prostate deformation, such as biomechanical simulations, for new subjects. The method may also prove useful for generating shape models for other organs, for example, where only limited shape training data from dynamic imaging is available.
Population-based prediction of subject-specific prostate deformation for MR-to-ultrasound image registration
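A minimal sketch of the PCA-based statistical shape model machinery that approaches like this build on; the population and biomechanical components of the paper are not reproduced, and the training shapes below are random placeholders.

```python
import numpy as np

def build_ssm(shapes):
    """Build a PCA-based statistical shape model from training shapes.
    shapes: (n_samples, 3 * n_points) matrix of concatenated point coordinates."""
    mean_shape = shapes.mean(axis=0)
    centred = shapes - mean_shape
    # SVD of the centred data gives the principal modes of shape variation.
    _, singular_values, modes = np.linalg.svd(centred, full_matrices=False)
    variances = singular_values**2 / (len(shapes) - 1)
    return mean_shape, modes, variances

def sample_shape(mean_shape, modes, variances, weights):
    """Instantiate a shape from standardised mode weights b: x = mean + sum_k b_k*sqrt(l_k)*phi_k."""
    return mean_shape + (weights * np.sqrt(variances[:len(weights)])) @ modes[:len(weights)]

# Placeholder training data: 50 shapes of 100 points each (purely synthetic).
rng = np.random.default_rng(6)
shapes = rng.standard_normal((50, 300))
mean_shape, modes, variances = build_ssm(shapes)
print(sample_shape(mean_shape, modes, variances, np.array([1.0, -0.5, 0.2])).shape)
```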
S1361841516000049
Segregating the human cortex into distinct areas based on structural connectivity criteria is of widespread interest in neuroscience. This paper presents a groupwise connectivity-based parcellation framework for the whole cortical surface using a new high quality diffusion dataset of 79 healthy subjects. Our approach proceeds gyrus by gyrus to parcellate the whole human cortex. The main originality of the method is to compress, for each gyrus, the connectivity profiles used for the clustering without any anatomical prior information. This step takes into account the interindividual cortical and connectivity variability. To this end, we consider intersubject high density connectivity areas extracted using a surface-based watershed algorithm. An extensive validation study has led to a fully automatic pipeline which is robust to variations in data preprocessing (tracking type, cortical mesh characteristics and boundaries of initial gyri), data characteristics (including number of subjects), and the main algorithmic parameters. A remarkable reproducibility is achieved in parcellation results for the whole cortex, leading to clear and stable cortical patterns. This reproducibility has been tested across non-overlapping subgroups and the validation is presented mainly on the pre- and postcentral gyri.
Groupwise connectivity-based parcellation of the whole human cortical surface using watershed-driven dimension reduction
S1361841516000050
Studies have demonstrated the feasibility of late Gadolinium enhancement (LGE) cardiovascular magnetic resonance (CMR) imaging for guiding the management of patients with sequelae of myocardial infarction, such as ventricular tachycardia and heart failure. Clinical implementation of these developments necessitates a reproducible and reliable segmentation of the infarcted regions. It is challenging to compare new algorithms for infarct segmentation in the left ventricle (LV) with existing algorithms. Benchmarking datasets with evaluation strategies are much needed to facilitate comparison. This manuscript presents a benchmarking evaluation framework for future algorithms that segment infarct from LGE CMR of the LV. The image database consists of 30 LGE CMR images of both humans and pigs that were acquired from two separate imaging centres. A consensus ground truth was obtained for all data using maximum likelihood estimation. Six widely-used fixed-thresholding methods and five recently developed algorithms are tested on the benchmarking framework. Results demonstrate that the algorithms have better overlap with the consensus ground truth than most of the n-SD fixed-thresholding methods, with the exception of the Full-Width-at-Half-Maximum (FWHM) fixed-thresholding method. Some of the pitfalls of fixed thresholding methods are demonstrated in this work. The benchmarking evaluation framework, which is a contribution of this work, can be used to test and benchmark future algorithms that detect and quantify infarct in LGE CMR images of the LV. The datasets, ground truth and evaluation code have been made publicly available through the website: https://www.cardiacatlas.org/web/guest/challenges.
Evaluation of state-of-the-art segmentation algorithms for left ventricle infarct from late Gadolinium enhancement MR images
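The fixed-thresholding baselines referenced above are simple to state; the sketch below shows common formulations of the n-SD and FWHM thresholds on synthetic LGE intensities. The benchmark's exact definitions and region selections may differ.

```python
import numpy as np

def nsd_threshold(remote_myocardium_intensities, n):
    """n-SD fixed threshold: mean of remote (healthy) myocardium plus n standard deviations."""
    remote = np.asarray(remote_myocardium_intensities, float)
    return remote.mean() + n * remote.std()

def fwhm_threshold(hyperenhanced_region_intensities):
    """Full-Width-at-Half-Maximum threshold: half of the maximum intensity
    within a user-identified hyperenhanced (scar) region."""
    return 0.5 * np.max(hyperenhanced_region_intensities)

# Illustrative LGE intensity samples (arbitrary units, synthetic).
rng = np.random.default_rng(7)
remote = rng.normal(100, 10, 500)            # remote myocardium
scar_core = rng.normal(300, 20, 50)          # bright infarct core
myocardium = np.concatenate([remote, scar_core])

for n in (2, 3, 5, 6):
    print(f"{n}-SD threshold:", nsd_threshold(remote, n))
print("FWHM threshold:", fwhm_threshold(scar_core))
print("infarct fraction (FWHM):", (myocardium > fwhm_threshold(scar_core)).mean())
```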
S1361841516000165
In this work, various wavelet based methods like the discrete wavelet transform, the dual-tree complex wavelet transform, the Gabor wavelet transform, curvelets, contourlets and shearlets are applied for the automated classification of colonic polyps. The methods are tested on 8 HD-endoscopic image databases, where each database is acquired using different imaging modalities (Pentax’s i-Scan technology combined with or without staining the mucosa), 2 NBI high-magnification databases and one database with chromoscopy high-magnification images. To evaluate the suitability of the wavelet based methods with respect to the classification of colonic polyps, the classification performances of 3 wavelet transforms and the more recent curvelets, contourlets and shearlets are compared using a common framework. Wavelet transforms have already been applied frequently and successfully to the classification of colonic polyps, whereas curvelets, contourlets and shearlets have not been used for this purpose so far. We apply different feature extraction techniques to extract the information of the subbands of the wavelet based methods. Most of the 25 approaches have already been published in different texture classification contexts. Thus, the aim is also to assess and compare their classification performance using a common framework. Three of the 25 approaches are novel. These three approaches extract Weibull features from the subbands of curvelets, contourlets and shearlets. Additionally, 5 state-of-the-art non-wavelet-based methods are applied to our databases so that we can compare their results with those of the wavelet based methods. It turned out that extracting Weibull distribution parameters from the subband coefficients generally leads to high classification results, especially for the dual-tree complex wavelet transform, the Gabor wavelet transform and the Shearlet transform. These three wavelet based transforms in combination with Weibull features even outperform the state-of-the-art methods on most of the databases. We will also show that the Weibull distribution is better suited to model the subband coefficient distribution than other commonly used probability distributions like the Gaussian distribution and the generalized Gaussian distribution. This work thus gives a reasonable summary of wavelet based methods for colonic polyp classification, and the large number of endoscopic polyp databases used in our experiments ensures the high significance of the achieved results.
Directional wavelet based features for colonic polyp classification
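A hedged sketch of extracting Weibull features from wavelet subband coefficients, using a standard 2D discrete wavelet transform as a stand-in for the directional transforms compared in the paper; the wavelet choice, decomposition level and image patch are assumptions.

```python
import numpy as np
import pywt
from scipy.stats import weibull_min

def weibull_subband_features(image, wavelet="db4", level=2):
    """Fit a Weibull distribution to the absolute coefficients of each detail subband
    of a 2D discrete wavelet transform and return (shape, scale) pairs as features."""
    coeffs = pywt.wavedec2(image, wavelet=wavelet, level=level)
    features = []
    for detail_level in coeffs[1:]:                 # skip the approximation subband
        for subband in detail_level:                # horizontal, vertical, diagonal details
            data = np.abs(subband).ravel() + 1e-8   # Weibull support is x > 0
            shape, _, scale = weibull_min.fit(data, floc=0)
            features.extend([shape, scale])
    return np.array(features)

# Synthetic texture patch (placeholder for a real endoscopic polyp image patch).
rng = np.random.default_rng(8)
patch = rng.standard_normal((128, 128))
print(weibull_subband_features(patch))
```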
S1361841516000190
Dental radiography plays an important role in clinical diagnosis, treatment and surgery. In recent years, efforts have been made on developing computerized dental X-ray image analysis systems for clinical use. A novel framework for objective evaluation of automatic dental radiography analysis algorithms has been established under the auspices of the IEEE International Symposium on Biomedical Imaging 2015 Bitewing Radiography Caries Detection Challenge and Cephalometric X-ray Image Analysis Challenge. In this article, we present the datasets, methods and results of the challenge and lay down the principles for future uses of this benchmark. The main contributions of the challenge include the creation of the dental anatomy data repository of bitewing radiographs, the creation of the anatomical abnormality classification data repository of cephalometric radiographs, and the definition of objective quantitative evaluation for comparison and ranking of the algorithms. With this benchmark, seven automatic methods for analysing cephalometric X-ray images and two automatic methods for detecting bitewing radiography caries have been compared, and detailed quantitative evaluation results are presented in this paper. Based on the quantitative evaluation results, we believe automatic dental radiography analysis is still a challenging and unsolved problem. The datasets and the evaluation software will be made available to the research community, further encouraging future developments in this field. (http://www-o.ntust.edu.tw/~cweiwang/ISBI2015/)
A benchmark for comparison of dental radiography analysis algorithms
S1361841516000347
Rectal tumour segmentation in dynamic contrast-enhanced MRI (DCE-MRI) is a challenging task, and an automated and consistent method would be highly desirable to improve the modelling and prediction of patient outcomes from tissue contrast enhancement characteristics – particularly in routine clinical practice. A framework is developed to automate DCE-MRI tumour segmentation, by introducing: perfusion-supervoxels to over-segment and classify DCE-MRI volumes using the dynamic contrast enhancement characteristics; and the pieces-of-parts graphical model, which adds global (anatomic) constraints that further refine the supervoxel components that comprise the tumour. The framework was evaluated on 23 DCE-MRI scans of patients with rectal adenocarcinomas, and achieved a voxelwise area under the receiver operating characteristic curve (AUC) of 0.97 compared to expert delineations. Creating a binary tumour segmentation, 21 of the 23 cases were segmented correctly with a median Dice similarity coefficient (DSC) of 0.63, which is close to the inter-rater variability of this challenging task. A second study is also included to demonstrate the method’s generalisability and achieved a DSC of 0.71. The framework achieves promising results for the underexplored area of rectal tumour segmentation in DCE-MRI, and the methods have potential to be applied to other DCE-MRI and supervoxel segmentation problems.
Pieces-of-parts for supervoxel segmentation with global context: Application to DCE-MRI tumour delineation
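The two evaluation metrics quoted above are straightforward to compute; the sketch below evaluates a voxelwise AUC and a Dice similarity coefficient on synthetic probabilities and reference labels.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def dice_coefficient(seg, ref):
    """Dice similarity coefficient between two binary masks."""
    seg, ref = np.asarray(seg, bool), np.asarray(ref, bool)
    denom = seg.sum() + ref.sum()
    return 2.0 * np.logical_and(seg, ref).sum() / denom if denom else 1.0

# Placeholder voxelwise tumour probabilities and expert reference labels (synthetic).
rng = np.random.default_rng(9)
reference = rng.random((40, 40, 12)) < 0.1
probability = 0.7 * reference + 0.3 * rng.random(reference.shape)

print("voxelwise AUC:", roc_auc_score(reference.ravel(), probability.ravel()))
print("DSC at 0.5:", dice_coefficient(probability > 0.5, reference))
```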
S1361841516300068
Biological filaments, such as actin filaments, microtubules, and cilia, are often imaged using different light-microscopy techniques. Reconstructing the filament curve from the acquired images constitutes the filament segmentation problem. Since filaments have lower dimensionality than the image itself, there is an inherent trade-off between tracing the filament with sub-pixel accuracy and avoiding noise artifacts. Here, we present a globally optimal filament segmentation method based on B-spline vector level-sets and a generalized linear model for the pixel intensity statistics. We show that the resulting optimization problem is convex and can hence be solved with global optimality. We introduce a simple and efficient algorithm to compute such optimal filament segmentations, and provide an open-source implementation as an ImageJ/Fiji plugin. We further derive an information-theoretic lower bound on the filament segmentation error, quantifying how well an algorithm could possibly do given the information in the image. We show that our algorithm asymptotically reaches this bound in the spline coefficients. We validate our method in comprehensive benchmarks, compare with other methods, and show applications from fluorescence, phase-contrast, and dark-field microscopy.
Automatic optimal filament segmentation with sub-pixel accuracy using generalized linear models and B-spline level-sets
S1364815213000716
Vegetable farms are one of many nitrogen (N) sources adversely affecting Lake Taihu in eastern China. Given the lack of quantitative “cause and effect” relationships and data relating to these systems, we developed a conceptual Bayesian network to investigate and demonstrate causal relationships and the effects of different mitigation strategies on N exports from vegetable farms in the Lake Taihu region. Structurally, the network comprised one primary transport factor, one primary source factor and three post-mobilisation strategies, and three output factors. In general the network suggests that N exports are more sensitive to transport factors (i.e. runoff volumes) than source factors (i.e. fertiliser application rates), although the cumulative effects of excessive fertiliser were not considered. Post-mobilisation mitigations such as wetlands and ecoditches appear to be particularly effective in decreasing N exports; however, their implementation on a regional scale may be limited by land availability. While optimising N inputs would be prudent, the network suggests that better irrigation practice, including improved irrigation scheduling, using less imported water and optimising rainfall utilisation would be more effective in achieving environmental goals than simply limiting N supply.
Using a conceptual Bayesian network to investigate environmental management of vegetable production in the Lake Taihu region of China
S1364815213000911
Data-driven modelling is used to develop two alternative types of predictive environmental model: a simulator, a model of a real-world process developed from either a conceptual understanding of physical relations and/or using measured records, and an emulator, an imitator of some other model developed on predicted outputs calculated by that source model. A simple four-way typology called Emulation Simulation Typology (EST) is proposed that distinguishes between (i) model type and (ii) different uses of model development period and model test period datasets. To address the question of to what extent simulator and emulator solutions might be considered interchangeable i.e. provide similar levels of output accuracy when tested on data different from that used in their development, a pair of counterpart pan evaporation models was created using symbolic regression. Each model type delivered levels of predictive skill similar to those of published solutions. Input–output sensitivity analysis of the two different model types likewise confirmed two very similar underlying response functions. This study demonstrates that the type and quality of data on which a model is tested has a greater influence on model accuracy assessment than the type and quality of data on which a model is developed, provided that the development record is sufficiently representative of the conceptual underpinnings of the system being examined. Thus, previously reported substantial disparities occurring in goodness-of-fit statistics for pan evaporation models are most likely explained by the use of either measured or calculated data to test particular models, where lower scores do not necessarily represent major deficiencies in the solution itself.
A typology of different development and testing options for symbolic regression modelling of measured and calculated datasets
S1364815213001072
Spatial conservation prioritization concerns the effective allocation of conservation action. Its stages include development of an ecologically based model of conservation value, data pre-processing, spatial prioritization analysis, and interpretation of results for conservation action. Here we investigate the details of each stage for analyses done using the Zonation prioritization framework. While there is much literature about analytical methods implemented in Zonation, there is only scattered information available about what happens before and after the computational analysis. Here we fill this information gap by summarizing the pre-analysis and post-analysis stages of the Zonation framework. Concerning the entire process, we summarize the full workflow and list examples of operational best-case, worst-case, and typical scenarios for each analysis stage. We discuss resources needed in different analysis stages. We also discuss benefits, disadvantages, and risks involved in the application of spatial prioritization from the perspective of different stakeholders. Concerning pre-analysis stages, we explain the development of the ecological model and discuss the setting of priority weights and connectivity responses. We also explain practical aspects of data pre-processing and the post-processing interpretation of results for different conservation objectives. This work facilitates well-informed design and application of Zonation analyses for the purpose of spatial conservation planning. It should be useful both for scientists working on conservation-related research and for practitioners looking for useful tools for conservation resource allocation.
Methods and workflow for spatial conservation prioritization using Zonation
S1364815213001163
In the field of water distribution system (WDS) analysis, case study research is needed for testing or benchmarking optimisation strategies and newly developed software. However, data availability for the investigation of real cases is limited due to time and cost needed for data collection and model setup. We present a new algorithm that addresses this problem by generating WDSs from GIS using population density, housing density and elevation as input data. We show that the resulting WDSs are comparable to actual systems in terms of network properties and hydraulic performance. For example, comparing the pressure heads for an actual and a generated WDS results in pressure head differences of ±4 m or less for 75% of the supply area. Although elements like valves and pumps are not included, the new methodology can provide water distribution systems of varying levels of complexity (e.g., network layouts, connectivity, etc.) to allow testing design/optimisation algorithms on a large number of networks. The new approach can be used to estimate the construction costs of planned WDSs aimed at addressing population growth or at comparisons of different expansion strategies in growth corridors. Software availability: WDS Designer (Water Distribution System Designer); developer: Robert Sitzenfrei, Unit of Environmental Engineering, University of Innsbruck, Technikerstr. 13, 6020 Innsbruck, Austria; tel: ++43 (512) 507-6695; fax: ++43 (512) 507-2911; year: 2010; requirements: MCRInstaller.exe, version 7.13, or Matlab (R2010a) Version 7.10.0.499; program language: Matlab; size: ∼16 MB; availability: contact the authors to obtain the software and user manual; cost: free.
Automatic generation of water distribution systems based on GIS data
S1364815213001370
Automated easy-to-use tools capable of generating spatial-temporal weather scenarios for the present day or downscaled future climate projections are highly desirable. Such tools would greatly support the analysis of hazard, risk and reliability of systems such as urban infrastructure, river catchments and water resources. However, the automatic parameterization of such models to the properties of a selected scenario requires the characterization of both point and spatial statistics. Whilst point statistics, such as the mean daily rainfall, may be described by a map, spatial properties such as cross-correlation vary according to a pair of sample points, and should ideally be available for every possible pair of locations. For such properties simple automatic representations are needed for any pair of locations. To address this need simple empirical models are developed of the lag-zero cross-correlation-distance (XCD) properties of United Kingdom daily rainfall. Following error and consistency checking, daily rainfall timeseries for the period 1961–1990 from 143 raingauges are used to calculate observed XCD properties. A three parameter double exponential expression is then fitted to appropriate data partitions assuming isotropic and piecewise-homogeneous XCD properties. Three models are developed: 1) a national aseasonal model; 2) a national model partitioned by calendar month; and 3) a regional model partitioned by nine UK climatic regions and by calendar month. These models provide estimates of lag-zero cross-correlation properties of any two locations in the UK. These cross-correlation models can facilitate the development of automated spatial rainfall modelling tools. This is demonstrated through implementation of the regional model into a spatial modelling framework and by application to two simulation domains (both ∼10,000 km2), one in north-west England and one in south-east England. The required point statistics are generally well simulated and a good match is found between simulated and observed XCD properties. The models developed here are straightforward to implement, incorporate correction of data errors, are pre-calculated for computational efficiency, provide smoothing of sample variability arising from sporadic coverage of observations and are repeatable. They may be used to parameterise spatial rainfall models in the UK and the methodology is likely to be easily adaptable to other regions of the world. 
Software availability: Crosscorrelation_UK_Daily; developer: School of Civil Engineering and Geosciences, Newcastle University, NE1 7RU, UK; contact: Aidan Burton, School of Civil Engineering and Geosciences, Newcastle University, NE1 7RU, UK, [email protected]; year: 2013; program language: R, C, Excel spreadsheet or formulae; availability: download from authors' website. Abbreviation expansions: lag-one daily Auto-Correlation; Central and East England; Conditional Exceedance Probability; December–January–February season (winter); Environment Agency Rainfall and Weather Impacts Generator; East Scotland; June–July–August season (summer); March–April–May season (spring); North East England; Northern Ireland; North-West and North Scotland; National-Seasonal-Parameters; Neyman Scott Rectangular Pulses; North-West England and north Wales; Proportion of Dry Days (days with <1 mm of rainfall); Proportion of Dry Hours (hours with <0.1 mm of rainfall); Root of the Mean Square Error; South East England; South-West and South Scotland; Spatial Temporal Neyman Scott Rectangular Pulses; September–October–November season (autumn); South West England and south Wales; United Kingdom Climate Projections 2009; United Kingdom Meteorological Office Weather Generator; cross-Correlation-Distance.
Models of daily rainfall cross-correlation for the United Kingdom
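The abstract does not give the exact three-parameter double-exponential form, so the sketch below assumes a plausible parameterisation, r(d) = a·exp(-d/b) + (1-a)·exp(-d/c), purely to illustrate fitting cross-correlation against inter-gauge distance; the data are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def double_exponential_xcd(distance_km, a, b, c):
    """Assumed three-parameter double-exponential cross-correlation-distance form:
    r(d) = a*exp(-d/b) + (1 - a)*exp(-d/c). The paper's exact parameterisation may differ."""
    return a * np.exp(-distance_km / b) + (1.0 - a) * np.exp(-distance_km / c)

# Synthetic inter-gauge distances and lag-zero cross-correlations (placeholder data).
rng = np.random.default_rng(10)
d = rng.uniform(1, 300, 200)
r = double_exponential_xcd(d, 0.6, 20.0, 150.0) + 0.03 * rng.standard_normal(d.size)

params, _ = curve_fit(double_exponential_xcd, d, r, p0=[0.5, 30.0, 100.0],
                      bounds=([0, 1e-3, 1e-3], [1, 1e3, 1e3]))
print(params)   # recovered (a, b, c)
```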
S1364815213001977
The appropriateness of spatial prediction methods such as Kriging, or aggregation methods such as summing observation values over an area, is currently judged by domain experts using their knowledge and expertise. In order to provide support from information systems for automatically discouraging or proposing prediction or aggregation methods for a dataset, expert knowledge needs to be formalized. This involves, in particular, knowledge about phenomena represented by data and models, as well as about underlying procedures. In this paper, we introduce a novel notion of meaningfulness of prediction and aggregation. To this end, we present a formal theory about spatio-temporal variable types, observation procedures, as well as interpolation and aggregation procedures relevant in Spatial Statistics. Meaningfulness is defined as correspondence between functions and data sets, the former representing data generation procedures such as observation and prediction. Comparison is based on semantic reference systems, which are types of potential outputs of a procedure. The theory is implemented in higher-order logic (HOL), and theorems about meaningfulness are proved in the semi-automated prover Isabelle. The type system of our theory is available as a Web Ontology Language (OWL) pattern for use in the Semantic Web. In addition, we show how to implement a data-model recommender system in the statistics tool environment R. We consider our theory to be groundwork for automating the semantic interoperability of data and models.
Meaningful spatial prediction and aggregation
S1364815213002338
Sensitivity analysis (SA) is a commonly used approach for identifying important parameters that dominate model behaviors. We use a newly developed software package, a Problem Solving environment for Uncertainty Analysis and Design Exploration (PSUADE), to evaluate the effectiveness and efficiency of ten widely used SA methods, including seven qualitative and three quantitative ones. All SA methods are tested using a variety of sampling techniques to screen out the most sensitive (i.e., important) parameters from the insensitive ones. The Sacramento Soil Moisture Accounting (SAC-SMA) model, which has thirteen tunable parameters, is used for illustration. The South Branch Potomac River basin near Springfield, West Virginia in the U.S. is chosen as the study area. The key findings from this study are: (1) For qualitative SA methods, Correlation Analysis (CA), Regression Analysis (RA), and Gaussian Process (GP) screening methods are shown to be not effective in this example. Morris One-At-a-Time (MOAT) screening is the most efficient, needing only 280 samples to identify the most important parameters, but it is the least robust method. Multivariate Adaptive Regression Splines (MARS), Delta Test (DT) and Sum-Of-Trees (SOT) screening methods need about 400–600 samples for the same purpose. Monte Carlo (MC), Orthogonal Array (OA) and Orthogonal Array based Latin Hypercube (OALH) are appropriate sampling techniques for them; (2) For quantitative SA methods, at least 2777 samples are needed for the Fourier Amplitude Sensitivity Test (FAST) to identify parameter main effects. The McKay method needs about 360 samples to evaluate the main effect and more than 1000 samples to assess the two-way interaction effect. OALH and LPτ (LPTAU) sampling techniques are more appropriate for the McKay method. For the Sobol' method, a minimum of 1050 samples is needed to compute the first-order and total sensitivity indices correctly. These comparisons show that qualitative SA methods are more efficient but less accurate and robust than quantitative ones. Software availability: PSUADE; developer: Charles Tong; program language: C++; availability: https://computation.llnl.gov/casc/uncertainty_quantification/; cost: free for non-commercial academic research.
A comprehensive evaluation of various sensitivity analysis methods: A case study with a hydrological model
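As a rough illustration of the Morris-style one-at-a-time screening compared in the study, the sketch below averages absolute elementary effects from one-at-a-time perturbations of a toy analytic model; it is a simplified variant (a single base point per repetition rather than full trajectories) and does not use SAC-SMA or PSUADE.

```python
import numpy as np

def morris_screening(model, bounds, n_repetitions=40, delta=0.1, seed=0):
    """Rough one-at-a-time screening: mean absolute elementary effect per parameter.
    bounds: (k, 2) array of parameter ranges; model maps a parameter vector to a scalar."""
    rng = np.random.default_rng(seed)
    k = len(bounds)
    lo, hi = bounds[:, 0], bounds[:, 1]
    effects = np.zeros(k)
    for _ in range(n_repetitions):
        x = lo + rng.random(k) * (hi - lo) * (1 - delta)   # leave room for the step
        y0 = model(x)
        for i in rng.permutation(k):                       # perturb one parameter at a time
            x_step = x.copy()
            x_step[i] += delta * (hi[i] - lo[i])
            effects[i] += abs(model(x_step) - y0) / delta
    return effects / n_repetitions                         # importance score per parameter

# Toy stand-in model (not SAC-SMA): strongly sensitive to p0, weakly to p1, not to p2.
toy = lambda p: 5.0 * p[0] ** 2 + 0.5 * p[1] + 0.0 * p[2]
bounds = np.array([[0, 1], [0, 1], [0, 1]], float)
print(morris_screening(toy, bounds))
```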
S1364815213002648
Given strong year-to-year variability, increasing competition for natural resources, and climate change impacts on agriculture, monitoring global crop and natural vegetation conditions is highly relevant, particularly in food insecure areas. Data from remote sensing image series at high temporal and low spatial resolution can help to assist in this monitoring as they provide key information in near-real time over large areas. The SPIRITS software, presented in this paper, is a stand-alone toolbox developed for environmental monitoring, particularly to produce clear and evidence-based information for crop production analysts and decision makers. It includes a large number of tools with the main aim of extracting vegetation indicators from image time series, estimating the potential impact of anomalies on crop production and sharing this information with different audiences. SPIRITS offers an integrated and flexible analysis environment with a user-friendly graphical interface, which allows sequential tasking and a high level of automation of processing chains. It is freely distributed for non-commercial use and extensively documented. Software availability: SPIRITS (Software for the Processing and Interpretation of Remotely sensed Image Time Series); developers: Herman Eerens, Dominique Haesen; contact address: Boeretang 200, b-2400 Mol, Belgium ([email protected], [email protected]); year: 2013; hardware and software required: PC, Microsoft Windows (XP or later), Java (version 1.6 or higher); program language: C, Java; size: 240 MB; availability: http://spirits.jrc.ec.europa.eu/, https://rs.vito.be/africa/en/software/Pages/Spirits.aspx; cost: free for non-commercial use; documentation: manual, tutorial with test data, training sessions.
Image time series processing for agriculture monitoring
S1364815214000218
The Pastoral Properties Futures Simulator (PPFS) is a dynamic systems model, developed within a participatory action research partnership with the pastoral industry of Australia's Northern Territory. The model was purpose-built to support the industry's strategic planning capacity in the face of environmental, market and institutional uncertainty. The mediated modelling process sought to maximise social learning of industry stakeholders. Simulations were conducted using scenarios representing combinations of climatic, market, institutional and technological assumptions. Stochastic parameters included rainfall and product prices. Economic and environmental performance of model farms, including greenhouse gas emissions, were estimated. A critical evaluation of the tool finds the PPFS fit for purpose. However, limitations include lack of output validation, small number of scenarios and simplistic treatment of environmental impact dimensions. With further development, the PPFS can provide a platform (a) to assist with industry planning across the whole of Northern Australia and beyond, and (b) for policy analysis and development in the context of the Australian pastoral industry.
Scenario modelling to support industry strategic planning and decision making
S136481521400022X
This systematic review considers how water quality and aquatic ecology models represent the phosphorus cycle. Although the focus is on phosphorus, many of the observations and discussion points here relate to aquatic ecosystem models in general. The review considers how models compare across domains of application, the degree to which current models are fit for purpose, how to choose between multiple alternative formulations, and how models might be improved. Lake and marine models have been gradually increasing in complexity, with increasing emphasis on inorganic processes and ecosystems. River models have remained simpler, but have been more rigorously assessed. Processes important in less eutrophic systems have often been neglected: these include the biogeochemistry of organic phosphorus, transformations associated with fluxes through soils and sediments, transfer rate-limited phosphorus uptake, and responses of plants to pulsed nutrient inputs. Arguments for and against increasing model complexity, physical and physiological realism are reviewed.
State of the art in modelling of phosphorus in aquatic systems: Review, criticisms and commentary
S1364815214000280
Climate impacts the growth of trees and also affects disturbance regimes such as wildfire frequency. The European Alps have warmed considerably over the past half-century, but incomplete records make it difficult to definitively link alpine wildfire to climate change. Complicating this is the influence of forest composition and fuel loading on fire ignition risk, which is not considered by purely meteorological risk indices. Biogeochemical forest growth models track several variables that may be used as proxies for fire ignition risk. This study assesses the usefulness of the ecophysiological model BIOME-BGC's ‘soil water’ and ‘labile litter carbon’ variables in predicting fire ignition. A brief application case examines historic fire occurrence trends over pre-defined regions of Austria from 1960 to 2008. Results show that summer fire ignition risk is largely a function of low soil moisture, while winter fire ignitions are linked to the mass of volatile litter and atmospheric dryness.
Deriving forest fire ignition risk with biogeochemical process modelling
S1364815214000309
The GroundWater Spatiotemporal Data Analysis Tool (GWSDAT) is a user friendly, open source, decision support tool for the analysis and reporting of groundwater monitoring data. Uniquely, GWSDAT applies a spatiotemporal model smoother for a more coherent and smooth interpretation of the interaction in spatial and time-series components of groundwater solute concentrations. Data entry is via a standardised Microsoft Excel input template whilst the underlying statistical modelling and graphical output are generated using the open source statistical program R. This paper describes in detail the various plotting options available and how the graphical user interface can be used for rapid, rigorous and interactive trend analysis with facilitated report generation. GWSDAT has been used extensively in the assessment of soil and groundwater conditions at Shell's downstream assets and the discussion section describes the benefits of its applied use. Finally, some consideration is given to possible future developments. GWSDAT (GroundWater Spatiotemporal Data Analysis Tool) Wayne R. Jones Shell Global Solutions (UK) ([email protected]) 2013 Standard PC Microsoft Windows (XP or later) Microsoft Office (Excel, Word and PowerPoint) and R (www.r-project.org) 13 MB www.claire.co.uk/GWSDAT Free under a GNU General Public License (www.gnu.org) agreement. User manual, example data sets, FAQ document, presentations and posters.
A software tool for the spatiotemporal analysis and reporting of groundwater monitoring data
S1364815214000772
As the volume of collected data continues to increase in the environmental sciences, so too does the need for effective means for accessing those data. We have developed an Open Modeling Interface (OpenMI) data component that retrieves input data for model components from environmental information systems and delivers output data to those systems. The adoption of standards for both model component input–output interfaces and web services make it possible for the component to be reconfigured for use with different linked models and various online systems. The data component employs three techniques tailored to the unique design of the OpenMI that enable efficient operation: caching, prefetching, and buffering, making it capable of scaling to large numbers of simultaneous simulations executing on a computational grid. We present the design of the component, an evaluation of its performance, and a case study demonstrating how it can be incorporated into modeling studies. DataComponent GRoWE/Kansas State University 234 Nichols Hall, Kansas State University, Manhattan, KS, 66502, 785-532-6350 [email protected] 2013 Architecture independent Windows/Linux C# 2 MB www.github.com/CNH-Hyper-Extractive/data-component Free
A distributed data component for the Open Modeling Interface
S136481521400108X
Simulation modelling in ecology is a field that is becoming increasingly compartmentalized. Here we propose a Database Approach To Modelling (DATM) to create unity in dynamical ecosystem modelling with differential equations. In this approach the storage of ecological knowledge is independent of the language and platform in which the model will be run. To create an instance of the model, the information in the database is translated and augmented with the language and platform specifics. This process is automated so that a new instance can be created each time the database is updated. We describe the approach using the simple Lotka–Volterra model and the complex ecosystem model for shallow lakes PCLake, which we automatically implement in the frameworks OSIRIS, GRIND for MATLAB, ACSL, R, DUFLOW and DELWAQ. A clear advantage of working in a database is the overview it provides. The simplicity of the approach only adds to its elegance.
Serving many at once: How a database approach can create unity in dynamical ecosystem modelling
S1364815214001339
In this paper a new receptor modelling method is developed to identify and characterise emission sources. The method is an extension of the commonly used conditional probability function (CPF). The CPF approach is extended to the bivariate case to produce a conditional bivariate probability function (CBPF) plot using wind speed as a third variable plotted on the radial axis. The bivariate case provides more information on the type of sources being identified by providing important dispersion characteristic information. By considering intervals of concentration, considerably more source information can be revealed than is available from the basic CPF or CBPF. We demonstrate the application of the approach by considering an area of high source complexity, where many new sources can be identified and characterised compared with currently used techniques. Dispersion model simulations are undertaken to verify the approach. The technique has been made available through the openair R package. The methods described in this work are available as part of software called openair. The openair software is freely available as an R package. Details on installing R and optional packages including openair can be found at R Core Team (2014) and http://www.r-project.org. R will run on Microsoft Windows, Linux and Apple Mac computers. No special hardware is required to run openair other than a standard desktop computer. Some large data sets or complex analyses may require a 64-bit platform. Ref: R Core Team (2014). R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. URL http://www.R-project.org/.
Conditional bivariate probability function for source identification
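The following is a minimal sketch of the core CBPF calculation described in the preceding abstract: the probability that concentration exceeds a chosen threshold, conditional on wind-direction and wind-speed bins. The bin widths, array names and synthetic data are assumptions for illustration; the openair R package provides the full method, including smoothing and polar plotting.

```python
import numpy as np

def cbpf(wd, ws, conc, threshold, wd_bin=10.0, ws_bin=1.0):
    """Probability that conc >= threshold in each (wind direction, wind speed) cell."""
    wd_edges = np.arange(0.0, 360.0 + wd_bin, wd_bin)
    ws_edges = np.arange(0.0, ws.max() + ws_bin, ws_bin)
    exceed = conc >= threshold
    n_all, _, _ = np.histogram2d(wd, ws, bins=[wd_edges, ws_edges])
    n_exc, _, _ = np.histogram2d(wd[exceed], ws[exceed], bins=[wd_edges, ws_edges])
    with np.errstate(invalid="ignore", divide="ignore"):
        return n_exc / n_all, wd_edges, ws_edges    # NaN where a cell has no data

# Synthetic example: a source to the north-east seen only at higher wind speeds.
rng = np.random.default_rng(1)
wd = rng.uniform(0.0, 360.0, 5000)
ws = rng.gamma(2.0, 2.0, 5000)
conc = 5.0 + 20.0 * ((np.abs(wd - 45.0) < 20.0) & (ws > 4.0)) + rng.normal(0.0, 1.0, 5000)
p, wd_edges, ws_edges = cbpf(wd, ws, conc, threshold=np.percentile(conc, 90))
```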
S136481521400139X
This paper presents a critical review of the available techniques for analysing, completing and generating influent data for WWTP modelling. The solutions found in the literature are classified according to three different situations from engineering practice: 1) completing an incomplete dataset about the quantity and quality of the influent wastewater; 2) translating the common quality measurements (COD, TSS, TKN, etc.) into the ASM family components (fractionation problem); 3) characterising the uncertainty in the quality and quantity of the influent wastewater. In the first case (Situation 1), generators based on Fourier models are very useful to describe the daily and weekly wastewater patterns. Another especially promising solution is related to the construction of phenomenological models that provide wastewater influent profiles in accordance with data about the catchment properties (number of inhabitant equivalents, sewer network, type of industries, rainfall and temperature profiles, etc.). This option has the advantage that, using hypothetical catchment characteristics (other climate, sewer network, etc.), the modeller is able to extrapolate and generate influent data for WWTPs in other scenarios. With a much lower modelling effort, generators based on the use of databases can provide realistic influent profiles based on the patterns observed. With regard to the influent characterisation (Situation 2), the WWTP modelling protocols summarise well-established methodologies to translate the common measurements (COD, TSS, TKN, etc.) into ASM family components. Finally, some statistical models based on autoregressive functions are suitable to represent the uncertainty involved in influent data profiles (Situation 3). However, more fundamental research should be carried out to model the uncertainty involved in the underlying mechanisms related to wastewater generation (rainfall profiles, household and industrial pollutant discharges, assumed daily and weekly patterns, etc.).
Analysing, completing, and generating influent data for WWTP modelling: A critical review
S1364815214001406
Water management in the Netherlands applies to a dense network of surface waters for discharge, storage and distribution, serving highly valuable land-use. National and regional water authorities develop long-term plans for sustainable water use and safety under changing climate conditions. Decisions about investments in adaptive measures are based on analyses supported by the Netherlands Hydrological Instrument (NHI), which draws on the best available data and state-of-the-art technology and was developed through collaboration between national research institutes. The NHI consists of various physical models at appropriate temporal and spatial scales for all parts of the water system. Intelligent connectors provide transfer between different scales and fast computation, by coupling model codes at a deep level in software. A workflow and version management system guarantees consistency in the data, software, computations and results. The NHI is freely available to hydrologists via an open web interface that enables exchange of all data and tools. All model data are freely available (http://www.nhi.nu/) and the NHI-specific software is free and open. The model codes are partly open-source and partly in transition to the open domain.
An operational, multi-scale, multi-model system for consensus-based, integrated water management and policy analysis: The Netherlands Hydrological Instrument
S1364815214001595
Exploring adaptation pathways is an emerging approach for supporting decision making under uncertain changing conditions. An adaptation pathway is a sequence of policy actions to reach specified objectives. To develop adaptation pathways, interactions between environment and policy response need to be analysed over time for an ensemble of plausible futures. A fast, integrated model can facilitate this. Here, we describe the development and evaluation of such a model, an Integrated Assessment Metamodel (IAMM), to explore adaptation pathways in the Rhine delta for a decision problem currently faced by the Dutch Government. The theory-motivated metamodel is a simplified physically based model. Closed questions reflecting the required accuracy were used to evaluate the model's fitness. The results show that such a model fits the purpose of screening and ranking of policy options and pathways to support the strategic decision making. A complex model can subsequently be used to obtain more detailed information.
Fit for purpose? Building and evaluating a fast, integrated model for exploring water policy pathways
S1364815214001698
Surrogate modeling uses cheap “surrogates” to represent the response surface of simulation models. It involves several steps, including initial sampling, regression and adaptive sampling. This study evaluates an adaptive surrogate modeling based optimization (ASMO) method on two benchmark problems: the Hartman function and calibration of the SAC-SMA hydrologic model. Our results show that: 1) Gaussian Processes are the best surrogate model construction method. A minimum Interpolation Surface method is the best adaptive sampling method. Low discrepancy Quasi Monte Carlo methods are the most suitable initial sampling designs. Some 15–20 times the dimension of the problem may be the proper initial sample size; 2) The ASMO method is much more efficient than the widely used Shuffled Complex Evolution global optimization method. However, ASMO can provide only approximate optimal solutions, whose precision is limited by surrogate modeling methods and problem-specific features; and 3) The identifiability of model parameters is correlated with parameter sensitivity.
An evaluation of adaptive surrogate modeling based optimization with two benchmark problems
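As a companion to the abstract above, here is a rough sketch of an adaptive surrogate-modelling optimization loop: fit a Gaussian-process surrogate to an initial design, then repeatedly evaluate the expensive function where the surrogate looks most promising. The lower-confidence-bound acquisition rule, the random initial design and candidate pool, and the toy objective are illustrative assumptions, not the published ASMO algorithm (which favours quasi-Monte Carlo initial designs and a minimum interpolation surface for adaptive sampling).

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def asmo_like_minimize(f, dim, n_init=20, n_iter=30, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(0.0, 1.0, size=(n_init, dim))        # initial design (random here)
    y = np.array([f(x) for x in X])
    for _ in range(n_iter):
        gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)   # surrogate
        cand = rng.uniform(0.0, 1.0, size=(2000, dim))               # candidate pool
        mu, sd = gp.predict(cand, return_std=True)
        x_next = cand[np.argmin(mu - sd)]                # lower-confidence-bound pick
        X = np.vstack([X, x_next])                       # adaptive sampling step
        y = np.append(y, f(x_next))
    best = int(np.argmin(y))
    return X[best], y[best]

# Toy objective standing in for an expensive model-calibration error function.
sphere = lambda x: float(np.sum((x - 0.3) ** 2))
x_best, y_best = asmo_like_minimize(sphere, dim=6)
print("best point:", np.round(x_best, 2), "objective:", round(y_best, 4))
```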
S1364815214002011
Systematic sampling is more precise than simple random sampling when spatial autocorrelation is present and the sampling effort is equal, but there is no unbiased method to estimate the variance from a systematic sample. The objective of this paper is to assess selected variance estimation methods and evaluate the influence of spatial structure on the results. These methods are treated as models and a complete enumeration of Norway was used as the modeling environment. The paper demonstrates that the advantage of systematic sampling is closely related to autocorrelation in the material, but also that the improvement is influenced by periodicity and drift in the variables. Variance estimation by stratification with the smallest possible strata gave the best overall results but may underestimate the variance when spatial autocorrelation is absent. Treating the sample as a simple random sample is a safe and conservative alternative when spatial autocorrelation is absent or unknown.
Comparison of variance estimation methods for use with two-dimensional systematic sampling of land use/land cover data
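To make the trade-off above concrete, the sketch below contrasts two simple variance estimators applied to a one-dimensional systematic sample: the naive simple-random-sampling formula and a successive-difference estimator that uses the ordering of the sample. The synthetic autocorrelated population, sampling interval and estimator choice are assumptions for illustration; the paper evaluates a wider set of estimators (including stratification with the smallest possible strata) on Norwegian land use/land cover data.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 100_000
population = np.cumsum(rng.normal(0.0, 1.0, N))          # spatially autocorrelated variable
population = (population - population.mean()) / population.std()

k = 200                                                  # sampling interval
start = rng.integers(k)
sample = population[start::k]                            # one systematic sample
n = sample.size

var_srs = sample.var(ddof=1) / n                         # treat as a simple random sample
diffs = np.diff(sample)                                  # successive-difference estimator
var_sd = (1 - n / N) * np.sum(diffs ** 2) / (2 * n * (n - 1))

print(f"SRS-formula estimate of Var(mean): {var_srs:.2e}")
print(f"Successive-difference estimate:    {var_sd:.2e}")
```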
S1364815214002217
Rubber agroforests in the mostly deforested lowlands of Sumatra, Indonesia are threatened by conversion into monoculture rubber or oil palm plantations. We applied an agent-based model to explore the potential effectiveness of a payment for ecosystem services (PES) design through a biodiversity rich rubber eco-certification scheme. We integrated conditionality, where compliance with biodiversity performance indicators is prerequisite for awarding incentives. We compared a PES policy scenario to ‘business-as-usual’ and ‘subsidized land use change’ scenarios to explore potential trade-offs between ecosystem services delivery and rural income. Results indicated that a rubber agroforest eco-certification scheme could reduce carbon emissions and species loss better than alternative scenarios. However, the suggested premiums were too low to compete with income from other land uses. Nevertheless, integrating our understanding of household agent behavior through a spatially explicit and agent-specific assessment of the trade-offs can help refine the design of conservation initiatives such as PES.
Biodiversity in rubber agroforests, carbon emissions, and rural livelihoods: An agent-based model of land-use dynamics in lowland Sumatra
S1364815214002278
Agricultural droughts can create serious threats to food security. Tools for dynamic prediction of drought impacts on yields over large geographical regions can provide valuable information for drought management. Based on the DeNitrification-DeComposition (DNDC) model, the current research proposes a Drought Risk Analysis System (DRAS) that allows for the scenario-based analysis of drought-induced yield losses. We assess impacts on corn yields using two case studies, the 2012 U.S.A. drought and the 2000 and 2009 droughts in Liaoning Province, China. The results show that the system is able to perform daily simulations of corn growth and to dynamically evaluate the large-scale grain production in both regions. It is also capable of mapping the up-to-date yield losses on a daily basis, the additional losses under different drought development scenarios, and the yield-based drought return periods at multiple scales of geographic regions. In addition, detailed information about the water-stress process, biomass development, and the uncertainty of drought impacts on crop growth at a specific site can be displayed in the system. Remote sensing data were used to map the areas of drought-affected crops for comparison with the modeling results. Beyond the conventional drought information from meteorological and hydrological data, this system can provide comprehensive and predictive yield information for various end-users, including farmers, decision makers, insurance agencies, and food consumers.
Dynamic assessment of the impact of drought on agricultural yield and scale-dependent return periods over large geographic regions
S1364815214002588
The Plant Modelling Framework (PMF) is a software framework for creating models that represent the plant components of farm system models in the Agricultural Production Systems Simulator (APSIM). It is the next step in the evolution of generic crop templates for APSIM, building on software and science lessons from past versions and capitalising on new software approaches. The PMF contains a top-level Plant class that provides an interface with the APSIM model environment and controls the other classes in the plant model. Other classes include mid-level Organ, Phenology, Structure and Arbitrator classes that represent specific elements or processes of the crop, and sub-classes that the mid-level classes use to represent repeated data structures. It also contains low-level Function classes which represent generic mathematical, logical, procedural or reference code and provide values to the processes carried out by mid-level classes. A plant configuration file specifies which mid-level and Function classes are to be included and how they are to be arranged and parameterised to represent a particular crop model. The PMF has an integrated design environment to allow plant models to be created visually. The aims of the PMF are to maximise code reuse and allow flexibility in the structure of models. Four examples are included to demonstrate the flexibility of application of the PMF: 1. Slurp, a simple model of the water use of a static crop; 2. Oat, an annual grain crop model with detailed growth, development and resource use processes; 3. Lucerne, a perennial forage model with detailed growth, development and resource use processes; 4. Wheat, another detailed annual crop model constructed using an alternative set of organ and process classes. These examples show that the PMF can be used to develop models of different complexities and that it allows flexibility in how crop physiology concepts are implemented in model set-up. The Plant Modelling Framework source code is freely available for non-commercial use and can be viewed at http://apsrunet.apsim.info/websvn/listing.php?repname=apsim&path=/trunk/ by clicking on the “Model” and then “Plant2” folders. Note that the PMF (called Plant2 in internal documentation) does not stand alone and users will need to download the Agricultural Production Systems Simulator (http://www.apsim.info/Products/Downloads.aspx) to build and use PMF models.
Plant Modelling Framework: Software for building and running crop models on the APSIM platform
S1364815214002618
Large-scale ecosystem management involves consideration of many factors for informed decision making. The EverVIEW Data Viewer is a cross-platform desktop decision support tool to help decision makers compare simulation model outputs from competing plans for restoring Florida's Greater Everglades. The integration of NetCDF metadata conventions into EverVIEW allows end-users from multiple institutions within and beyond the Everglades restoration community to share information and tools. Our development process incorporates continuous interaction with targeted end-users for increased likelihood of adoption. One of EverVIEW's signature features is side-by-side map panels, which can be used to simultaneously compare species or habitat impacts from alternative restoration plans. Other features include examination of potential restoration plan impacts across multiple geographic or tabular displays, and animation through time. As a result of an iterative, standards-driven approach, EverVIEW is relevant to large-scale planning beyond Florida, and is used in multiple biological planning efforts in the United States. The EverVIEW Data Viewer is freely-available for Windows, Mac OS X, and Linux operating systems from the Joint Ecosystem Modeling (JEM) website: http://jem.gov. Also available on the JEM website are the CERP NetCDF metadata conventions and supporting Java NetCDF libraries. All desktop applications require the Java Runtime Environment. Users wanting to stay informed of software updates and releases can sign up to the email distribution list on the website.
A visualization tool to support decision making in environmental and biological planning
S1364815214002813
There is an increasing need for environmental management advice that is wide-scoped, covering various interlinked policies, and realistic about the uncertainties related to the possible management actions. To achieve this, efficient decision support integrates the results of pre-existing models. Many environmental models are deterministic, but the uncertainty of their outcomes needs to be estimated when they are utilized for decision support. We review various methods that have been or could be applied to evaluate the uncertainty related to deterministic models' outputs. We cover expert judgement, model emulation, sensitivity analysis, temporal and spatial variability in the model outputs, the use of multiple models, and statistical approaches, and evaluate when these methods are appropriate and what must be taken into account when utilizing them. The best way to evaluate the uncertainty depends on the definitions of the source models and the amount and quality of information available to the modeller.
An overview of methods to evaluate uncertainty of deterministic models in decision support
S1364815214002965
Recent advances in computing science and web technology provide the environmental community with continuously expanding resources for data collection and analysis that pose unprecedented challenges to the design of analysis methods, workflows, and interaction with data sets. In the light of the recent UK Research Council funded Environmental Virtual Observatory pilot project, this paper gives an overview of currently available implementations related to web-based technologies for processing large and heterogeneous datasets and discusses their relevance within the context of environmental data processing, simulation and prediction. We found that the processing of the simple datasets used in the pilot proved to be relatively straightforward using a combination of R, RPy2, PyWPS and PostgreSQL. However, the use of NoSQL databases and more versatile frameworks such as implementations based on OGC standards may provide a wider and more flexible set of features that particularly facilitate working with larger volumes and more heterogeneous data sources. Environmental Virtual Observatory pilot (EVOp) EVOp team [email protected] Any web-enabled device with a modern web browser Internet browser (Chrome, Firefox and Opera) Java, JavaScript, R, Python and SQL. Users can access the official website http://evo-uk.org. Access to EVOp data and applications is restricted to EVOp project partners; however, user accounts can be made available to researchers upon request.
Web technologies for environmental Big Data
S1364815214002989
The present paper discusses a coupled gridded crop modeling and hydrologic modeling system that can examine the benefits and costs of irrigation and the coincident impact of irrigation water withdrawals on surface water hydrology. The system is applied to the Southeastern U.S. The system tools to be discussed include a gridded version (GriDSSAT) of the crop modeling system DSSAT. The irrigation demand from GriDSSAT is coupled to a regional hydrologic model (WaSSI). GriDSSAT and WaSSI are coupled through the USDA NASS CropScape data to provide crop acreages in each watershed. The crop model provides the dynamic irrigation demand, which is a function of the weather. The hydrologic model responds to the weather and includes all other anthropogenic competing uses of water. Examples of the system include an analysis of the hydrologic impact of future expansion of irrigation and the real-time impact of short-term drought. The GriDSSAT model presented in this paper was developed with the Decision Support System for Agrotechnology Transfer (DSSAT), a software application program that comprises crop simulation models for over 28 crops and is supported by database management programs for soil, weather, and crop management and experimental data, and by utilities and application programs. The latest version can be obtained free at: http://www.dssat.net. The WaSSI model is an integrated, process-based model that was originally developed by the U.S. Forest Service. More information is available at: http://www.forestthreats.org/tools/WaSSI.
An integrated crop and hydrologic modeling system to estimate hydrologic impacts of crop irrigation demands
S1364815214003521
System dynamics (SD) is an effective approach for helping reveal the temporal behavior of complex systems. Although there have been recent developments in expanding SD to include systems’ spatial dependencies, most applications have been restricted to the simulation of diffusion processes; this is especially true for models on structural change (e.g. LULC modeling). To address this shortcoming, a Python program is proposed to tightly couple SD software to a Geographic Information System (GIS). The approach provides the required capacities for handling bidirectional and synchronized interactions of operations between SD and GIS. In order to illustrate the concept and the techniques proposed for simulating structural changes, a fictitious environment called Daisyworld has been recreated in a spatial system dynamics (SSD) environment. The comparison of spatial and non-spatial simulations emphasizes the importance of considering spatio-temporal feedbacks. Finally, practical applications of structural change models in agriculture and disaster management are proposed.
Modeling structural change in spatial system dynamics: A Daisyworld example
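A toy sketch of the kind of tight, synchronized, bidirectional exchange the abstract above describes: at every time step a lumped system-dynamics stock is updated from a spatial aggregate of a raster, and the raster is updated from the stock. The numpy array standing in for a GIS layer, the vegetation-cover variables and the rate constants are all assumptions; the paper couples SD software to a real GIS through a Python bridge rather than operating on in-memory arrays.

```python
import numpy as np

rng = np.random.default_rng(0)
veg = rng.uniform(0.2, 0.8, size=(50, 50))   # spatial layer: fractional vegetation cover
stock_biomass = veg.sum()                    # SD stock initialised from the map

dt, growth, spread = 1.0, 0.02, 0.1
for step in range(100):
    # GIS -> SD: a spatial aggregate feeds the stock's growth rate
    mean_cover = veg.mean()
    stock_biomass += dt * growth * stock_biomass * (1.0 - mean_cover)
    # SD -> GIS: the stock change is redistributed spatially (neighbour smoothing)
    smoothed = (np.roll(veg, 1, 0) + np.roll(veg, -1, 0)
                + np.roll(veg, 1, 1) + np.roll(veg, -1, 1)) / 4.0
    veg = np.clip(veg + spread * (smoothed - veg)
                  + (stock_biomass - veg.sum()) / veg.size, 0.0, 1.0)
```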
S1364815214003648
The aim of this paper is to present a novel monotone upstream scheme for conservation laws (MUSCL) on unstructured grids. The novel edge-based MUSCL scheme is devised to construct the required values at the midpoint of cell edges in a more straightforward and effective way compared to other conventional approaches, by making better use of the geometrical properties of the triangular grids. The scheme is incorporated into a two-dimensional (2D) cell-centered Godunov-type finite volume model as proposed in Hou et al. (2013a,c) to solve the shallow water equations (SWEs). The MUSCL scheme enables the model to preserve the well-balanced property and to achieve high accuracy and efficiency for shallow flow simulations over uneven terrains. Furthermore, the scheme is directly applicable to all triangular grids. Application to several numerical experiments verifies the efficiency and robustness of the current new MUSCL scheme.
An efficient unstructured MUSCL scheme for solving the 2D shallow water equations
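For readers unfamiliar with MUSCL reconstruction, the sketch below shows the idea in its simplest setting: one-dimensional linear advection on a structured, periodic grid, with piecewise-linear reconstruction, a minmod slope limiter, an upwind flux and a forward-Euler update. This is only a didactic analogue under assumed parameters, not the paper's edge-based scheme for the 2D shallow water equations on unstructured triangular grids.

```python
# 1D linear advection, du/dt + a du/dx = 0, on a periodic grid.
import numpy as np

def minmod(a, b):
    # Limited slope: zero at extrema, otherwise the smaller one-sided difference.
    return np.where(a * b > 0.0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

nx, a, cfl = 200, 1.0, 0.5
dx = 1.0 / nx
dt = cfl * dx / a
x = (np.arange(nx) + 0.5) * dx
u = np.where((x > 0.3) & (x < 0.5), 1.0, 0.0)       # square pulse initial condition

for _ in range(int(0.3 / dt)):
    um, up = np.roll(u, 1), np.roll(u, -1)          # periodic neighbours
    slope = minmod(u - um, up - u)                  # MUSCL: limited cell slope
    u_face = u + 0.5 * slope                        # reconstructed right-face value
    flux = a * u_face                               # upwind flux (a > 0)
    u = u - dt / dx * (flux - np.roll(flux, 1))     # finite-volume forward-Euler update
```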
S1364815214003685
The ability of biogeochemical ecosystem models to represent agro-ecosystems depends on their correct integration with field observations. We report simultaneous calibration of 67 DayCent model parameters using multiple observation types through inverse modeling using the PEST parameter estimation software. Parameter estimation reduced the total sum of weighted squared residuals by 56% and improved model fit to crop productivity, soil carbon, volumetric soil water content, soil temperature, N2O, and soil NO3− compared to the default simulation. Inverse modeling substantially reduced predictive model error relative to the default model for all model predictions, except for soil NO3− and NH4+. Post-processing analyses provided insights into parameter–observation relationships based on parameter correlations, sensitivity and identifiability. Inverse modeling tools are shown to be a powerful way to systematize and accelerate the process of biogeochemical model interrogation, improving our understanding of model function and the underlying ecosystem biogeochemical processes that they represent. DayCent model W. J. Parton, S. J. Del Grosso, S. Ogle, K. Paustian, Natural Resource Ecology Laboratory, Colorado State University, Fort Collins, CO, USA (970) 491-2195 [email protected] 1998 PC with at least 512 K of RAM. A graphics adapter (CGA, EGA, VGA, or Hercules monographic) and 2 Mb of disk space are recommended. Windows Available on request; Free PEST version 13.0 John Doherty Watermark Numerical Computing, 336 Cliveden Avenue, Corinda 4075, Australia 07 3379 1664 [email protected] 1994 Desktop or Laptop Windows or Linux download from: http://www.pesthomepage.org/Downloads.php; Free.
Understanding the DayCent model: Calibration, sensitivity, and identifiability through inverse modeling
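A minimal sketch of the inverse-modelling workflow discussed above, using a toy exponential decay model in place of DayCent and scipy's least-squares solver in place of PEST. The model, parameter values, observation weights and the Jacobian-based covariance approximation are assumptions intended only to illustrate weighted-residual calibration and a crude identifiability check.

```python
import numpy as np
from scipy.optimize import least_squares

t = np.linspace(0.0, 10.0, 40)

def toy_model(theta, t):
    k, c0 = theta
    return c0 * np.exp(-k * t)                 # e.g. a decaying soil carbon pool

rng = np.random.default_rng(3)
obs = toy_model([0.35, 20.0], t) + rng.normal(0.0, 0.5, t.size)
weight = 1.0 / 0.5                             # inverse of assumed observation error

def residuals(theta):
    return weight * (toy_model(theta, t) - obs)

fit = least_squares(residuals, x0=[0.1, 10.0], bounds=([0.0, 0.0], [5.0, 100.0]))
print("estimated parameters:", fit.x)

# Crude identifiability check: covariance approximated from the Jacobian at the
# optimum, (J^T J)^-1 scaled by the residual variance (2 * cost / dof).
dof = t.size - fit.x.size
cov = np.linalg.inv(fit.jac.T @ fit.jac) * (2.0 * fit.cost / dof)
print("approximate standard errors:", np.sqrt(np.diag(cov)))
```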
S136481521400379X
This paper describes the development of a model for assessing TRAffic Noise EXposure (TRANEX) in an open-source geographic information system. Instead of using proprietary software we developed our own model for two main reasons: 1) so that the treatment of source geometry, traffic information (flows/speeds/spatially varying diurnal traffic profiles) and receptors matched as closely as possible to that of the air pollution modelling being undertaken in the TRAFFIC project, and 2) to optimize model performance for practical reasons of needing to implement a noise model with detailed source geometry, over a large geographical area, to produce noise estimates at up to several million address locations, with limited computing resources. To evaluate TRANEX, noise estimates were compared with noise measurements made in the British cities of Leicester and Norwich. High correlation was seen between modelled and measured LAeq,1hr (Norwich: r = 0.85, p = .000; Leicester: r = 0.95, p = .000) with average model errors of 3.1 dB. TRANEX was used to estimate noise exposures (LAeq,1hr, LAeq,16hr, Lnight) for the resident population of London (2003–2010). Results suggest that 1.03 million (12%) people are exposed to daytime road traffic noise levels ≥ 65 dB(A) and 1.63 million (19%) people are exposed to night-time road traffic noise levels ≥ 55 dB(A). Differences in noise levels between 2010 and 2003 were on average relatively small: 0.25 dB (standard deviation: 0.89) and 0.26 dB (standard deviation: 0.87) for LAeq,16hr and Lnight. The noise model (TRANEX) was implemented in R to call functions from PostgreSQL and GRASS GIS packages and can be obtained from the corresponding author or the following website: http://www.sahsu.org/content/data-download; first available in July 2014; TRANEX requires at least one standard desktop PC.
Development of an open-source road traffic noise model for exposure assessment
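The noise metrics quoted above (LAeq,16hr and Lnight) are aggregates of hourly levels obtained by energy (logarithmic) averaging. The sketch below shows that standard aggregation for a made-up set of 24 hourly LAeq,1hr values; the averaging formula is general acoustics practice and is not specific to TRANEX.

```python
import numpy as np

def leq(levels_db):
    """Energy-average a set of equal-duration LAeq values (dB)."""
    return 10.0 * np.log10(np.mean(10.0 ** (np.asarray(levels_db) / 10.0)))

hourly = np.array([55, 54, 53, 52, 52, 54, 58, 63, 66, 65, 64, 64,
                   64, 64, 65, 66, 67, 68, 66, 63, 61, 59, 57, 56])  # hours 0..23

laeq_16hr = leq(hourly[7:23])                    # daytime 07:00-23:00
lnight = leq(np.r_[hourly[23:], hourly[:7]])     # night-time 23:00-07:00
print(f"LAeq,16hr = {laeq_16hr:.1f} dB, Lnight = {lnight:.1f} dB")
```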
S1364815215000225
Evolutionary Algorithms (EAs) have been widely employed to solve water resources problems for nearly two decades with much success. However, recent research in hyperheuristics has raised the possibility of developing optimisers that adapt to the characteristics of the problem being solved. In order to select appropriate operators for such optimisers it is necessary to first understand the interaction between operator and problem. This paper explores the concept of EA operator behaviour in real world applications through the empirical study of performance using water distribution networks (WDN) as a case study. Artificial networks are created to embody specific WDN features which are then used to evaluate the impact of network features on operator performance. The method extracts key attributes of the problem which are encapsulated in the natural features of a WDN, such as topologies and assets, on which different EA operators can be tested. The method is demonstrated using small exemplar networks designed specifically so that they isolate individual features. A set of operators are tested on these artificial networks and their behaviour characterised. This process provides a systematic and quantitative approach to establishing detailed information about an algorithm's suitability to optimise certain types of problem. The experiment is then repeated on real-world inspired networks and the results are shown to fit with the expected results.
An analysis of the interface between evolutionary algorithm operators and problem features for water resources problems. A case study in water distribution network design
S1364815215000237
Variance-based approaches are widely used for Global Sensitivity Analysis (GSA) of environmental models. However, methods that consider the entire Probability Density Function (PDF) of the model output, rather than its variance only, are preferable in cases where variance is not an adequate proxy of uncertainty, e.g. when the output distribution is highly-skewed or when it is multi-modal. Still, the adoption of density-based methods has been limited so far, possibly because they are relatively more difficult to implement. Here we present a novel GSA method, called PAWN, to efficiently compute density-based sensitivity indices. The key idea is to characterise output distributions by their Cumulative Distribution Functions (CDF), which are easier to derive than PDFs. We discuss and demonstrate the advantages of PAWN through applications to numerical and environmental modelling examples. We expect PAWN to increase the application of density-based approaches and to be a complementary approach to variance-based GSA.
A simple and efficient method for global sensitivity analysis based on cumulative distribution functions
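A minimal sketch of a PAWN-style, CDF-based sensitivity index: compare the unconditional output distribution with distributions obtained while fixing one input at a time, using the Kolmogorov-Smirnov distance, and summarise over conditioning values with a maximum. The sample sizes, slicing scheme and toy model are assumptions; the published method specifies these choices, and their convergence, more carefully.

```python
import numpy as np
from scipy.stats import ks_2samp

def pawn_index(model, dim, n_unc=2000, n_cond=200, n_slices=10, seed=0):
    rng = np.random.default_rng(seed)
    y_unc = np.array([model(x) for x in rng.uniform(0.0, 1.0, (n_unc, dim))])
    index = np.zeros(dim)
    for i in range(dim):
        ks = []
        for xi in np.linspace(0.05, 0.95, n_slices):   # conditioning values for input i
            X = rng.uniform(0.0, 1.0, (n_cond, dim))
            X[:, i] = xi                               # fix input i
            y_cond = np.array([model(x) for x in X])
            ks.append(ks_2samp(y_unc, y_cond).statistic)
        index[i] = np.max(ks)                          # PAWN-style summary statistic
    return index

toy = lambda x: x[0] ** 2 + 0.5 * x[1] + 0.01 * x[2]
print(pawn_index(toy, dim=3))
```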
S136481521500047X
Predicting the probability of wind damage in both natural and managed forests is important for understanding forest ecosystem functioning, the environmental impact of storms and for forest risk management. We undertook a thorough validation of three versions of the hybrid-mechanistic wind risk model, ForestGALES, and a statistical logistic regression model, against observed damage in a Scottish upland conifer forest following a major storm. Statistical analysis demonstrated that increasing tree height and local wind speed during the storm were the main factors associated with increased damage levels. All models provided acceptable discrimination between damaged and undamaged forest stands but there were trade-offs between the accuracy of the mechanistic models and model bias. The two versions of the mechanistic model with the lowest bias gave very comparable overall results at the forest scale and could form part of a decision support system for managing forest wind damage risk. Maximum width of canopy (m) Length of the live crown (m) Drag coefficient scale parameter Drag coefficient (percentage reduction in canopy area due to streamlining) Regression between stem weight (SW) and resistance to overturning (Nm kg−1) Critical wind speed for damage (m s−1) Zero-plane displacement (m) Stem diameter at base of tree (m) Stem diameter at breast height (1.3 m) (m) Average spacing between trees (m) Windiness score from Quine and White (1993) Dimensionless factor to account for additional turning moment due to crown and stem weight Dimensionless factor to account for reduction in clear wood MOR due to knots Dimensionless factor to account for gustiness of wind Tree height (m) von Karman constant = 0.4 Maximum turning moment due to wind loading only and not including additional moment due to overhanging crown and stem (Nm) Modulus of rupture on wood for species of interest (Pa) Parameter controlling reduction in drag coefficient with wind speed Density of air (kg m−3) Forestry Commission sub-compartment database Ratio of average tree spacing after and before a thinning Stem (bole) weight (kg) Turning moment coefficient from Hale et al. (2012) (kg) Ratio of turning moment coefficient after and before thinning Wind speed at 10 m above the zero plane displacement height (m s−1) Wind speed at tree height (m s−1) Friction velocity (m s−1) Weibull scale parameter (m s−1) Weibull shape parameter (dimensionless) Wind speed calculated from DAMS score (m s−1) Wind speed calculated from WAsP airflow model (m s−1) Wind speed at meteorological station (m s−1) Distance from forest edge (m) Yield class (m3 ha−1 yr−1) Aerodynamic roughness (m) Name of software: ForestGALES Developers: Forest Research and INRA Contact address: Forest Research, Northern Research Station, Roslin, Midlothian EH25 9SY, United Kingdom Email: [email protected] Availability and Online Documentation: The software along with supporting material is freely available. Go to http://www.forestresearch.gov.uk/forestgales to find out how to obtain the software or email [email protected] Year first available: 2000 Hardware required: IBM compatible PC Software required: MS Windows Programming language: Borland Delphi 5.0®. Versions have also been written in Python, Fortran, R and Java. Contact the corresponding author ([email protected]) for further details. Program size: 10 MB. With all additional support files and manuals = 25 MB.
Comparison and validation of three versions of a forest wind risk model
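One ingredient of wind-risk models of this kind is converting a stand's critical wind speed into a probability of damaging winds given a Weibull description of the local wind climate (the Weibull scale and shape parameters appear in the nomenclature above). The sketch below shows that conversion with made-up numbers and a simplistic independence assumption between hours; it is not the ForestGALES calculation itself.

```python
import math

A, k = 6.5, 1.85          # assumed Weibull scale (m/s) and shape for hourly mean wind
cws = 25.0                # assumed critical wind speed for overturning/breakage (m/s)
hours_per_year = 8760

p_hour = math.exp(-(cws / A) ** k)               # Weibull exceedance in any one hour
p_year = 1.0 - (1.0 - p_hour) ** hours_per_year  # at least one damaging hour per year
print(f"P(exceed CWS in a given hour) = {p_hour:.2e}")
print(f"Annual probability of damage  = {p_year:.3f}")
```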
S1364815215000511
This paper details a strategy for modifying the source code of a complex model so that the model may be used in a data assimilation context, and gives the standards for implementing a data assimilation code to use such a model. The strategy relies on keeping the model separate from any data assimilation code, and coupling the two through the use of Message Passing Interface (MPI) functionality. This strategy limits the changes necessary to the model and as such is rapid to program, at the expense of ultimate performance. The implementation technique is applied in different models with state dimension up to 2.7 × 10⁸. The overheads added by using this implementation strategy in a coupled ocean-atmosphere climate model are shown to be an order of magnitude smaller than the addition of correlated stochastic random errors necessary for some nonlinear data assimilation techniques.
A simple method for integrating a complex model into an ensemble data assimilation system using MPI
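A toy sketch of the coupling strategy described above: the model process runs largely unchanged and, at assimilation times, exchanges its state vector with a separate data-assimilation process via MPI. The ranks, tags, tiny state size and the mock analysis increment are assumptions for illustration, not the paper's implementation or standards. Run with, for example, `mpiexec -n 2 python couple.py`.

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
n = 1000                                    # state dimension (tiny here)

if rank == 0:                               # "model" process
    state = np.random.rand(n)
    for step in range(10):
        state = 0.99 * state + 0.01         # stand-in for the model time step
        if step % 5 == 4:                   # assimilation time reached
            comm.Send(state, dest=1, tag=step)
            comm.Recv(state, source=1, tag=step)
elif rank == 1:                             # "data assimilation" process
    buf = np.empty(n)
    for step in (4, 9):
        comm.Recv(buf, source=0, tag=step)
        buf += 0.05 * (0.5 - buf)           # stand-in for the analysis increment
        comm.Send(buf, dest=0, tag=step)
```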
S1364815215000614
Anthropogenic impacts on the aquatic environment, especially in the context of nutrients, provide a major challenge for water resource management. The heterogeneous nature of policy relevant management units (e.g. catchments), in terms of environmental controls on nutrient source and transport, leads to the need for holistic management. However, current strategies are limited by current understanding and knowledge that is transferable between spatial scales and landscape typologies. This study presents a spatially-explicit framework to support the modelling of nutrients from land to water, encompassing environmental and spatial complexities. The framework recognises nine homogeneous landscape units, distinct in terms of sensitivity of nutrient losses to waterbodies. The functionality of the framework is demonstrated by supporting an exemplar nutrient model, applied within the Environmental Virtual Observatory pilot (EVOp) cloud cyber-infrastructure. We demonstrate scope for the use of the framework as a management decision support tool and for further development of integrated biogeochemical modelling. Geospatial framework to support integrated biogeochemical modelling in the United Kingdom Greene, S.a, Johnes, P.J., Bloomfield, J.P., Reaney, S.M., Lawley, R., Elkhatib, Y., Freer, J., Odoni, N., MacLeod, C.J.A., Percy, B. Address: aCentre for Ecology & Hydrology, Maclean Building, Benson Lane, Crowmarsh Gifford, Wallingford, Oxfordshire, OX10 8BB, United Kingdom [email protected] +44 (0) 1491 692495 +44 (0) 1491 692424 2012 Any GIS enabled device GIS software ESRI shapefile (vector polygon) 585 MB Contact author directly at [email protected]
A geospatial framework to support integrated biogeochemical modelling in the United Kingdom
S1364815215000808
Long-term exposure to fine particulate matter (PM2.5) has been shown to have significant negative impacts on human health. It is estimated that current levels of air pollution shorten the statistical life expectancy of European citizens by several months. The GAINS integrated assessment model calculates shortening of life expectancy from population exposure to PM2.5 using epidemiologically-derived health impact functions. In addition, GAINS estimates PM2.5 concentrations at 1875 air quality monitoring stations located in diverse environments ranging from remote background locations to busy street canyons. In this article, different approaches to dealing with the PM2.5 pollution problem are compared. We assess for the present and future the attainment of EU and WHO air quality standards for PM2.5 and estimate the loss of life expectancy under different policy scenarios developed for the ongoing revision of the EU Air Quality Legislation.
Modelling PM2.5 impact indicators in Europe: Health effects and legal compliance
S1364815215001115
It is common for in situ hydrologic and water quality data to be collected at high frequencies and for extended durations. These data streams, which may also be collected across many monitoring sites, require infrastructure for data storage and management. The Observations Data Model (ODM), which is part of the Consortium of Universities for the Advancement of Hydrologic Science, Inc. (CUAHSI) Hydrologic Information System (HIS), was developed as a standard data model in which to organize, store, and describe point observations data. In this paper we describe ODM Tools Python, an open source software application that allows users to query and export, visualize, and perform quality control post-processing on time series of environmental observations data stored in an ODM database. Its automated Python scripting records the corrections and adjustments made to data series in the quality control process and ensures that data editing steps are traceable and reproducible. ODM Tools Python Jeffery S. Horsburgh, Stephanie L. Reeder, Amber Spackman Jones, Jacob Meline, and James Patton [email protected] 2014 A personal computer Microsoft Windows, Mac OSX, or Linux operating system All source code, installers, example ODM databases, and documentation for the ODM Tools Python software application can be accessed at https://github.com/UCHIC/ODMToolsPython. Free. Software and source code are released under the New Berkeley Software Distribution (BSD) License, which allows for liberal reuse of the software and code.
Open source software for visualization and quality control of continuous hydrologic and water quality sensor data
S1364815215001188
Global Sensitivity Analysis (GSA) is increasingly used in the development and assessment of environmental models. Here we present a Matlab/Octave toolbox for the application of GSA, called SAFE (Sensitivity Analysis For Everybody). It implements several established GSA methods and allows for easily integrating others. All methods implemented in SAFE support the assessment of the robustness and convergence of sensitivity indices. Furthermore, SAFE includes numerous visualisation tools for the effective investigation and communication of GSA results. The toolbox is designed to make GSA accessible to non-specialist users, and to provide a fully commented code for more experienced users to complement their own tools. The documentation includes a set of workflow scripts with practical guidelines on how to apply GSA and how to use the toolbox. SAFE is open source and freely available for academic and non-commercial purpose. Ultimately, SAFE aims at contributing towards improving the diffusion and quality of GSA practice in the environmental modelling community.
A Matlab toolbox for Global Sensitivity Analysis
S136481521500167X
Sensors are becoming ubiquitous in everyday life, generating data at an unprecedented rate and scale. However, models that assess impacts of human activities on environmental and human health have typically been developed in contexts where data scarcity is the norm. Models are essential tools to understand processes, identify relationships, associations and causality, formalize stakeholder mental models, and quantify the effects of prevention and interventions. They can help to explain data, as well as inform the deployment and location of sensors by identifying hotspots and areas of interest where data collection may achieve the best results. We identify a paradigm shift in how the integration of models and sensors can contribute to harnessing ‘Big Data’ and, more importantly, make the vital step from ‘Big Data’ to ‘Big Information’. In this paper, we illustrate current developments and identify key research needs using human and environmental health challenges as an example.
Integrating modelling and smart sensors for environmental and human health
S1364815215001723
This paper explores how meta-studies can support the development of process-based land change models (LCMs) that can be applied across locations and scales. We describe a multi-step framework for model development and provide descriptions and examples of how meta-studies can be used in each step. We conclude that meta-studies best support the conceptualization and experimentation phases of the model development cycle, but cannot typically provide full model parameterizations. Moreover, meta-studies are particularly useful for developing agent-based LCMs that can be applied across a wide range of contexts, locations, and/or scales, because meta-studies provide both quantitative and qualitative data needed to derive agent behaviors more readily than from case study or aggregate data sources alone. Recent land change synthesis studies provide sufficient topical breadth and depth to support the development of broadly applicable process-based LCMs, as well as the potential to accelerate the production of generalized knowledge through model-driven synthesis.
From meta-studies to modeling: Using synthesis knowledge to build broadly applicable process-based land change models
S1364815215001747
Effective conservation planning relies on the accurate identification of anthropogenic land cover. However, accessing localized information can be difficult or impossible in developing countries. Additionally, global medium-resolution land use/land cover datasets may be insufficient for conservation planning purposes at the scale of a country or smaller. We thus introduce a new tool, GE Grids, to bridge this gap. This tool creates an interactive user-specified binary grid laid over Google Earth's high-resolution imagery. Using GE Grids, we manually identified anthropogenic land conversion across East Africa and compared this against available land cover datasets. Nearly 30% of East Africa is converted to anthropogenic land cover. The two highest-resolution comparative datasets have the greatest agreement with our own at the regional extent, despite having as low as 44% agreement at the country level. We achieved 83% consistency among users. GE Grids is intended to complement existing remote sensing datasets at local scales. GE Grids is a web application written in JavaScript using the Google Earth application programming interface (API), which is freely available from Google. The program requires a web browser, the Google Earth plug-in and internet connectivity. The codebase is maintained and can be downloaded as a zip file from http://andrewstanish.com/files/GERasterCreator.zip. The zip file contains a .html file, accessory files, and a ReadMe file. Use the ReadMe file for suggestions on program instruction and notes on Google's Terms of Service. GE Grids is free, regulated under the GNU General Public License v3 (http://www.gnu.org/copyleft/gpl.html) and intended for further open-source development. The developer is Andrew Stanish ([email protected]).
A novel approach to mapping land conversion using Google Earth with an application to East Africa
S1364815215300086
In making the decision whether to use component-based modeling, its benefits must be balanced against computational costs. Studies evaluating these costs using the Open Modeling Interface (OpenMI) have largely used models with simplified formulations, small spatial and temporal domains, or a limited number of components. We evaluate these costs by applying OpenMI to a relatively complex Stormwater Management Model (SWMM) for the City of Logan, Utah, USA. Configurations of coupled OpenMI components resulting from decomposing the stormwater model by process (i.e., runoff coupled to routing) and then by space (i.e., groups of catchments coupled together) were compared to a reference model executed in the standard SWMM configuration. Simulation times increased linearly with the number of connections between components, and mass balance error was a function of the degree to which a component resolved time series data received. This study also examines and proposes some strategies to address these computational costs. We developed the SWMMOpenMIComponent (a C# SWMM Component), SWMMOpenMINoGlobals (a modified, native C SWMM computational engine code underlying the component), and a modified OpenMI C# project. We forked the OpenMI 2.0 C# project, including the Software Development Kit (SDK), the command line interface, and the OpenMI Configuration Editor found at http://sourceforge.net/p/openmi/code/HEAD/tree/trunk/src/csharp for this study. In addition to implementing minor bug fixes to ensure that the code compiled, we implemented a new graphical user interface for creating connections with chained adapters in accordance with the OpenMI 2.0 specification, a new simulation monitoring dialog, and fixed the project file reading and writing classes to ensure that connections with adapters are read and written properly. Caleb A. Buahin [email protected] 2015 PC running Microsoft Windows The C# SWMM Component (SWMMOpenMIComponent) and its underlying modified native C SWMM library (SWMMOpenMINoGlobals) are freely available under the GNU Lesser General Public License (LGPL) license at https://github.com/cbuahin/SWMMOpenMIComponent. The source code for the modified version of the OpenMI C# project can be found at https://github.com/cbuahin/OpenMI under the LGPL license.
Evaluating the simulation times and mass balance errors of component-based models: An application of OpenMI 2.0 to an urban stormwater system
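The mass-balance finding above can be illustrated with a small experiment: if a receiving component interprets a finely resolved flow series by sampling the instantaneous value at each coarser exchange step, the volume it accumulates drifts away from the reference integral as the step grows. The synthetic hydrograph and step sizes below are assumptions, not the Logan SWMM case study.

```python
import numpy as np

t_fine = np.arange(0.0, 24 * 3600.0, 60.0)                      # 1-minute series (s)
flow = 0.5 + 4.0 * np.exp(-((t_fine - 6 * 3600.0) / 3600.0) ** 2)  # storm hydrograph (m3/s)
volume_ref = np.trapz(flow, t_fine)                             # reference volume (m3)

for dt in (300.0, 900.0, 3600.0):                               # coarser exchange steps
    t_coarse = np.arange(0.0, 24 * 3600.0, dt)
    sampled = np.interp(t_coarse, t_fine, flow)                 # value passed at each step
    volume = np.sum(sampled * dt)                               # receiver's accumulated volume
    err = 100.0 * (volume - volume_ref) / volume_ref
    print(f"dt = {dt:6.0f} s  ->  mass balance error = {err:+.2f} %")
```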
S1364815215300190
This study introduces a new open source software framework to support bottom-up environmental systems planning under deep uncertainty with a focus on many-objective robust decision making (MORDM), called OpenMORDM. OpenMORDM contains two complementary components: (1) a software application programming interface (API) for connecting planning models to computational exploration tools for many-objective optimization and sensitivity-based discovery of critical deeply uncertain factors; and (2) a web-based visualization toolkit for exploring high-dimensional datasets to better understand system trade-offs, vulnerabilities, and dependencies. We demonstrate the OpenMORDM framework on a challenging environmental management test case termed the “lake problem”. The lake problem has been used extensively in the prior environmental decision science literature and, in this study, captures the challenges posed by conflicting economic and environmental objectives, a water quality “tipping point” beyond which the lake may become irreversibly polluted, and multiple deeply uncertain factors that may undermine the robustness of pollution management policies. The OpenMORDM software framework enables decision makers to identify policy-relevant scenarios, quantify the trade-offs between alternative strategies in different scenarios, flexibly explore alternative definitions of robustness, and identify key system factors that should be monitored as triggers for future actions or additional planning. The web-based OpenMORDM visualization toolkit allows decision makers to easily share and visualize their datasets, with the option for analysts to extend the framework with customized scripts in the R programming language. OpenMORDM provides a platform for constructive decision support, allowing analysts and decision makers to interactively discover promising alternatives and potential vulnerabilities while balancing conflicting objectives. • Name of Software: OpenMORDM Description: OpenMORDM is an open-source R library for multiobjective robust decision making (MORDM). It includes support for loading datasets from a number of sources including CSV, XLS, XLSX, databases, and R matrices and data frames; visualizing the data sets using various 2D and 3D plots; performing scenario discovery and trade-off analysis; and computing uncertainty/robustness metrics. OpenMORDM also includes a web-based data exploration and visualization toolkit. Developer: D. Hadka ([email protected]) with contributions by P. Reed and K. Keller. Funding Source: Development was partially supported by the National Science Foundation through the Network for Sustainable Climate Risk Management (SCRiM) under NSF cooperative agreement GEO-1240507 as well as the Penn State Center for Climate Risk Management. Source Language: R Supported Systems: Unix, Linux, Windows, Mac License: GNU General Public License, Version 3 Availability: http://github.com/dhadka/OpenMORDM
An open source framework for many-objective robust decision making
S1364815215300220
Estimation of the direction and magnitude of trends in surface water quality remains a problem of great scientific and practical interest. The Weighted Regressions on Time, Discharge, and Season (WRTDS) method was recently introduced as an exploratory data analysis tool to provide flexible and robust estimates of water quality trends. This paper enhances the WRTDS method through the introduction of the WRTDS Bootstrap Test (WBT), an extension of WRTDS that quantifies the uncertainty in WRTDS-estimates of water quality trends and offers various ways to visualize and communicate these uncertainties. Monte Carlo experiments are applied to estimate the Type I error probabilities for this method. WBT is compared to other water-quality trend-testing methods appropriate for data sets of one to three decades in length with sampling frequencies of 6–24 observations per year. The software to conduct the test is in the EGRETci R-package. The statistical procedures presented here are all based on the Weighted Regressions on Time, Discharge, and Season (WRTDS) approach to water quality data analysis. The WRTDS is implemented in the EGRET (Exploration and Graphics for RivEr Trends), R-package (open source) available from the Comprehensive R Archive Network http://cran.r-project.org/web/packages/. The new software that implements the WRTDS Bootstrap Test (WBT) described in this paper is also an R-package called EGRETci, also available from the Comprehensive R Archive Network.
A bootstrap method for estimating uncertainty of water quality trends
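The WBT itself block-bootstraps the full WRTDS model (see the EGRETci documentation for the actual procedure); as a much simpler illustration of how a bootstrap turns a single trend estimate into an uncertainty statement, here is a pairs bootstrap of a linear trend in synthetic annual concentrations. The data and the trend model are invented and are not the WRTDS method.

```python
import numpy as np

rng = np.random.default_rng(42)
years = np.arange(1990, 2015)
conc = 2.0 - 0.02 * (years - years[0]) + rng.normal(0.0, 0.3, years.size)  # synthetic

def trend_slope(x, y):
    # Ordinary least-squares slope of concentration against year.
    return np.polyfit(x, y, 1)[0]

# Pairs bootstrap: resample (year, concentration) pairs with replacement.
boot = np.array([
    trend_slope(years[idx], conc[idx])
    for idx in (rng.integers(0, years.size, years.size) for _ in range(2000))
])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"slope = {trend_slope(years, conc):.3f} per year, 95% CI = [{lo:.3f}, {hi:.3f}]")
print(f"bootstrap probability of a downward trend: {(boot < 0).mean():.2f}")
```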
S1364815215300268
As sea-level rises, the frequency of coastal marine flooding events is changing. For accurate assessments, several other factors must be considered as well, such as the variability of sea-level rise and storm surge patterns. Here, a global sensitivity analysis is used to provide quantitative insight into the relative importance of contributing uncertainties over the coming decades. The method is applied to an urban low-lying coastal site located in the north-western Mediterranean, where the yearly probability of damaging flooding could grow drastically after 2050 if sea-level rise follows IPCC projections. Storm surge propagation processes, then sea-level variability, and, later, global sea-level rise scenarios successively become important sources of uncertainty over the 21st century. This defines research priorities that depend on the target period of interest. In the long term, scenarios RCP 6.0 and 8.5 challenge the local adaptation capacities of the considered site.
Evaluating uncertainties of future marine flooding occurrence as sea-level rises
S1364815215300463
A dynamic landscape evolution modelling platform (CLiDE) is presented that allows a variety of Earth system interactions to be explored under differing environmental forcing factors. Representation of distributed surface and subsurface hydrology within CLiDE is suited to simulation at sub-annual to centennial time-scales. In this study the hydrological components of CLiDE are evaluated against analytical solutions and recorded datasets. The impact of differing groundwater regimes on sediment discharge is examined for a simple, idealised catchment. Sediment discharge is found to be a function of the evolving catchment morphology. Application of CLiDE to the upper Eden Valley catchment, UK, suggests that the addition of baseflow return from groundwater into the fluvial system modifies the total catchment sediment discharge and the spatio-temporal distribution of sediment fluxes during storm events. The occurrence of a storm following a period of appreciable antecedent rainfall is found to increase simulated sediment fluxes. CLiDE Andrew Barkwith British Geological Survey, Environmental Science Centre, Keyworth, Nottingham, NG12 5GG, UK. E-mail: [email protected] 2013 Windows C# GNU licensed freeware
Simulating the influences of groundwater on regional geomorphology using a distributed, dynamic, landscape evolution modelling platform
S1364815215300505
Air quality models are often used to simulate how emission scenarios influence the concentration of primary as well as secondary pollutants in the atmosphere. In some cases, it is necessary to replace these air quality models with source–receptor relationships, to mimic the link between emissions and concentrations in a faster way. Source–receptor relationships are therefore also used in Integrated Assessment Models, when scenario responses need to be known in a very short time. The objective of this work is to present a novel approach to design a source–receptor relationship for air quality modeling. Overall the proposed approach is shown to significantly reduce the number of simulations required for the training step and to bring flexibility in terms of emission source definition. A regional domain application is also presented, to test the performance of the proposed approach.
A new approach to design source–receptor relationships for air quality modelling
S1364815215300529
Uncertainty in operational hydrological forecast systems forced with numerical weather predictions is often assessed by quantifying the uncertainty from the inputs only. However, part of the uncertainty in modelled discharge stems from the hydrological model. A multi-model system can account for some of this uncertainty, but there exists a plethora of hydrological models and it is not trivial to select those that fit specific needs and collectively capture a representative spread of model uncertainty. This paper presents a technical review of 24 large-scale models to provide guidance for model selection. Suitability for the European Flood Awareness System (EFAS), as an example of an operational continental flood forecasting system, is discussed based on process descriptions, flexibility in resolution, input data requirements, availability of code and more. The model choice is in the end subjective, but this review intends to objectively assist in selecting the most appropriate model for the intended purpose.
Technical review of large-scale hydrological models for implementation in operational flood forecasting schemes on continental level
S1364815215300682
In this paper a novel hydrodynamic wastewater treatment (WWT) model based on smoothed particle hydrodynamics (SPH) is presented. The hydraulics of the wastewater treatment plant is modelled in detail with SPH. The SPH solver is coupled to the activated sludge model such that the influence on biokinetic processes is described. The key innovation of the present WWT model is that both the biokinetics and the wastewater hydraulics are simultaneously solved for non-steady flows. After validating the present method against the software ASIM 5, the capabilities are demonstrated for a full-scale treatment plant simulation. We investigate the stirrer- and aeration-induced mixing within the reactor compartments as well as the resulting concentrations of the biokinetic compounds. Following the establishment of a local coupling between the hydraulics and the biokinetics, the biokinetic concentrations within a treatment plant can be spatially resolved at high resolution. SPHASE – Smoothed Particle Hydrodynamics Activated Sludge Engine Gregor Burger, Michael Meister, Daniel Winkler Michael Meister, Unit of Environmental Engineering, University of Innsbruck, Technikerstrasse 13, 6020 Innsbruck, Austria, [email protected] on application to the author 2015 PC with multi-core processor, optional: CUDA compatible GPU Linux, Mac OS X, Windows C++ command line, graphical user interface. Nomenclature (index symbols omitted): (constant), reference particle index, neighbouring particle index, reactor index, biokinetic compound index.
Wastewater treatment modelling with smoothed particle hydrodynamics
S1364815215300724
Urban cellular automata (CA) models are broadly used in quantitative analyses and predictions of urban land-use dynamics. However, most urban CA developed with neighborhood rules consider only a small neighborhood scope under a specific spatial resolution. Here, we quantify neighborhood effects in a relatively large cellular space and analyze their role in the performance of an urban land use model. The extracted neighborhood rules were integrated into a commonly used logistic regression urban CA model (Logistic-CA), resulting in a large neighborhood urban land use model (Logistic-LNCA). Land-use simulations with both models were evaluated with urban expansion data in Xiamen City, China. Simulations with the Logistic-LNCA model raised the accuracies of built-up land by 3.0%–3.9% in two simulation periods compared with the Logistic-CA model with a 3 × 3 kernel. Parameter sensitivity analysis indicated that there was an optimal large window size in cellular space and a corresponding optimal parameter configuration.
Incorporation of extended neighborhood mechanisms and its impact on urban land-use cellular automata simulations
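The Logistic-LNCA model itself is not specified in the abstract; the sketch below only illustrates the mechanics of a large-window neighbourhood term in a logistic-CA transition rule, using a moving-average filter to compute built-up density over an extended (e.g. 15 × 15) window rather than the usual 3 × 3 kernel. The grid, coefficients and window size are invented.

```python
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(0)
built_up = (rng.random((200, 200)) < 0.15).astype(float)   # toy land-use grid (1 = built-up)

# Built-up density in a large 15 x 15 neighbourhood around every cell.
density_large = uniform_filter(built_up, size=15, mode="constant")

# Hypothetical logistic transition probability combining the neighbourhood term
# with a site-suitability score; b0, b1, b2 are invented coefficients.
suitability = rng.random((200, 200))
b0, b1, b2 = -4.0, 5.0, 2.0
p_develop = 1.0 / (1.0 + np.exp(-(b0 + b1 * density_large + b2 * suitability)))
print(p_develop.mean())
```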
S1364815215300736
We present a parsimonious agricultural land-use model that is designed to replicate global land-use change while allowing the exploration of uncertainties in input parameters. At the global scale, the modelled uncertainty range of agricultural land-use change covers observed land-use change. Spatial patterns of cropland change at the country level are simulated less satisfactorily, but temporal trends of cropland change in large agricultural nations were replicated by the model. A variance-based global sensitivity analysis showed that uncertainties in the input parameters representing consumption preferences are important for changes in global agricultural areas. However, uncertainties in technological change had the largest effect on cereal yields and changes in global agricultural area. Uncertainties related to technological change in developing countries were most important for modelling the extent of cropland. The performance of the model suggests that highly generalised representations of socio-economic processes can be used to replicate global land-use change.
Applying Occam's razor to global agricultural land use change
S1364815215300748
We discuss an on-line tool that facilitates access to the large collection of climate impacts on crop yields produced by the Agricultural Model Intercomparison and Improvement Project. This collection comprises the output of seven crop models which were run on a global grid using climate data from five different general circulation models under the current set of representative pathways. The output of this modeling endeavor consists of more than 36,000 publicly available global grids at a spatial resolution of one half degree. We offer flexible ways to aggregate these data while reducing the technical barriers implied by learning new download platforms and specialized formats. The tool is accessed through any standard web browser without any special bandwidth requirement. A tool for aggregating outputs from the AgMIP's Global Gridded Crop Model Intercomparison Project (GGCMI) is freely available at the GEOSHARE website (https://mygeohub.org/resources/agmip) using any standard Internet browser. All the programs – a Java graphical user interface (GUI) and a set of R functions – can be freely downloaded and reused. The tool is free under a GNU General Public License (www.gnu.org) agreement. Documentation and support for users include a User's Manual, as well as a set of default regional maps and weighting schemes.
Rapid aggregation of global gridded crop model outputs to facilitate cross-disciplinary analysis of climate change impacts in agriculture
S1364815215300761
Ecosystem services (ES) modeling studies typically use a forecasting approach to predict scenarios of future ES provision. Usually, these forecasts do not inform on how specific policy alternatives will influence future ES supply and whether this supply can match ES demand – important information for policy-makers in practice. Addressing these gaps, we present a multi-method backcasting approach that links normative visions with explorative land-use and ES modeling to infer land-use policy strategies for matching regional ES supply and demand. Applied to a case study, the approach develops and evaluates a variety of ES transition pathways and identifies types, combinations and timings of policy interventions that increase ES benefits. By making explicit ES sensitivity towards regional policy strategies and global boundary conditions over time, the approach allows key uncertainties involved in ES modeling studies to be addressed. Integrated backcasting modeling system BackES Sibyl H. Brunner, Adrienne Grêt-Regamey, Simon Peter, Simon Briner, Swiss Federal Institute of Technology; Robert Huber, Swiss Federal Institute for Forest, Snow and Landscape Research. The model version presented in this paper and example input data are offered free of charge from the corresponding author ([email protected]) Linear Programming Language (LPL), Virtual Optima; ILOG CPLEX Optimization Studio, IBM LPL academic license available on purchase, http://www.virtual-optima.com/en/index.html; CPLEX academic license available at no charge, http://www-01.ibm.com/software/commerce/optimization/cplex-optimizer/index.html NLOGIT 5, Econometric Software Inc. (Education license available on purchase, http://www.limdep.com/products/nlogit/); R x64 3.1.0: A language and environment for statistical computing, R Core Team, Foundation for Statistical Computing (Available at no charge, http://www.r-project.org/)
A backcasting approach for matching regional ecosystem services supply and demand
S1364815215300839
Advances in computing power and infrastructure, increases in the number and size of ecological and environmental datasets, and the number and type of data collection methods, are revolutionizing the field of Ecology. To integrate these advances, virtual laboratories offer a unique tool to facilitate, expedite, and accelerate research into the impacts of climate change on biodiversity. We introduce the uniquely cloud-based Biodiversity and Climate Change Virtual Laboratory (BCCVL), which provides access to numerous species distribution modelling tools; a large and growing collection of biological, climate, and other environmental datasets; and a variety of experiment types to conduct research into the impact of climate change on biodiversity. Users can upload and share datasets, potentially increasing collaboration, cross-fertilisation of ideas, and innovation among the user community. Feedback confirms that the BCCVL's goals of lowering the technical requirements for species distribution modelling, and reducing time spent on such research, are being met.
The Biodiversity and Climate Change Virtual Laboratory: Where ecology meets big data
S1364815215300955
Uncertainty quantification (UQ) refers to quantitative characterization and reduction of uncertainties present in computer model simulations. It is widely used in engineering and geophysics fields to assess and predict the likelihood of various outcomes. This paper describes a UQ platform called UQ-PyL (Uncertainty Quantification Python Laboratory), a flexible software platform designed to quantify uncertainty of complex dynamical models. UQ-PyL integrates different kinds of UQ methods, including experimental design, statistical analysis, sensitivity analysis, surrogate modeling and parameter optimization. It is written in the Python language and runs on all common operating systems. UQ-PyL has a graphical user interface that allows users to enter commands via pull-down menus. It is equipped with a model driver generator that allows any computer model to be linked with the software. We illustrate the different functions of UQ-PyL by applying it to the uncertainty analysis of the Sacramento Soil Moisture Accounting Model. We will also demonstrate that UQ-PyL can be applied to a wide range of application problems. Name of software: Uncertainty Quantification Python Laboratory (UQ-PyL) Programming language: Python Operating system: Windows, Linux and MacOS Availability: http://www.uq-pyl.com Documentation: http://www.uq-pyl.com User interface: Graphical user interface or command-line License: Free under a GNU General Public License (www.gnu.org) agreement.
A GUI platform for uncertainty quantification of complex dynamical models
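UQ-PyL's own GUI and driver-generator API are not shown here; as an indication of the kind of workflow such a platform wraps (a sampling design followed by variance-based sensitivity analysis), here is a sketch using the SALib Python package and a stand-in model function. Whether UQ-PyL uses SALib internally is not claimed, and the parameter names are illustrative only.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Stand-in for a hydrological model such as SAC-SMA: any f(X) -> scalar works.
def toy_model(X):
    return X[:, 0] ** 2 + 0.5 * X[:, 1] + 0.1 * X[:, 0] * X[:, 2]

problem = {
    "num_vars": 3,
    "names": ["uztwm", "uzfwm", "lztwm"],   # illustrative parameter names
    "bounds": [[0.0, 1.0]] * 3,
}
X = saltelli.sample(problem, 1024)          # N * (2D + 2) parameter sets
Y = toy_model(X)
Si = sobol.analyze(problem, Y)
print(dict(zip(problem["names"], np.round(Si["ST"], 3))))   # total-order indices
```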
S1364815215300979
The absence of long sub-daily rainfall records can hamper development of continuous streamflow forecasting systems run at sub-daily time steps. We test the hypothesis that simple disaggregation of daily rainfall data to hourly data, combined with hourly streamflow data, can be used to establish efficient hourly rainfall-runoff models. The approach is tested on four rainfall-runoff models and a range of meso-scale catchments (150–3500 km²). We also compare our disaggregation approach to a method of parameter scaling that attains an hourly parameter-set from daily data. Simple disaggregation of daily rainfall produces hourly streamflow models that perform almost as well as those developed from hourly rainfall data. Rainfall disaggregation performs at least as well as parameter scaling, and often better. For the catchments and models we test, simple disaggregation is a very straightforward and effective way to establish hydrological models for continuous sub-daily streamflow forecasting systems when sub-daily rainfall data are unavailable.
Calibrating hourly rainfall-runoff models with daily forcings for streamflow forecasting applications in meso-scale catchments
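The paper's exact disaggregation scheme is not reproduced here; the sketch below shows the simplest possible variant, spreading each daily total uniformly over its 24 hours, which captures the spirit of the "simple disaggregation" described in the abstract.

```python
import numpy as np

def disaggregate_uniform(daily_mm):
    """Spread each daily rainfall total uniformly over 24 hourly values."""
    daily_mm = np.asarray(daily_mm, dtype=float)
    return np.repeat(daily_mm / 24.0, 24)

daily = np.array([0.0, 12.0, 3.5])            # mm/day (invented)
hourly = disaggregate_uniform(daily)          # 72 hourly values, mm/h
assert np.isclose(hourly.sum(), daily.sum())  # rainfall mass is conserved
```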
S1364815215301092
Scenario discovery is a novel model-based approach to scenario development in the presence of deep uncertainty. Scenario discovery frequently relies on the Patient Rule Induction Method (PRIM). PRIM identifies regions in the model input space that are highly predictive of producing model outcomes that are of interest. To identify these, PRIM uses a lenient hill climbing optimization procedure. PRIM struggles when confronted with cases where the uncertain factors are a mix of data types, and can be used only for binary classifications. We compare two more lenient objective functions which both address the first problem, and an alternative objective function using Gini impurity which addresses the second problem. We assess the efficacy of the modifications using previously published cases. Both modifications are effective. The more lenient objective functions produce better descriptions of the data, while the Gini impurity objective function allows PRIM to be used when handling multinomial classified data. This paper makes use of the Exploratory Modeling Workbench, available via https://github.com/quaquel/EMAworkbench. Section 3.4 relies on extensions to classes available in the workbench. These extensions are provided as supplementary material. The detailed code with rudimentary documentation is provided in the form of three PDF representations of the underlying IPython notebooks.
Improving scenario discovery for handling heterogeneous uncertainties and multinomial classified outcomes
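The Exploratory Modeling Workbench implements the objective functions discussed in the paper; the snippet below is only a minimal, self-contained illustration of what a Gini-impurity objective for a candidate PRIM box looks like, so that multinomial (more than two classes) outcomes can be scored. The data and box limits are invented.

```python
import numpy as np

def gini_impurity(y):
    """Gini impurity of a class vector (works for multinomial outcomes)."""
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def score_box(X, y, limits):
    """Impurity and coverage of the points falling inside a candidate box.
    limits: {column index: (lower, upper)} restrictions on the uncertain factors."""
    inside = np.ones(len(y), dtype=bool)
    for j, (lo, hi) in limits.items():
        inside &= (X[:, j] >= lo) & (X[:, j] <= hi)
    return gini_impurity(y[inside]), inside.mean()

rng = np.random.default_rng(3)
X = rng.random((500, 2))
y = np.where(X[:, 0] > 0.6, 2, np.where(X[:, 1] > 0.5, 1, 0))   # three outcome classes
print(score_box(X, y, {0: (0.6, 1.0)}))   # low impurity inside this box
```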
S1364815215301110
In order to fully capture the benefits of rising CO₂ in adapting agriculture to climate change, we first need to understand how CO₂ affects crop growth. Several recent studies reported unexpected increases in sugarcane (C4) yields under elevated CO₂, but it is difficult to distinguish direct leaf-level effects of rising CO₂ on photosynthesis from indirect water-related responses. A simulation model of CO₂ effects, based purely on changes in stomatal conductance (indirect mechanism), showed transpiration was reduced by 30% (initially) to 10% (closed canopy) and yield increased by 3% even in a well-irrigated crop. The model incorporated the results of a field experiment, and a glasshouse experiment designed to disentangle the mechanisms of CO₂ response: whole-plant transpiration and stomatal conductance were both 28% lower for plants growing with high-frequency demand-based watering at 720 vs 390 ppm CO₂, but there was no increase in biomass, indicating that indirect mechanisms dominate CO₂ responses in sugarcane.
Measuring and modelling CO₂ effects on sugarcane
S1364815215301195
Growing demand from the general public for centralized points of data access and analytics tools coincides with similar, well-documented needs of regional and international hydrology research and resource management communities. To address this need within the Laurentian Great Lakes region, we introduce the Great Lakes Dashboard (GLD), a dynamic web data visualization platform that brings multiple time series data sets together for visual analysis and download. The platform's adaptable, robust, and expandable Time Series Core Object Model (GLD-TSCOM) separates the growing complexity and size of Great Lakes data sets from the web application interface. Although the GLD-TSCOM is currently applied exclusively to Great Lakes data sets, the concepts and methods discussed here can be applied in other geographical and topical areas of interest. The Great Lakes Dashboard, The Great Lakes Water Level Dashboard, and The Great Lakes Hydro-Climate Dashboard National Oceanic and Atmospheric Administration, Great Lakes Environmental Research Laboratory, Ann Arbor, Michigan, USA and Cooperative Institute for Limnology and Ecosystems Research, University of Michigan, Ann Arbor, Michigan, USA Adobe Flash capable computer with modern system specifications Internet browser (Mozilla Firefox, Google Chrome, Microsoft Internet Explorer, etc.), Adobe Flash Plugin MXML and ActionScript under the Apache Flex Framework, compiled under the Adobe Flash Builder and JetBrains IntelliJ IDEA, HTML, JavaScript with jQuery and Dygraphs packages All Adobe Flash based products are freely available at the following sites: http://www.glerl.noaa.gov/data/gldb, http://www.glerl.noaa.gov/data/wldb, and http://www.glerl.noaa.gov/data/hcdb http://www.glerl.noaa.gov/data/dashboard/GLD_HTML5.html http://www.glerl.noaa.gov/data/dbportal
An expandable web-based platform for visually analyzing basin-scale hydro-climate time series data
S1364815215301286
New scenarios for climate change research connect climate model results based on Representative Concentration Pathways to nested interpretations of Shared Socioeconomic Pathways. Socioeconomic drivers of emissions and determinants of impacts are now decoupled from climate model outputs. To retain scenario credibility, more internally consistent linking across scales must be achieved. This paper addresses this need, demonstrating a modification to cross impact balances (CIB), a method for systematically deriving qualitative socioeconomic scenarios. Traditionally CIB is performed with one cross-impact matrix. This poses limitations, as more than a few dozen scenario elements with sufficiently varied outcomes can become computationally infeasible to comprehensively explore. Through this paper, we introduce the concept of ‘linked CIB’, which takes the structure of judgements for how scenario elements interact to partition a single cross-impact matrix into multiple smaller matrices. Potentially, this enables analysis of large CIB matrices and ensures internally consistent linking of scenario elements across scales.
Systematically linking qualitative elements of scenarios across levels, scales, and sectors
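The paper's "linked CIB" partitioning is not reproduced here; the sketch below only illustrates the basic CIB consistency check that such partitioning is meant to keep tractable: for a candidate scenario (one outcome per descriptor), each chosen outcome must score at least as high as any alternative outcome under the summed cross-impact judgements. The judgement matrix is random, purely for illustration.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
n_desc, n_out = 4, 2
# c[d1, o1, d2, o2]: judged influence of descriptor d1 taking outcome o1 on
# outcome o2 of descriptor d2 (self-influence ignored); toy integer judgements.
c = rng.integers(-2, 3, size=(n_desc, n_out, n_desc, n_out))

def impact_balances(scenario):
    scores = np.zeros((n_desc, n_out))
    for d2 in range(n_desc):
        for d1 in range(n_desc):
            if d1 != d2:
                scores[d2] += c[d1, scenario[d1], d2]
    return scores

def is_consistent(scenario):
    scores = impact_balances(scenario)
    return all(scores[d, scenario[d]] >= scores[d].max() for d in range(n_desc))

# Exhaustive enumeration (2**4 scenarios here); this enumeration is what blows up
# for large matrices and what linking smaller matrices is intended to mitigate.
consistent = [s for s in product(range(n_out), repeat=n_desc) if is_consistent(s)]
print(consistent)
```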
S1364815216300020
In climate change research ensembles of climate simulations are produced in an attempt to cover the uncertainty in future projections. Many climate change impact studies face difficulties using the full number of simulations available, and therefore often only subsets are used. Until now such subsets were chosen based on their representation of temperature change or by accessibility of the simulations. By using more specific information about the needs of the impact study as guidance for the clustering of simulations, the subset fits the purpose of climate change impact research more appropriately. Here, the sensitivity of such a procedure is explored, particularly with regard to the use of different climate variables, seasons, and regions in Europe. While temperature dominates the clustering, the resulting selection is influenced by all variables, leading to the conclusion that different subsets fit different impact studies best.
Selecting regional climate scenarios for impact modelling studies
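The clustering procedure evaluated in the paper is not reproduced here; the sketch below shows one generic way to select a representative ensemble subset: standardise each simulation's change signals for the variables, seasons and region of interest, cluster them with k-means, and keep the member closest to each cluster centre. The ensemble data are random placeholders.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
# Rows = ensemble members, columns = change signals (e.g. seasonal dT, dP for a region).
signals = rng.normal(size=(30, 6))

Z = StandardScaler().fit_transform(signals)
km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(Z)

subset = []
for k in range(km.n_clusters):
    members = np.flatnonzero(km.labels_ == k)
    dist = np.linalg.norm(Z[members] - km.cluster_centers_[k], axis=1)
    subset.append(int(members[np.argmin(dist)]))     # member nearest the centroid
print("selected ensemble members:", sorted(subset))
```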
S1364815216300093
Integrated access to and analysis of data for cross-domain synthesis studies are hindered because common characteristics of observational data, including time, location, provenance, methods, and units are described differently within different information models, including physical implementations and exchange schema. We describe a new information model for spatially discrete Earth observations called the Observations Data Model Version 2 (ODM2) aimed at facilitating greater interoperability across scientific disciplines and domain cyberinfrastructures. ODM2 integrates concepts from ODM1 and other existing cyberinfrastructures to expand capacity to consistently describe, store, manage, and encode observational datasets for archival and transfer over the Internet. Compared to other systems, it accommodates a wider range of observational data derived from both sensors and specimens. We describe the identification of community information requirements for ODM2 and then present the core information model and demonstrate how it can be formally extended to accommodate a range of information requirements and use cases. Observations Data Model 2 (ODM2) Jeffery S. Horsburgh, Anthony K. Aufdenkampe, Emilio Mayorga, Kerstin A. Lehnert, Leslie Hsu, Lulin Song, Amber Spackman Jones, Sara G. Damiano, David G. Tarboton, David Valentine, Ilya Zaslavsky, Tom Whitenack Jeffery S. Horsburgh; Address: 8200 Old Main Hill, Logan, UT 84322-8200, USA; Email: [email protected] 2015 ODM2 is available for use with Microsoft SQL Server, MySQL, PostgreSQL, and SQLite on Windows, Macintosh, and Linux based computers. Information about additional software available for working with ODM2 is available at https://github.com/ODM2/ODM2. Free. Software and source code are released under the New Berkeley Software Distribution (BSD) License, which allows for liberal reuse. All source code, examples, and documentation can be accessed at https://github.com/ODM2/ODM2.
Observations Data Model 2: A community information model for spatially discrete Earth observations
S1364815216300251
We address two critical choices in Global Sensitivity Analysis (GSA): the choice of the sample size and of the threshold for the identification of insensitive input factors. Guidance to assist users with those two choices is still insufficient. We aim at filling this gap. Firstly, we define criteria to quantify the convergence of sensitivity indices, of ranking and of screening, based on a bootstrap approach. Secondly, we investigate the screening threshold with a quantitative validation procedure for screening results. We apply the proposed methodologies to three hydrological models with varying complexity utilizing three widely-used GSA methods (RSA, Morris, Sobol’). We demonstrate that convergence of screening and ranking can be reached before sensitivity estimates stabilize. Convergence dynamics appear to be case-dependent, which suggests that “fit-for-all” rules for sample sizes should not be used. Other modellers can easily adopt our criteria and procedures for a wide range of GSA methods and cases.
Global Sensitivity Analysis of environmental models: Convergence and validation
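The paper defines its convergence criteria on bootstrap confidence intervals of the sensitivity estimates; the sketch below illustrates that idea with a deliberately crude index estimator (the squared correlation of each factor with the output, used only as a stand-in for RSA, Morris or Sobol' indices): convergence is declared when the widest bootstrap confidence interval across factors falls below a chosen threshold.

```python
import numpy as np

def crude_indices(X, Y):
    """Stand-in sensitivity indices: squared correlation of each factor with Y."""
    return np.array([np.corrcoef(X[:, j], Y)[0, 1] ** 2 for j in range(X.shape[1])])

def max_ci_width(X, Y, n_boot=500, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    n = len(Y)
    boot = np.array([crude_indices(X[idx], Y[idx])
                     for idx in (rng.integers(0, n, n) for _ in range(n_boot))])
    lo, hi = np.percentile(boot, [100 * alpha / 2, 100 * (1 - alpha / 2)], axis=0)
    return float(np.max(hi - lo))

rng = np.random.default_rng(1)
X = rng.random((2000, 3))
Y = 4 * X[:, 0] + X[:, 1] + rng.normal(0, 0.1, 2000)
print("converged:", max_ci_width(X, Y) < 0.05)   # the threshold is an arbitrary choice
```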
S1364815216300287
Sensitivity Analysis (SA) investigates how the variation in the output of a numerical model can be attributed to variations of its input factors. SA is increasingly being used in environmental modelling for a variety of purposes, including uncertainty assessment, model calibration and diagnostic evaluation, dominant control analysis and robust decision-making. In this paper we review the SA literature with the goal of providing: (i) a comprehensive view of SA approaches also in relation to other methodologies for model identification and application; (ii) a systematic classification of the most commonly used SA methods; (iii) practical guidelines for the application of SA. The paper aims at delivering an introduction to SA for non-specialist readers, as well as practical advice with best practice examples from the literature; and at stimulating the discussion within the community of SA developers and users regarding the setting of good practices and on defining priorities for future research.
Sensitivity analysis of environmental models: A systematic review with practical workflow
S1364815216300494
It is crucial to identify sources of impacts and degradation to maintain the functions and services that the physical structure of coral reefs provides. Here, a Bayesian Network approach is used to evaluate effects that anthropogenic and climate change disturbances have on coral reef structure. The network was constructed on knowledge derived from the literature and elicited from experts, and parameterised on independent data. Evaluation of the model was conducted through sensitivity analyses, and data integration was fundamental to obtaining a balanced dataset. Scenario analyses, conducted to assess the effects of stressors on the reef framework state, suggested that calcifying organisms and carbonate production, rather than bioerosion, had the largest influence on the reef carbonate budgetary state. Despite the overall budget remaining positive, anthropogenic pressures, particularly deterioration of water quality, affected reef carbonate production, representing a warning signal for potential changes in the reef state.
A Bayesian Belief Network to assess rate of changes in coral reef ecosystems
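The coral-reef network itself is knowledge- and data-driven and far larger than anything shown here; as a minimal sketch of the scenario-analysis mechanics (set evidence on a stressor node, query the carbonate-budget node), here is a three-node toy Bayesian network built with the pgmpy Python package. Node names, states and probabilities are invented and do not come from the paper.

```python
from pgmpy.models import BayesianNetwork          # named 'BayesianModel' in older pgmpy
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

model = BayesianNetwork([("WaterQuality", "CarbonateProduction"),
                         ("Bleaching", "CarbonateProduction")])

cpd_wq = TabularCPD("WaterQuality", 2, [[0.7], [0.3]])        # 0 = good, 1 = poor
cpd_bl = TabularCPD("Bleaching", 2, [[0.8], [0.2]])           # 0 = no, 1 = yes
cpd_cp = TabularCPD("CarbonateProduction", 2,
                    [[0.9, 0.6, 0.5, 0.1],                    # 0 = high
                     [0.1, 0.4, 0.5, 0.9]],                   # 1 = low
                    evidence=["WaterQuality", "Bleaching"], evidence_card=[2, 2])

model.add_cpds(cpd_wq, cpd_bl, cpd_cp)
assert model.check_model()

# Scenario analysis: how does poor water quality shift the carbonate budget?
infer = VariableElimination(model)
print(infer.query(["CarbonateProduction"], evidence={"WaterQuality": 1}))
```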
S136481521630113X
Advanced modeling tools are needed for informed water resources planning and management. Two classes of modeling tools are often used to this end: (1) distributed-parameter hydrologic models for quantifying supply and (2) river-operation models for sorting out demands under rule-based systems such as the prior-appropriation doctrine. Within each of these two broad classes of models, there are many software tools that excel at simulating the processes specific to each discipline, but have historically over-simplified, or at worst completely neglected, aspects of the other. As a result, water managers reliant on river-operation models for administering water resources need improved tools for representing spatially and temporally varying groundwater resources in conjunctive-use systems. A new tool is described that improves the representation of groundwater/surface-water (GW-SW) interaction within a river-operations modeling context and, in so doing, advances evaluation of system-wide hydrologic consequences of new or altered management regimes.
Toward improved simulation of river operations through integration with a hydrologic model
S1364815216301220
Advancing stakeholder participation beyond consultation offers a range of benefits for local flood risk management, particularly as responsibilities are increasingly devolved to local levels. This paper details the design and implementation of a participatory approach to identify intervention options for managing local flood risk. Within this approach, Bayesian networks were used to generate a conceptual model of the local flood risk system, with a particular focus on how different interventions might achieve each of nine participant objectives. The model was co-constructed by flood risk experts and local stakeholders. The study employs a novel evaluative framework, examining both the process and its outcomes (short-term substantive and longer-term social benefits). It concludes that participatory modelling techniques can facilitate the identification of intervention options by a wide range of stakeholders, and prioritise a subset for further investigation. They can help support a broader move towards active stakeholder participation in local flood risk management. Netica (CoGF) 4.16 for Windows ©1992–2015. Norsys Software Corporation, 3512 West 23rd Avenue, Vancouver, BC, CANADA, V6S 1K5. Available online from http://www.norsys.com/netica Cost US$285.00(academic)/US$585.00(commercial) (both include technical support and updates for one year). Free demo version available for download at above website (full-featured but limited model size supported).
Software availability
S1364815216301232
Changes of carbon stocks in agricultural soils, emissions of greenhouse gases from agriculture, and the delivery of ecosystem services of agricultural landscapes depend on combinations of land-use, livestock density, farming practices, climate and soil types. Many environmental processes are highly non-linear. If the analysis of the environmental impact is based on data at a relatively coarse scale (e.g. farm, country, or large administrative regions), conclusions can be misleading. For an accurate assessment of agri-environmental indicators, data on agricultural activities and their dynamics are needed at high spatial resolution. In this paper, we develop and validate a spatial model for predicting the agricultural land-use areas within the homogeneous spatial units (HSUs). For the EU-28 countries, we distinguish about 1.5 × 10⁵ HSUs and we consider 30 possible land-uses to match the classification used in the Common Agricultural Policy Regionalized Impact (CAPRI) model. The comparison of model predictions with independent observations and with a simple rule-based approach at HSU level demonstrates that the predictions are generally accurate in more than 75% of HSUs. Frequent crops and land-uses are predicted more accurately. For non-frequent crops and/or crops requiring specific cultivation conditions, the model needs further fine-tuning. Software The Land-Use Disaggregation Model (LUDM) aims at predicting the land-use areas within the fine-scale units. LUDM is written in R (R CRAN) and is freely available. The source code is maintained and can be downloaded as a zip file from http://ludm2016.blogspot.com/2016/04/ludm.html. The zip file contains a ReadMe file and accessory files. Use the ReadMe file for suggestions on program instruction and notes on terms of service. The LUDM model is free, regulated under the GNU General Public License v3 (http://www.gnu.org/copyleft/gpl.html) and intended for further open-source development.
Multi-scale land-use disaggregation modelling: Concept and application to EU countries
S1386505613001731
Purpose While contributing to an improved continuity of care, Shared Electronic Health Record (EHR) systems may also lead to information overload of healthcare providers. Document-oriented architectures, such as the commonly employed IHE XDS profile, which only support information retrieval at the level of documents, are particularly susceptible to this problem. The objective of the EHR-ARCHE project was to develop a methodology and a prototype to efficiently satisfy healthcare providers’ information needs when accessing a patient's Shared EHR during a treatment situation. We especially aimed to investigate whether this objective can be reached by integrating EHR Archetypes into an IHE XDS environment. Methods Using methodical triangulation, we first analysed the information needs of healthcare providers, focusing on the treatment of diabetes patients as an exemplary application domain. We then designed ISO/EN 13606 Archetypes covering the identified information needs. To support a content-based search for fine-grained information items within EHR documents, we extended the IHE XDS environment with two additional actors. Finally, we conducted a formative and summative evaluation of our approach within a controlled study. Results We identified 446 frequently needed diabetes-specific information items, representing typical information needs of healthcare providers. We then created 128 Archetypes and 120 EHR documents for two fictive patients. All seven diabetes experts who evaluated our approach preferred the content-based search to a conventional XDS search. The success rate of finding relevant information was higher for the content-based search (100% versus 80%), which was also more time-efficient (8–14 min versus 20 min or more). Conclusions Our results show that for an efficient satisfaction of health care providers’ information needs, a content-based search that rests upon the integration of Archetypes into an IHE XDS-based Shared EHR system is superior to a conventional metadata-based XDS search.
The EHR-ARCHE project: Satisfying clinical information needs in a Shared Electronic Health Record System based on IHE XDS and Archetypes
S1386505613001895
Objective There are benefits and risks of giving patients more granular control of their personal health information in electronic health record (EHR) systems. When designing EHR systems and policies, informaticists and system developers must balance these benefits and risks. Ethical considerations should be an explicit part of this balancing. Our objective was to develop a structured ethics framework to accomplish this. Methods We reviewed existing literature on the ethical and policy issues, developed an ethics framework called a “Points to Consider” (P2C) document, and convened a national expert panel to review and critique the P2C. Results We developed the P2C to aid informaticists designing an advanced query tool for an electronic health record (EHR) system in Indianapolis. The P2C consists of six questions (“Points”) that frame important ethical issues, apply accepted principles of bioethics and Fair Information Practices, comment on how questions might be answered, and address implications for patient care. Discussion The P2C is intended to clarify what is at stake when designers try to accommodate potentially competing ethical commitments and logistical realities. The P2C was developed to guide informaticists who were designing a query tool in an existing EHR that would permit patient granular control. While consideration of ethical issues is coming to the forefront of medical informatics design and development practices, more reflection is needed to facilitate optimal collaboration between designers and ethicists. This report contributes to that discussion.
Giving patients granular control of personal health information: Using an ethics ‘Points to Consider’ to inform informatics system designers
S1386505614000173
Purpose To provide an overview of factors influencing the acceptance of electronic technologies that support aging in place by community-dwelling older adults. Since technology acceptance factors fluctuate over time, a distinction was made between factors in the pre-implementation stage and factors in the post-implementation stage. Methods A systematic review of mixed studies. Seven major scientific databases (including MEDLINE, Scopus and CINAHL) were searched. Inclusion criteria were as follows: (1) original and peer-reviewed research, (2) qualitative, quantitative or mixed methods research, (3) research in which participants are community-dwelling older adults aged 60 years or older, and (4) research aimed at investigating factors that influence the intention to use or the actual use of electronic technology for aging in place. Three researchers each read the articles and extracted factors. Results Sixteen out of 2841 articles were included. Most articles investigated acceptance of technology that enhances safety or provides social interaction. The majority of data was based on qualitative research investigating factors in the pre-implementation stage. Acceptance in this stage is influenced by 27 factors, divided into six themes: concerns regarding technology (e.g., high cost, privacy implications and usability factors); expected benefits of technology (e.g., increased safety and perceived usefulness); need for technology (e.g., perceived need and subjective health status); alternatives to technology (e.g., help by family or spouse), social influence (e.g., influence of family, friends and professional caregivers); and characteristics of older adults (e.g., desire to age in place). When comparing these results to qualitative results on post-implementation acceptance, our analysis showed that some factors are persistent while new factors also emerge. Quantitative results showed that a small number of variables have a significant influence in the pre-implementation stage. Fourteen out of the sixteen included articles did not use an existing technology acceptance framework or model. Conclusions Acceptance of technology in the pre-implementation stage is influenced by multiple factors. However, post-implementation research on technology acceptance by community-dwelling older adults is scarce and most of the factors in this review have not been tested by using quantitative methods. Further research is needed to determine if and how the factors in this review are interrelated, and how they relate to existing models of technology acceptance.
Factors influencing acceptance of technology for aging in place: A systematic review
S1386505614001105
Purpose This paper reviews the research literature on text mining (TM) with the aim to find out (1) which cancer domains have been the subject of TM efforts, (2) which knowledge resources can support TM of cancer-related information and (3) to what extent systems that rely on knowledge and computational methods can convert text data into useful clinical information. These questions were used to determine the current state of the art in this particular strand of TM and suggest future directions in TM development to support cancer research. Methods A review of the research on TM of cancer-related information was carried out. A literature search was conducted on the Medline database as well as IEEE Xplore and ACM digital libraries to address the interdisciplinary nature of such research. The search results were supplemented with the literature identified through Google Scholar. Results A range of studies have proven the feasibility of TM for extracting structured information from clinical narratives such as those found in pathology or radiology reports. In this article, we provide a critical overview of the current state of the art for TM related to cancer. The review highlighted a strong bias towards symbolic methods, e.g. named entity recognition (NER) based on dictionary lookup and information extraction (IE) relying on pattern matching. The F-measure of NER ranges between 80% and 90%, while that of IE for simple tasks is in the high 90s. To further improve the performance, TM approaches need to deal effectively with idiosyncrasies of the clinical sublanguage such as non-standard abbreviations as well as a high degree of spelling and grammatical errors. This requires a shift from rule-based methods to machine learning following the success of similar trends in biological applications of TM. Machine learning approaches require large training datasets, but clinical narratives are not readily available for TM research due to privacy and confidentiality concerns. This issue remains the main bottleneck for progress in this area. In addition, there is a need for a comprehensive cancer ontology that would enable semantic representation of textual information found in narrative reports.
Text mining of cancer-related information: Review of current status and future directions
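As an illustration of the symbolic, dictionary-lookup NER the review identifies as dominant (and whose limitations with the clinical sublanguage it discusses), here is a minimal Python sketch; the lexicon, entity types and report text are invented and are far smaller than any real clinical terminology.

```python
import re

lexicon = {                      # invented micro-lexicon: surface form -> entity type
    "adenocarcinoma": "Diagnosis",
    "lymph node": "AnatomicalSite",
    "metastasis": "Diagnosis",
}
report = "Biopsy confirms adenocarcinoma; no lymph node metastasis identified."

# Longest-match-first alternation so 'lymph node' wins over any shorter entry.
pattern = re.compile("|".join(re.escape(t) for t in sorted(lexicon, key=len, reverse=True)),
                     flags=re.IGNORECASE)
entities = [(m.group(0), lexicon[m.group(0).lower()], m.start())
            for m in pattern.finditer(report)]
print(entities)
```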
S1386505614001610
Objectives The first objective of this study is to evaluate the impact of integrating a single-source system into the routine patient care documentation workflow with respect to process modifications, data quality and execution times in patient care as well as research documentation. The second one is to evaluate whether it is cost-efficient to use a single-source system in terms of achieved savings in documentation expenditures. Methods We analyzed the documentation workflow of routine patient care and research documentation in the medical field of pruritus to identify redundant and error-prone process steps. Based on this, we established a novel documentation workflow including the x4T (exchange for Trials) system to connect hospital information systems with electronic data capture systems for the exchange of study data. To evaluate the workflow modifications, we performed a before/after analysis as well as a time–motion study. Data quality was assessed by measuring completeness, correctness and concordance of previously and newly collected data. A cost–benefit analysis was conducted to estimate the savings using x4T per collected data element and the additional costs for introducing x4T. Results The documentation workflow of patient care as well as clinical research was modified due to the introduction of the x4T system. After x4T implementation and workflow modifications, half of the redundant and error-prone process steps were eliminated. The generic x4T system allows direct transfer of routinely collected health care data into the x4T research database and avoids manual transcription steps. Since x4T was introduced in March 2012, the number of included patients has increased by about 1000 per year. The average total documentation time per patient visit has been significantly decreased by 70.1% (from 1116 ± 185 s to 334 ± 83 s). After the introduction of the x4T system and associated workflow changes, the completeness of mandatory data elements rose from 82.2% to 100%. In the case of the pruritus research study, the additional costs for introducing the x4T system are €434.01 and the savings are 0.48 ct per collected data element. So, with the assumption of a 5-year runtime and 82 collected data elements per patient, the number of documented patients has to be higher than 1102 to create a benefit. Conclusion Introduction of the x4T system into the clinical and research documentation workflow can optimize the data collection workflow in both areas. Redundant and cumbersome process steps can be eliminated in the research documentation, with the result of reduced documentation times as well as increased data quality. The use of the x4T system is especially worthwhile in a study with a large amount of collected data or a high number of included patients.
Does single-source create an added value? Evaluating the impact of introducing x4T into the clinical routine on workflow modifications, data quality and cost–benefit
S1386505614001671
Background The field of telehealth and telemedicine is expanding as the need to improve efficiency of health care becomes more pressing. The decision to implement a telehealth system is generally an expensive undertaking that impacts a large number of patients and other stakeholders. It is therefore extremely important that the decision is fully supported by accurate evaluation of telehealth interventions. Objective Numerous reviews of telehealth have described the evidence base as inconsistent. In response they call for larger, more rigorously controlled trials, and trials which go beyond evaluation of clinical effectiveness alone. The aim of this paper is to discuss various ways in which evaluation of telehealth could be improved by the use of adaptive trial designs. Results We discuss various adaptive design options, such as sample size reviews and changing the study hypothesis to address uncertain parameters, group sequential trials and multi-arm multi-stage trials to improve efficiency, and enrichment designs to maximise the chances of obtaining clear evidence about the telehealth intervention. Conclusion There is potential to address the flaws discussed in the telehealth literature through the adoption of adaptive approaches to trial design. Such designs could lead to improvements in efficiency, allow the evaluation of multiple telehealth interventions in a cost-effective way, or accurately assess a range of endpoints that are important in the overall success of a telehealth programme.
Design of telehealth trials – Introducing adaptive approaches
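None of the specific trial designs discussed in the paper are reproduced here; as a small illustration of how the operating characteristics of a group-sequential design can be checked by simulation (the kind of evaluation such adaptive designs require), here is a two-look design with a Pocock-style constant boundary under the null hypothesis. The sample sizes, the boundary value and the use of a t statistic as an approximation to z are all simplifying assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_per_stage, n_sims = 50, 20_000
z_crit = 2.178        # approximate Pocock boundary for 2 looks, two-sided alpha = 0.05

rejections = 0
for _ in range(n_sims):
    control = rng.normal(0.0, 1.0, 2 * n_per_stage)
    treatment = rng.normal(0.0, 1.0, 2 * n_per_stage)   # no true effect -> type I error
    for look in (1, 2):
        n = look * n_per_stage
        t_stat = stats.ttest_ind(treatment[:n], control[:n]).statistic
        if abs(t_stat) > z_crit:                         # t used as a stand-in for z
            rejections += 1
            break
print("empirical type I error:", rejections / n_sims)   # should be close to 0.05
```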
S1386505614001932
Background Initiatives in the UK to enable patients to access their electronic health records (EHRs) are gathering momentum. All citizens of the European Union should have access to their records by 2015, a target that the UK has endorsed. Objectives To identify the ways in which patients used their access to their EHRs, what they sought to achieve, and the extent to which EHR access was related to the concept of making savings. Methods An audit of patients’ online access to medical records was conducted in July–August 2011 using a survey questionnaire. Two hundred and twenty six patients who were registered with two general practices in the National Health Service (NHS) located in the UK and who had accessed their personal EHRs at least twice in the preceding 12 months i.e. from July 2010 to July 2011, completed the questionnaire. Data analysis A thematic analysis of the comments that patients gave in response to the open ended questions on the questionnaire. Results Overall, evaluations of record access were positive. Four main themes relating to the ways in which patients accessed their records were identified: making savings, checking past activity, preparation for future action, and setting new expectations. Conclusions Quite apart from any benefits of savings in healthcare resources, this study has provided qualitative evidence of the active ways in which patients may make use of access to their EHRs, many of which are in line with proportionate health management strategies. Access to personal EHRs may contribute to the development of new expectations among patients.
Accessing personal medical records online: A means to what ends?