Columns: FileName (string, length 17), Abstract (string, length 163–6.01k), Title (string, length 12–421)
S1746809416000033
Brain oscillations have traditionally been studied by time-frequency analysis of electrophysiological signals. In this work we demonstrated the usefulness of two nonlinear combinations of differential operators applied to intracranial EEG (iEEG) recordings for studying abnormal oscillations in the human brain during intractable focal epileptic seizures. Each one-dimensional time-domain signal was visualized as the trajectory of a particle moving in a force field with one degree of freedom. Modeling of the temporal difference operators applied to the signals was inspired by the principles of classical Newtonian mechanics. One of the nonlinear combinations of operators was shown to be effective in distinguishing the seizure segment from the background signal and from artifacts, particularly when the seizure duration was long. The resultant automatic detection algorithm runs in linear time and detects a seizure with an average delay of 5.02 s after the electrographic onset, with a mean false positive rate of 0.05/h and 94% detection accuracy. The area under the ROC curve was 0.959. Another nonlinear combination of differential operators detects spikes (peaks) and inverted spikes (troughs) in a signal irrespective of their shape and size. In a majority of cases, simultaneous occurrence of spikes and inverted spikes across the focal channels was more frequent after the seizure offset than during the seizure, where the interval after the offset was taken to be equal to the seizure duration. This has been explained in terms of the GABAergic inhibition that accompanies seizure termination.
Analysis of cortical rhythms in intracranial EEG by temporal difference operators during epileptic seizures
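The abstract above describes nonlinear combinations of temporal difference operators (velocity- and acceleration-like quantities) used to flag seizure segments. The sketch below is a minimal illustration of that general idea on synthetic data, not the authors' exact operators or thresholds: the specific nonlinear combination, window length and threshold factor are assumptions chosen for demonstration only.

```python
import numpy as np

def detect_high_energy_segments(x, fs, win_sec=2.0, k=5.0):
    """Flag windows where a nonlinear combination of temporal differences
    (a velocity/acceleration-like 'energy') exceeds k times the median level."""
    v = np.diff(x, n=1)                 # first difference ~ velocity
    a = np.diff(x, n=2)                 # second difference ~ acceleration
    v = v[:len(a)]
    energy = v**2 + a**2                # illustrative nonlinear combination
    win = int(win_sec * fs)
    n_win = len(energy) // win
    scores = energy[:n_win * win].reshape(n_win, win).mean(axis=1)
    threshold = k * np.median(scores)   # robust baseline from the whole record
    return np.flatnonzero(scores > threshold) * win_sec   # window start times (s)

# Example on synthetic data: background noise with a high-amplitude oscillatory burst.
fs = 256
t = np.arange(0, 60, 1 / fs)
x = np.random.randn(len(t))
x[20 * fs:30 * fs] += 8 * np.sin(2 * np.pi * 7 * t[20 * fs:30 * fs])
print(detect_high_energy_segments(x, fs))
```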
S1746809416000045
In quantitative electromyography (EMG), the set of potentials that constitutes a motor unit action potential (MUAP) train is represented by a single waveform from which various parameters are determined in order to characterize the MUAP for diagnostic analysis. Several methods that extract such a waveform are currently available, and they are, in essence, based on two operations, averaging and selection, which are performed either sample-by-sample or on the whole potential. We present a new approach that carries out selection and averaging on a local interval basis. We tested our algorithm with a dataset of MUAP records extracted from the tibialis anterior muscle of healthy subjects and compared it with some of the most relevant state-of-the-art methods considered in a previous work (Malanda et al., J. Electromyogr. Kinesiol., 2015). The comparison covered general-purpose signal processing figures of merit and clinically used MUAP waveform parameters. Significantly better results in both sets of figures of merit were obtained with the new approach. In addition, relative to the other algorithms tested, the new approach required fewer potentials from the MUAP set to obtain an accurate representative waveform.
Sliding window averaging for the extraction of representative waveforms from motor unit action potential trains
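The abstract above describes selection and averaging performed on a local interval basis rather than sample-by-sample or on whole potentials. The following is a minimal sketch of that interval-wise idea on synthetic aligned potentials; the window length, selection fraction and "distance to the median trace" criterion are assumptions for illustration, not the authors' algorithm.

```python
import numpy as np

def local_interval_average(potentials, win=20, keep_frac=0.5):
    """Build a representative waveform from aligned MUAP potentials by,
    in each interval, averaging only the potentials closest to the median trace."""
    potentials = np.asarray(potentials)        # shape: (n_potentials, n_samples)
    n, m = potentials.shape
    representative = np.empty(m)
    k = max(1, int(keep_frac * n))
    for start in range(0, m, win):
        seg = potentials[:, start:start + win]
        median_seg = np.median(seg, axis=0)
        # distance of each potential to the median trace within this interval
        dist = np.linalg.norm(seg - median_seg, axis=1)
        chosen = np.argsort(dist)[:k]          # keep the most 'typical' potentials
        representative[start:start + win] = seg[chosen].mean(axis=0)
    return representative

# Usage: 30 noisy repetitions of a synthetic potential, a few of them outliers.
rng = np.random.default_rng(0)
template = np.sin(np.linspace(0, 3 * np.pi, 200)) * np.hanning(200)
trains = template + 0.2 * rng.standard_normal((30, 200))
trains[:3] += 1.5                              # simulate outlier potentials
waveform = local_interval_average(trains)
```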
S1746809416000057
In the oncology field, anti-angiogenic therapies aim at inhibiting tumour vascularization, that is, the development of new capillary blood vessels that allows tumours to grow, spread and, potentially, metastasize. Computed tomography perfusion (CTp) is a dynamic contrast-enhanced technique that has emerged in the last few years as a promising approach for earlier assessment of such therapies, and of tumour response in general, since functional changes precede morphological changes, which take more time to become evident. However, several issues, such as patient motion and several types of artefacts, jeopardize quantitative measurements, preventing CTp from being used in standard clinical practice. This paper presents an original automatic approach, based on voxel-based analysis of the time–concentration curves (TCCs), that emphasizes those physiological structures, such as vessels, bronchi or artefacts, that could affect the final computation of blood flow perfusion values in CTp studies of lung cancer. The automatic exclusion of these misleading values represents a step towards quantitative CTp, and hence its routine use in clinics.
Automatic detection of misleading blood flow values in CT perfusion studies of lung cancer
S1746809416000069
Speech rate (SR) plays an important role in the assessment of disordered speech. Clinicians rely primarily on manual or semi-automatic methods to determine SR. The reported algorithms are designed for normal speech and show many restrictions with respect to disordered speech, which is predominantly characterized by slow SR. This research presents an algorithm that, in addition to energy and pitch, relies on information regarding the spectral characteristics of the borders of the syllables (landmarks). Speech samples (three sentences per speaker) from 66 healthy and dysarthric speakers were analyzed with four algorithms (Mrate, the robust SR estimation method, Praat's script and the proposed algorithm based on landmark detection). The landmark approach is demonstrated to be more accurate for speakers with slow SR. The Pearson correlation coefficient between the calculated SR and the reference remains over 0.84 for the 198 sentences analyzed, while the other algorithms' correlations are below the values reported in the literature for fluent speech. In samples where SR is high, the algorithm shows limitations similar to those of the other algorithms due to the merging of syllables. The landmark-based algorithm is an adequate method for determining SR in disordered speech.
Speech rate estimation in disordered speech based on spectral landmark detection
S1746809416000070
A new pathological tremor signal prediction algorithm for tremor suppression, the adaptive multiple oscillators linear combiner (AMOLC) based on the Hopf oscillator, is proposed in this paper. This method can be used to predict pathological tremor signals with high accuracy and good robustness. An experimental platform is built to simulate the tremor motion and to obtain the optimum parameters of the AMOLC algorithm. Furthermore, the AMOLC algorithm is applied to tremor prediction for actual pathological tremor patients. The results verify that the chosen model parameters indeed improve the prediction performance, meeting the demands of tremor suppression with high accuracy and good robustness. A comparison of computational complexity is conducted between several existing prediction methods and AMOLC.
Prediction of pathological tremor using adaptive multiple oscillators linear combiner
S1746809416000082
This paper focuses on the architecture and FPGA implementation aspects of an assistive tool for disabled persons with motor neuron diseases, specifically with muscle atrophy. The tool, called a communication interface, allows such persons to communicate with other people by moving selected muscles, e.g., within the face. The application of MEMS accelerometers as muscle movement sensors has been proposed. Four different FPGA implementation methods for processing the signals from the MEMS sensors, i.e., manual HDL coding, use of the Matlab HDL Coder and Vivado HLS, as well as embedded microcontroller exploitation, have been investigated. The communication interface can be used either as an input switch for so-called virtual keyboards or as a stand-alone tool that allows disabled persons to write text by means of Morse code.
FPGA-based communication interface for persons with motor neuron diseases
S1746809416300015
We present a new algorithm for the automatic detection of periodic and non-periodic limb movements in polysomnographic (PSG) sleep recordings. A set of 70 PSG recordings obtained in the course of common practice were randomly selected for the validation of the proposed approach. The dataset includes 35 recordings that were acquired in ambulatory conditions and 35 that were carried out under the supervision of clinicians at our sleep centre. The algorithm includes robust mechanisms to handle the presence of artefacts, and it adjusts its detection thresholds dynamically to adapt to changing signal conditions. The validation results on our dataset, which also include a comparison with two other automatic methods available in the literature, support the validity of our approach and its utility as a valuable tool to help the clinician in the scoring task.
A new automatic method for the detection of limb movements and the analysis of their periodicity
S1746809416300027
Positron emission tomography (PET) is a functional molecular imaging technique that helps to diagnose neurodegenerative diseases, such as Alzheimer's disease (AD), by evaluating the cerebral metabolic rate of glucose after administration of (18)F-fluoro-deoxy-glucose ((18)FDG). Quantitative evaluation using computer-aided methods is important to improve medical care. In this paper a novel method is developed for ranking brain regions of interest by their effectiveness in classifying healthy and AD brains. Brain images are first segmented into 116 regions according to an anatomical atlas. A spatial normalization and four gray level normalization methods are used for comparison. Each extracted region is then characterized by a feature set based on gray level histogram moments, as well as the age and gender of the subject. Using a receiver operating characteristic curve for each region, it was possible to define a Separating Power Factor (SPF) to rank each region's ability to separate healthy from AD brain images. When a set of regions selected according to their rank was input to a support vector machine classifier, classification results were similar to or slightly better than those obtained when using the whole gray matter voxels of the brain or all 116 regions as input features. Computational time was also reduced compared to these alternatives.
Brain region ranking for 18FDG-PET computer-aided diagnosis of Alzheimer's disease
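The abstract above ranks atlas regions by an ROC-derived Separating Power Factor and then feeds the top regions to an SVM. The sketch below illustrates that pipeline with scikit-learn on random synthetic data; using single-feature AUC distance from chance as a stand-in for the SPF, and the region count and feature construction, are assumptions for demonstration.

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_subjects, n_regions = 80, 116
# region_features: one summary feature per region per subject (e.g. a histogram moment)
region_features = rng.standard_normal((n_subjects, n_regions))
labels = rng.integers(0, 2, n_subjects)            # 0 = healthy, 1 = AD
region_features[labels == 1, :10] += 1.0           # make 10 regions informative

# Rank regions by how well each one alone separates the classes (AUC as a stand-in
# for the paper's Separating Power Factor).
auc = np.array([roc_auc_score(labels, region_features[:, r]) for r in range(n_regions)])
separating_power = np.abs(auc - 0.5)               # distance from chance level
ranked = np.argsort(separating_power)[::-1]

# Classify using only the top-k ranked regions.
top_k = ranked[:10]
scores = cross_val_score(SVC(kernel="linear"), region_features[:, top_k], labels, cv=5)
print(f"CV accuracy with top-{len(top_k)} regions: {scores.mean():.2f}")
```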
S1746809416300039
Patients suffering from acute respiratory distress syndrome (ARDS) require mechanical ventilation (MV) for breathing support. A lung model that captures patient specific behaviour can allow clinicians to optimise each patient's ventilator settings, and reduce the incidence of ventilator induced lung injury (VILI). This study develops a nonlinear autoregressive model (NARX), incorporating pressure dependent basis functions and time dependent resistance coefficients. The goal was to capture nonlinear lung mechanics, with an easily identifiable model, more accurately than the first order model (FOM). Model coefficients were identified for 27 ARDS patient data sets including nonlinear, clinically useful inspiratory pauses. The model successfully described all parts of the airway pressure curve for 25 data sets. Coefficients that captured airway resistance effects enabled end-inspiratory and expiratory relaxation to be accurately described. Basis function coefficients were also able to describe an elastance curve across different PEEP levels without refitting, providing a more useful patient-specific model. The model thus has potential to allow clinicians to predict the effects of changes in ventilator PEEP levels on airway pressure, and thus determine optimal patient specific PEEP with less need for clinical input or testing.
Use of basis functions within a non-linear autoregressive model of pulmonary mechanics
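The abstract above compares a NARX model against the first-order model (FOM) of pulmonary mechanics. As background, the FOM is the standard single-compartment relation Paw(t) = E·V(t) + R·Q(t) + P0, which can be identified by ordinary least squares; the sketch below shows only that baseline step on invented synthetic data, not the authors' NARX model or basis functions.

```python
import numpy as np

# First-order model of passive respiratory mechanics:
#   Paw(t) = E * V(t) + R * Q(t) + P0
# Identify E (elastance), R (resistance) and P0 by linear least squares.
fs = 50.0
t = np.arange(0, 4, 1 / fs)
flow = np.where(t < 1.0, 0.5, np.where(t < 2.0, 0.0, -0.25))   # L/s: inspiration, pause, expiration
volume = np.cumsum(flow) / fs                                   # L
E_true, R_true, P0_true = 25.0, 10.0, 5.0                       # cmH2O/L, cmH2O.s/L, cmH2O
pressure = E_true * volume + R_true * flow + P0_true
pressure += 0.3 * np.random.default_rng(0).standard_normal(len(t))  # measurement noise

X = np.column_stack([volume, flow, np.ones_like(t)])
(E_hat, R_hat, P0_hat), *_ = np.linalg.lstsq(X, pressure, rcond=None)
print(f"E={E_hat:.1f} cmH2O/L, R={R_hat:.1f} cmH2O.s/L, P0={P0_hat:.1f} cmH2O")
```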
S1746809416300040
Different approaches have been proposed to select features and channels for pattern recognition classification in myoelectric upper-limb prostheses. The goal of this work is to use deterministic methods to select the feature–channel pairs that best classify hand postures at different limb positions. Two selection methods were tried. One is a distance-based feature selection (DFSS) that determines a separability index using the Mahalanobis distance between classes. The second is a correlation-based feature selection (CFSS) that measures the amount of mutual information between features and classes. To evaluate the performance of these selection methods, EMG data from 10 able-bodied subjects were acquired while performing 5 hand postures at 9 different arm positions, and 10 time-domain and frequency-domain features were extracted. Classification accuracy using both methods was always higher than when including all features and channels, and showed a slight improvement over classification using the state-of-the-art TD features when evaluated against limb position variation. The CFSS method always used fewer feature–channel pairs than the DFSS method. With both methods, selection of channels placed on the posterior side of the forearm was significantly more frequent than on the anterior side. Such methods could be used as fast screening filters to select the features and channels that best classify different hand postures at different arm positions.
Distance and mutual information methods for EMG feature and channel subset selection for classification of hand movements
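The abstract above scores feature–channel pairs by their mutual information with the posture class (the CFSS criterion) and keeps the top-ranked pairs. A minimal sketch of that selection step with scikit-learn is shown below on random synthetic data; the number of channels, features and selected pairs are invented, and the paper's DFSS (Mahalanobis) branch is not shown.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_trials, n_channels, n_features = 300, 8, 10
X = rng.standard_normal((n_trials, n_channels * n_features))  # one column per feature-channel pair
y = rng.integers(0, 5, n_trials)                              # 5 hand postures
X[:, :12] += y[:, None] * 0.8                                  # make 12 pairs informative

# Information-based selection: score each feature-channel pair by its
# mutual information with the posture label, then keep the top-ranked pairs.
mi = mutual_info_classif(X, y, random_state=0)
top_pairs = np.argsort(mi)[::-1][:12]

acc_all = cross_val_score(SVC(), X, y, cv=5).mean()
acc_sel = cross_val_score(SVC(), X[:, top_pairs], y, cv=5).mean()
print(f"all pairs: {acc_all:.2f}, selected pairs: {acc_sel:.2f}")
```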
S1746809416300052
Swallowing disorders affect thousands of patients every year. Currently utilized techniques to screen for this condition are questionably reliable and are often deployed in non-standard manners, so efforts have been put forth to generate an instrumental alternative based on cervical auscultation. These physiological signals with low signal-to-noise ratios are traditionally denoised by well-known wavelets in a discrete, single tree wavelet decomposition. We attempt to improve this widely accepted method by designing a matched wavelet for cervical auscultation signals to provide better denoising capabilities and by implementing a dual-tree complex wavelet transform to maintain time invariant properties of this filtering. We found that our matched wavelet did offer better denoising capabilities for cervical auscultation signals compared to several popular wavelets and that the dual tree complex wavelet transform did offer better time invariance when compared to the single tree structure. We conclude that this new method of denoising cervical auscultation signals could benefit applications that can spare the required computation time and complexity.
A matched dual-tree wavelet denoising for tri-axial swallowing vibrations
S1746809416300143
This work describes an algorithm intended to detect the beat-to-beat heart rate from the ballistocardiogram (BCG) obtained from seated subjects. The algorithm is based on the continuous wavelet transform with splines, which enables the selection of an optimum scale for reducing noise and mechanical interferences. The first step of the algorithm is a learning phase in which the first four heartbeats in the BCG are detected to define initial thresholds, search windows and interval limits. The learned parameters serve to identify the next heartbeat and are readapted after each heartbeat detected to follow the heart rate and signal-amplitude changes. To evaluate the agreement between results from the algorithm and the heart rate obtained from the ECG, a Bland–Altman plot has been used to compare them for seven seated subjects. The mean error obtained was −0.03 beats/min and the 95% confidence interval (±2 SD) was ±2.7 beats/min, which is within the accuracy limits recommended by the Association for the Advancement of Medical Instrumentation (AAMI) standard for heart rate meters.
An algorithm for beat-to-beat heart rate detection from the BCG based on the continuous spline wavelet transform
S1746809416300155
Jitter is a phenomenon caused by the perturbation of the glottal cycle lengths due to the quasi-periodic oscillation of the vocal folds in the production of voice. It can be modeled as a random phenomenon described by the deviations of the glottal cycle length from a mean value. Its study is motivated by important applications such as aiding the identification of voices with pathological characteristics, in which jitter values are large, whereas a normal voice naturally has a low level of jitter. The aim of this paper is to construct a stochastic model of jitter using a two-mass mechanical model of the vocal folds, assuming complete right–left symmetry of the vocal folds and considering their motion only in the horizontal direction. The stiffnesses taken into account in the model are treated as stochastic processes, and models for them are proposed. Glottal signals and voice signals are generated with jitter, and the probability density function of the fundamental frequency is constructed for several values of the hyperparameters that control the level of jitter.
Jitter generation in voice signals produced by a two-mass stochastic mechanical model
S1746809416300167
Nearby scalp channels in multi-channel EEG data exhibit high correlation. A question that naturally arises is whether it is necessary to record signals from all the electrodes in a group of closely spaced electrodes in a typical measurement setup. One could save on the number of channels recorded if it were possible to reconstruct the omitted channels to the accuracy needed for identifying the relevant information (say, the spectral content of the signal) required to carry out a preliminary diagnosis. We address this problem from a compressed sensing perspective and propose a measurement and reconstruction scheme. Working with a publicly available EEG database, we demonstrate that up to 12 channels in the 10-10 system of electrode placement can be estimated within an average error of 2% from recordings of the remaining channels. As a limiting case, all the channels of the 10-10 system can be estimated from recordings on the sparser 10-20 system within an error of less than 20% in each of the significant bands: delta, theta, beta and alpha.
Reconstruction of EEG from limited channel acquisition using estimated signal correlation
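The abstract above reconstructs omitted channels from recorded ones by exploiting inter-channel correlation within a compressed sensing framework. The sketch below shows only the simpler underlying intuition, estimating missing channels by least squares from correlated recorded channels on synthetic data; it is not the authors' compressed sensing scheme, and the channel counts and noise level are invented.

```python
import numpy as np

rng = np.random.default_rng(3)
n_samples, n_kept, n_omitted = 5000, 16, 4

# Synthetic correlated "EEG": omitted channels are linear mixtures of kept ones plus noise.
kept = rng.standard_normal((n_samples, n_kept))
mixing = rng.standard_normal((n_kept, n_omitted))
omitted = kept @ mixing + 0.1 * rng.standard_normal((n_samples, n_omitted))

# Learn the linear relation on a training portion of the recording...
train = slice(0, 4000)
W, *_ = np.linalg.lstsq(kept[train], omitted[train], rcond=None)

# ...then reconstruct the omitted channels from the kept ones on unseen data.
test = slice(4000, None)
reconstructed = kept[test] @ W
rel_err = np.linalg.norm(reconstructed - omitted[test]) / np.linalg.norm(omitted[test])
print(f"relative reconstruction error: {100 * rel_err:.1f}%")
```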
S1746809416300179
In this paper, we present a novel framework for parcellation of a brain region into functional subROIs (Sub-Region-of-Interest) based on their connectivity patterns with other brain regions. By utilising previously established neuroanatomy information, the proposed method aims at finding spatially continuous, functionally consistent subROIs in a given brain region. The proposed framework relies on (1) a sparse spatially-regularized fused lasso regression model for encouraging spatially and functionally adjacent voxels to share similar regression coefficients; (2) an iterative merging and adaptive parameter tuning process; (3) a Graph-Cut optimization algorithm for assigning overlapped voxels into separate subROIs. Our simulation results demonstrate that the proposed method could reliably yield spatially continuous and functionally consistent subROIs. We applied the method to resting-state fMRI data obtained from normal subjects and explored connectivity to the putamen. Two distinct functional subROIs could be parcellated out in the putamen region in all subjects. This approach provides a way to extract functional subROIs that can then be investigated for alterations in connectivity in diseases of the basal ganglia, for example in Parkinson's disease.
Connectivity-based parcellation of functional SubROIs in putamen using a sparse spatially regularized regression model
S1746809416300180
Image Quality Assessment (IQA) plays an important role in assessing any new hardware, software, image acquisition technique, image reconstruction or post-processing algorithm, etc. In the past decade, various IQA methods have been designed to evaluate natural images. Some have been applied to medical images, but their use has been limited. This paper reviews recent advances in IQA for medical images, mainly for Magnetic Resonance Imaging (MRI), Computed Tomography (CT), and ultrasonic imaging. Thus far, there is no gold standard of IQA for medical images, owing to the various difficulties in designing a suitable IQA method and to the many different image characteristics and contents across imaging modalities. No-reference IQA (NR-IQA) is recommended for assessing medical images because there is no perfect reference image in real-world medical imaging. We discuss and comment on some useful and interesting IQA methods, and then suggest several important factors to be considered in designing a new IQA method for medical images. There is still great potential for research in this area.
Review of medical image quality assessment
S1746809416300192
In many healthcare applications, artifacts mask or corrupt important features of electrocardiogram (ECG) signals. In this paper we describe a revised scheme for ECG signal denoising based on a recursive filtering methodology. We suggest a suitable class of kernel functions for removing artifacts from the ECG signal, starting from the noise frequencies in the Fourier domain. Our approach has low computational requirements, which offers the possibility of implementing the scheme directly on mobile computing devices. The proposed scheme allows local denoising, and hence real-time visualization of the signal, by means of a strategy based on boundary conditions. Experiments on real datasets have been carried out in order to test the proposed algorithm in terms of computation and accuracy. Finally, comparative results with other well-known denoising methods are shown.
A revised scheme for real-time ECG signal denoising based on recursive filtering
S1746809416300209
A multimodal medical image fusion method based on the discrete fractional wavelet transform (DFRWT) is presented in this paper. Source medical images are decomposed by the DFRWT at different fractional orders p in the domain (0,1], and the sparsity of the mode coefficients in the subband images changes with p. In the proposed method, the non-sparse character of the mode coefficients at low p orders is exploited to enhance the correlation between subband coefficients. The coefficients of all subbands are fused using a weighted regional variance rule. Finally, the inverse DFRWT is applied to obtain the fused image. Subjective and objective analyses of the results, and comparisons with other multiresolution-domain techniques, show the effectiveness of the proposed scheme in fusing multimodal medical images.
Medical image fusion using discrete fractional wavelet transform
S1746809416300210
Schizophrenia is a severe psychiatric disorder that lacks any established diagnostic test and is currently diagnosed on the basis of externally observed behavioral symptoms. Functional magnetic resonance imaging (fMRI) is helpful in capturing abnormalities in the brain activation patterns of schizophrenia patients in comparison to healthy subjects. Since the dimension of fMRI data is huge while the number of samples is limited, dimensionality reduction is essential. This research work therefore utilizes pattern recognition techniques to reduce the dimension of fMRI data and to develop an effective computer-aided diagnosis method for schizophrenia. A three-phase method is proposed which involves spatial clustering of whole-brain voxels of individual 3-D spatial maps (β-maps or independent component score-maps), representation of each cluster using singular value decomposition, followed by a novel hybrid multivariate forward feature selection method to obtain an optimal subset of relevant and non-redundant features for classification. A decision model is built using a support vector machine classifier with a leave-one-out cross-validation scheme. Sensitivity, specificity and classification accuracy are used to evaluate the performance of the decision model. The efficacy of the proposed method is evaluated on two distinct balanced datasets, D1 and D2 (captured with 1.5T and 3T MRI scanners, respectively). D1 and D2 comprise auditory oddball task fMRI data of schizophrenia patients and age-matched healthy subjects, derived from the publicly available FBIRN multisite dataset. Best classification accuracies of 92.6% and 94% are achieved for D1 and D2, respectively, with the proposed method, which exhibits superior performance over existing methods. In addition, the discriminative brain regions corresponding to the optimal subset of features are identified and are in accordance with the literature. The proposed method is able to effectively classify schizophrenia patients and healthy subjects and may thus be utilized as a diagnostic tool.
A combination of singular value decomposition and multivariate feature selection method for diagnosis of schizophrenia using fMRI
S1746809416300222
Enhancement of ultrasound (US) images is required for proper visual inspection and further pre-processing since US images are generally corrupted with speckle. In this paper, a new approach based on non-local means (NLM) method is proposed to remove the speckle noise in the US images. Since the interpolated final Cartesian image produced from uncompressed ultrasound data contaminated with fully developed speckle can be represented by a Gamma distribution, a Gamma model is incorporated in the proposed denoising procedure. In addition, the scale and shape parameters of the Gamma distribution are estimated using the maximum likelihood (ML) method. Bias due to speckle noise is expressed using these parameters and is removed from the NLM filtered output. The experiments on phantom images and real 2D ultrasound datasets show that the proposed method outperforms other related well-accepted methods, both in terms of objective and subjective evaluations. The results demonstrate that the proposed method has a better performance in both speckle reduction and preservation of structural features.
Speckle reduction in medical ultrasound images using an unbiased non-local means method
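The abstract above models fully developed speckle with a Gamma distribution, estimates its shape and scale by maximum likelihood, and uses them to express a bias that is removed from the NLM output. The sketch below shows only the ML estimation step with SciPy on synthetic Gamma data; the paper's actual bias expression and NLM filter are not reproduced here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# A homogeneous region of a speckled image: fully developed speckle is modelled
# here as Gamma-distributed intensity values.
shape_true, scale_true = 4.0, 20.0
region = rng.gamma(shape_true, scale_true, size=20000)

# Maximum-likelihood estimation of the Gamma shape and scale (location fixed at 0).
shape_hat, _, scale_hat = stats.gamma.fit(region, floc=0)
print(f"ML estimates: shape={shape_hat:.2f}, scale={scale_hat:.2f}")

# Moments of the fitted model, from which a speckle bias term could be expressed
# and subtracted from the NLM-filtered output (the exact expression is paper-specific).
gamma_mean = shape_hat * scale_hat
gamma_var = shape_hat * scale_hat ** 2
```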
S1746809416300234
This review provides the first comprehensive, technically focused overview of algorithms developed for the automation of inspired oxygen control in preterm infants. The paper has three main parts: the first provides an overview of existing algorithms, the second presents the major design challenges of automation, and the third proposes directions for future research and development of improved controllers. In the first section, the algorithms are classified into four categories, namely rule-based, proportional-integral-derivative, adaptive, and robust. The second section discusses variability in oxygenation, technological shortcomings of infant monitoring, and safety considerations as the three major challenges in designing automated controllers. The paper finally suggests some future directions for improving automated oxygen control in the preterm infant. It suggests that, given the nature of the physiological system, an optimal algorithm must be capable of making continuous adjustments and should be adaptive, including to alterations in the severity of lung dysfunction and position on the oxygen saturation curve. It is also suggested that future controllers should utilise additional inputs for tasks such as oximeter signal validation and real-time prediction of hypoxic events.
Automated control of inspired oxygen for preterm infants: What we have and what we need
S1746809416300246
We present the Iterative/Causal Subspace Tracking framework (I/CST) for reducing noise in continuously monitored quasi-periodic biosignals. Signal reconstruction of the basic segments of the noisy signal (e.g. beats) is achieved by projection onto a reduced space on which probabilistic tracking is performed. The attractiveness of the presented method lies in the fact that the subspace, or manifold, is learned by incorporating temporal, morphological, and signal elevation constraints, so that segment samples with similar shapes, and that are close in time and elevation, are also close in the subspace representation. Evaluation of the algorithm's effectiveness on the intracranial pressure (ICP) signal serves as a practical illustration of how it can operate in clinical conditions on routinely acquired biosignals. The reconstruction accuracy of the system is evaluated on an idealized 20-min ICP recording established from the average ICP of patients monitored for various ICP-related conditions. The reconstruction accuracy with respect to this ground truth signal is tested in the presence of varying levels of additive white Gaussian noise (AWGN) and Poisson noise, and shows significant increases of 758% and 396% in the average signal-to-noise ratio (SNR).
Noise reduction in intracranial pressure signal using causal shape manifolds
S1746809416300337
The stage and grade of psoriasis severity are clinically relevant and important for dermatologists, as they support a reliable and accurate decision-making process for better therapy. This paper proposes a novel psoriasis risk assessment system (pRAS) for stratification of psoriasis severity from colored psoriasis skin images of subjects of Asian Indian ethnicity. A machine learning paradigm is adopted for risk stratification of psoriasis disease grades, using offline training and online testing images. We design four kinds of pRAS systems, using two kinds of classifiers (support vector machine (SVM) and decision tree (DT)) during the training and testing phases and two kinds of feature selection criteria (Principal Component Analysis (PCA) and Fisher Discriminant Ratio (FDR)), thus leading to an exhaustive comparison between these four systems. Our database consisted of 848 psoriasis images with five severity grades: healthy, mild, moderate, severe and very severe, consisting of 383, 47, 245, 145, and 28 images respectively. The pRAS system computes 859 color and grayscale image features. Using a cross-validation protocol with a K-fold procedure, the pRAS system based on the SVM with FDR combination and the combined color and grayscale feature set gives an accuracy of 99.92%. Several performance evaluation parameters, such as feature retaining power, aggregated feature effect and system reliability, are computed, meeting our assumptions and hypotheses. Our results are promising and the pRAS system is able to stratify psoriasis disease severity.
A novel approach to multiclass psoriasis disease risk stratification: Machine learning paradigm
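The abstract above ranks image features by the Fisher Discriminant Ratio before classification with an SVM. The sketch below illustrates that combination on random synthetic data, using the two-class form of the FDR for simplicity (the paper works with five severity grades); the feature counts and selection size are assumptions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def fisher_discriminant_ratio(X, y):
    """Per-feature Fisher Discriminant Ratio, here in its two-class form:
    (mean difference)^2 / (sum of class variances)."""
    X0, X1 = X[y == 0], X[y == 1]
    num = (X0.mean(axis=0) - X1.mean(axis=0)) ** 2
    den = X0.var(axis=0) + X1.var(axis=0) + 1e-12
    return num / den

rng = np.random.default_rng(5)
X = rng.standard_normal((400, 200))          # 200 color/grayscale image features
y = rng.integers(0, 2, 400)                  # two severity grades for simplicity
X[y == 1, :15] += 1.2                        # 15 genuinely discriminative features

fdr = fisher_discriminant_ratio(X, y)
selected = np.argsort(fdr)[::-1][:15]        # keep the highest-FDR features
acc = cross_val_score(SVC(kernel="rbf"), X[:, selected], y, cv=10).mean()
print(f"10-fold CV accuracy with FDR-selected features: {acc:.2f}")
```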
S1746809416300349
Electromyogram signals contain information for predicting muscle force that can be used in human–machine interaction and in medical applications such as the control of prosthetic hands. Different methods exist for estimating the SEMG–force relation. However, muscle dynamic variations during voluntary contractions due to fatigue have been neglected in the identification stage, which makes such models inapplicable to normal working conditions. We developed a model based on the Laguerre expansion technique (LET) to identify the dynamic SEMG–force relation and investigate the presence of fatigue through kernel analysis. Our proposed data acquisition protocol was used to induce fatigue in the muscles involved in the act of grasping, enabling us to study the effects of muscle fatigue. The results of LET, in comparison with fast orthogonal search and parallel cascade identification, which were also able to accurately identify the desired dynamics, represent improvements of 15% and 3.8% in prediction fitness, respectively. Moreover, by extracting the median frequency (MDF) of the recorded SEMG signals and tracking its changes over time, the existence of muscle fatigue was studied. The results showed that fatigue had an impact on the brachioradialis muscle. The first- and second-order kernels of the LET showed variations in the time and frequency domains similar to those of the MDF for the brachioradialis muscle, corresponding to the fatigue generation process. Employing the proposed model, the dynamics of the SEMG–force relation can be predicted and its variations due to muscle fatigue can also be investigated.
Dynamic modeling of SEMG–force relation in the presence of muscle fatigue during isometric contractions
S1751157713000527
In this paper the accuracy of five current approaches to quantifying the byline hierarchy of a scientific paper is assessed by measuring the ability of each to explain the variation in a composite empirical dataset. Harmonic credit explained 97% of the variation by including information about the number of coauthors and their position in the byline. In contrast, fractional credit, which ignored the byline hierarchy by allocating equal credit to all coauthors, explained less than 40% of the variation in the empirical dataset. The nearly 60% discrepancy in explanatory power between fractional and harmonic credit was accounted for by equalizing bias associated with the omission of relevant information about differential coauthor contribution. Including an additional parameter to describe a continuum of intermediate formulas between fractional and harmonic provided a negligible or negative gain in predictive accuracy. By comparison, two parametric models from the bibliometric literature both had an explanatory capacity of approximately 80%. In conclusion, the results indicate that the harmonic formula provides a parsimonious solution to the problem of quantifying the byline hierarchy. Harmonic credit allocation also accommodates specific indications of departures from the basic byline hierarchy, such as footnoted information stating that some or all coauthors have contributed equally or indicating the presence of a senior author.
Harmonic coauthor credit: A parsimonious quantification of the byline hierarchy
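The harmonic credit discussed in the abstract above allocates to the i-th of N coauthors a share proportional to 1/i, normalised so that the shares sum to one, whereas fractional credit gives every coauthor 1/N. A minimal sketch of the two formulas, with an invented 5-author example:

```python
def harmonic_credit(n_authors):
    """Credit share of each byline position i (1-indexed): (1/i) / sum_k 1/k."""
    denom = sum(1.0 / k for k in range(1, n_authors + 1))
    return [(1.0 / i) / denom for i in range(1, n_authors + 1)]

def fractional_credit(n_authors):
    """Equal credit to all coauthors, ignoring the byline hierarchy."""
    return [1.0 / n_authors] * n_authors

# Example: a 5-author paper.
for i, (h, f) in enumerate(zip(harmonic_credit(5), fractional_credit(5)), start=1):
    print(f"author {i}: harmonic={h:.3f}, fractional={f:.3f}")
```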
S1751157714000510
Equalizing bias (EqB) is a systematic inaccuracy which arises when authorship credit is divided equally among coauthors who have not contributed equally. As the number of coauthors increases, the diminishing amount of credit allocated to each additional coauthor is increasingly composed of equalizing bias such that when the total number of coauthors exceeds 12, the credit score of most coauthors is composed mostly of EqB. In general, EqB reverses the byline hierarchy and skews bibliometric assessments by underestimating the contribution of primary authors, i.e. those adversely affected by negative EqB, and overestimating the contribution of secondary authors, those benefitting from positive EqB. The positive and negative effects of EqB are balanced and sum to zero, but are not symmetrical. The lack of symmetry exacerbates the relative effects of EqB, and explains why primary authors are increasingly outnumbered by secondary authors as the number of coauthors increases. Specifically, for a paper with 50 coauthors, the benefit of positive EqB goes to 39 secondary authors while the burden of negative EqB befalls 11 primary authors. Relative to harmonic estimates of their actual contribution, the EqB of the 50 coauthors ranged from <−90% to >350%. Senior authorship, when it occurs, is conventionally indicated by a corresponding last author and recognized as being on a par with a first author. If senior authorship is not recognized, then the credit lost by an unrecognized senior author is distributed among the other coauthors as part of their EqB. The powerful distortional effect of EqB is compounded in bibliometric indices and performance rankings derived from biased equal credit. Equalizing bias must therefore be corrected at the source by ensuring accurate accreditation of all coauthors prior to the calculation of aggregate publication metrics.
Reversing the byline hierarchy: The effect of equalizing bias on the accreditation of primary, secondary and senior authors
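Equalizing bias, as described in the abstract above, is the gap between equal (fractional) credit and the credit implied by byline position. The sketch below reuses the harmonic formula as the reference estimate of actual contribution; with 50 coauthors it reproduces the 11 primary / 39 secondary split mentioned in the abstract, and the per-author biases sum to zero.

```python
def harmonic_credit(n_authors):
    denom = sum(1.0 / k for k in range(1, n_authors + 1))
    return [(1.0 / i) / denom for i in range(1, n_authors + 1)]

def equalizing_bias(n_authors):
    """EqB per byline position: fractional credit (1/N) minus harmonic credit.
    Positive values benefit secondary authors, negative values burden primary authors."""
    fractional = 1.0 / n_authors
    return [fractional - h for h in harmonic_credit(n_authors)]

eqb = equalizing_bias(50)
primary = sum(1 for b in eqb if b < 0)     # authors under-credited by equal division
secondary = sum(1 for b in eqb if b > 0)   # authors over-credited by equal division
print(f"50 coauthors: {primary} primary (negative EqB), {secondary} secondary (positive EqB)")
print(f"EqB sums to {sum(eqb):.1e}")
```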
S175115771500005X
There are two versions in the literature of counting co-author pairs. Whereas the first version leads to a two-dimensional (2-D) power function distribution, the other version yields three-dimensional (3-D) graphs that are fully rotatable, so that their shapes are visible in space from all possible points of view. As a result, these new 3-D computer graphs, called “Social Gestalts”, deliver more comprehensive information about social network structures than simple 2-D power function distributions. The mathematical model of Social Gestalts and the corresponding methods for the 3-D visualization and animation of collaboration networks are presented in Part I of this paper. Fundamental findings in psychology/sociology and physics are used as a basis for the development of this model. The application of these new methods to male and to female networks is shown in Part II. After regression analysis, the visualized Social Gestalts are nearly identical to the corresponding empirical distributions (R² > 0.99). The structures of female co-authorship networks differ markedly from the structures of the male co-authorship networks. For female co-author pairs' networks, an accentuation of productivity dissimilarities of the pairs becomes visible, whereas for male co-author pairs' networks, an accentuation of productivity similarities of the pairs is expressed.
Who is collaborating with whom? Part I. Mathematical model and methods for empirical testing
S1751157715000188
The theoretical approach of the mathematical model of Social Gestalts and the corresponding methods for the 3-D visualization and animation of collaboration networks are presented in Part I. The application of these new methods to male and female networks is shown in Part II. After regression analysis, the visualized Social Gestalts are nearly identical to the corresponding empirical distributions (R² > 0.99). The structures of female co-authorship networks differ markedly from the structures of the male co-authorship networks. For female co-author pairs' networks, an accentuation of productivity dissimilarities of the pairs becomes visible, whereas for male co-author pairs' networks, an accentuation of productivity similarities of the pairs is expressed.
Who is collaborating with whom? Part II. Application of the methods to male and to female networks
S1751157715000218
This study investigates how scientific performance in terms of publication rate is influenced by the gender, age and academic position of the researchers. Previous studies have shown that these factors are important variables when analysing scientific productivity at the individual level. What is new with our approach is that we have been able to identify the relative importance of the different factors based on regression analyses (OLS) of each major academic field. The study, involving almost 12,400 Norwegian university researchers, shows that academic position is more important than age and gender. In the fields analysed, the regression model can explain 13.5–19 per cent of the variance in the publication output at the levels of individuals. This also means that most of the variance in publication rate is due to other factors.
Publication rate expressed by age, gender and academic position – A large-scale analysis of Norwegian academic staff
S1751157715200302
An elite segment of the academic output gap between Denmark and Norway was examined using harmonic estimates of publication credit for contributions to Science and Nature in 2012 and 2013. Denmark still leads but the gap narrowed in 2013 as Norway's credit increased 58%, while Denmark's credit increased only 5.4%, even though Norway had 36% fewer, and Denmark 40% more, coauthor contributions than in 2012. Concurrently, the credit produced by the least productive half of the contributions rose tenfold from 0.9% to 10.1% for Norway, but dropped from 7.2% to 5.7% for Denmark. Overall, contributory inequality as measured by the Gini coefficient, fell from 0.78 to 0.51 for Norway, but rose from 0.63 to 0.68 for Denmark. Neither gap narrowing nor the positive association between reduced contributory inequality and increased credit were detected by conventional metrics. Conventional metrics are confounded by equalizing bias (EqB) which favours small contributors at the expense of large contributors, and which carries an element of reverse meritocracy and systemic injustice into bibliometric performance assessment. EqB was corrected by using all relevant byline information from every coauthored publication in the source data. This approach demonstrates the feasibility of using EqB-corrected publication credit in gap assessment at the national level.
Contributory inequality alters assessment of academic output gap between comparable countries
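The abstract above measures contributory inequality with the Gini coefficient of coauthor credit shares. The sketch below shows one common way to compute it (the relative mean absolute difference formulation); the credit values in the example are invented, not taken from the paper's data.

```python
import numpy as np

def gini(values):
    """Gini coefficient via the relative mean absolute difference:
    0 = perfectly equal shares, values approaching 1 = highly unequal."""
    x = np.sort(np.asarray(values, dtype=float))
    n = len(x)
    cum = np.cumsum(x)
    # Equivalent to sum_{i,j} |x_i - x_j| / (2 * n^2 * mean(x))
    return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

# Hypothetical harmonic credit totals for a set of coauthor contributions.
credits = [0.02, 0.03, 0.05, 0.08, 0.12, 0.20, 0.50, 1.10, 2.4, 3.1]
print(f"Gini = {gini(credits):.2f}")
```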
S1751157715300717
In this paper, we try to answer two questions about any given scientific discipline: first, how important is each subfield and second, how does a specific subfield influence other subfields? We modify the well-known open-system Leontief Input–Output Analysis in economics into a closed-system analysis focusing on eigenvalues and eigenvectors and the effects of removing one subfield. We apply this method to the subfields of physics. This analysis has yielded some promising results for identifying important subfields (for example the field of statistical physics has large influence while it is not among the largest subfields) and describing their influences on each other (for example the subfield of mechanical control of atoms is not among the largest subfields cited by quantum mechanics, but our analysis suggests that these fields are strongly connected). This method is potentially applicable to more general systems that have input–output relations among their elements.
Interrelations among scientific fields and their relative influences revealed by an input–output analysis
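The abstract above turns citation flows among subfields into a closed-system input–output problem, studying eigenvalues and eigenvectors and the effect of removing one subfield. The sketch below illustrates that idea with NumPy on an invented 4-subfield flow matrix (the subfield names and numbers are made up, and the normalisation choice is an assumption, not the paper's exact formulation).

```python
import numpy as np

# Hypothetical citation flows among four subfields (row cites column).
subfields = ["stat-mech", "cond-mat", "quantum", "atomic-control"]
C = np.array([
    [10.0, 30.0, 12.0,  4.0],
    [25.0, 40.0,  8.0,  2.0],
    [ 9.0,  6.0, 35.0, 14.0],
    [ 3.0,  2.0, 16.0, 12.0],
])

def influence(matrix):
    """Dominant eigenvector of the column-normalized flow matrix
    (a closed-system analogue of input-output importance)."""
    M = matrix / matrix.sum(axis=0, keepdims=True)
    vals, vecs = np.linalg.eig(M)
    v = np.abs(np.real(vecs[:, np.argmax(np.real(vals))]))
    return v / v.sum()

base = influence(C)
for name, w in zip(subfields, base):
    print(f"{name:>15}: {w:.3f}")

# Effect of removing one subfield: recompute influence on the reduced system.
keep = [i for i in range(len(subfields)) if subfields[i] != "stat-mech"]
reduced = influence(C[np.ix_(keep, keep)])
print("without stat-mech:", dict(zip([subfields[i] for i in keep], reduced.round(3))))
```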
S1751157715300766
Each year, researchers publish an immense number of scientific papers. While some receive many citations, others receive none. Here we investigate whether any of this variance can be explained by the choice of words in a paper's abstract. We find that doubling the word frequency of an average abstract increases citations by 0.70%. We also find that journals which publish papers whose abstracts are shorter and contain more frequently used words receive slightly more citations per paper. Specifically, adding a 5 letter word to an abstract decreases the number of citations by 0.02%. These results are consistent with the hypothesis that the style in which a paper's abstract is written bears some relation to its scientific impact.
The advantage of simple paper abstracts
S1751157715301048
We discuss, at the macro-level of nations, the contribution of research funding and rate of international collaboration to research performance, with important implications for the “science of science policy”. In particular, we cross-correlate suitable measures of these quantities with a scientometric-based assessment of scientific success, studying both the average performance of nations and their temporal dynamics in the space defined by these variables during the last decade. We find significant differences among nations in terms of efficiency in turning (financial) input into bibliometrically measurable output, and we confirm that growth of international collaboration positively correlates with scientific success, with significant benefits brought by EU integration policies. Various geo-cultural clusters of nations naturally emerge from our analysis. We critically discuss the factors that potentially determine the observed patterns.
Investigating the interplay between fundamentals of national research systems: Performance, investments and international collaborations
S1875952116300040
The pre-show experience is a significant part of the movie industry. Moviegoers arrive, on average, 24 min before the previews start. Previews have been a part of the movie experience for more than a hundred years and are a culturally significant aspect of the whole experience. Over the last decade, the pre-movie in-theatre experience has grown into a $600 million industry, and this growth continues to accelerate: since 2012, the industry has increased by 150%. Consequently, there is an industry-wide demand for innovation in the pre-movie area. In this paper, we describe Paths, an innovative multiplayer real-time socially engaging game that we designed, developed and evaluated. An iterative refinement application development methodology was used to create the game. The game may be played on any smartphone, and group interactions are viewed on the large theatre screen. This paper also reports on a quasi-experimental mixed-method study with repeated measures that was conducted to ascertain the effectiveness of this new game. The results show that Paths is very engaging, with elements of suspense, pleasant unpredictability, effective team building and crowd-pleasing characteristics.
Mobile devices at the cinema theatre
S1877750313000240
We investigate the performance of the HemeLB lattice-Boltzmann simulator for cerebrovascular blood flow, aimed at providing timely and clinically relevant assistance to neurosurgeons. HemeLB is optimised for sparse geometries, supports interactive use, and scales well to 32,768 cores for problems with ∼81 million lattice sites. We obtain a maximum performance of 29.5 billion site updates per second, with only an 11% slowdown for highly sparse problems (5% fluid fraction). We present steering and visualisation performance measurements and provide a model which allows users to predict the performance, thereby determining how to run simulations with maximum accuracy within time constraints.
Analysing and modelling the performance of the HemeLB lattice-Boltzmann simulation environment
S187775031300077X
Mean-field models of the mammalian cortex treat this part of the brain as a two-dimensional excitable medium. The electrical potentials, generated by the excitatory and inhibitory neuron populations, are described by nonlinear, coupled, partial differential equations that are known to generate complicated spatio-temporal behaviour. We focus on the model by Liley et al. (Network: Computation in Neural Systems 13 (2002) 67–113). Several reductions of this model have been studied in detail, but a direct analysis of its spatio-temporal dynamics has, to the best of our knowledge, never been attempted before. Here, we describe the implementation of implicit time-stepping of the model and the tangent linear model, and solving for equilibria and time-periodic solutions, using the open-source library PETSc. By using domain decomposition for parallelization, and iterative solving of linear problems, the code is capable of parsing some dynamics of a macroscopic slice of cortical tissue with a sub-millimetre resolution.
Open-source tools for dynamical analysis of Liley's mean-field cortex model
S1877750313001269
Computer simulation is finding a role in an increasing number of scientific disciplines, concomitant with the rise in available computing power. Marshalling this power facilitates new, more effective and different research than has been hitherto possible. Realizing this inevitably requires access to computational power beyond the desktop, making use of clusters, supercomputers, data repositories, networks and distributed aggregations of these resources. The use of diverse e-infrastructure brings with it the ability to perform distributed multiscale simulations. Accessing one such resource entails a number of usability and security problems; when multiple geographically distributed resources are involved, the difficulty is compounded. In this paper we present a solution, the Application Hosting Environment (AHE, available to download under the LGPL license from https://sourceforge.net/projects/ahe3/), which provides a Software as a Service layer on top of distributed e-infrastructure resources. We describe the performance and usability enhancements present in AHE version 3, and show how these have led to a high performance, easy to use gateway for computational scientists working in diverse application domains, from computational physics and chemistry, materials science to biology and biomedicine.
Flexible composition and execution of large scale applications on distributed e-infrastructures
S1877750314000465
We present the Multiscale Coupling Library and Environment: MUSCLE 2. This multiscale component-based execution environment has a simple to use Java, C++, C, Python and Fortran API, compatible with MPI, OpenMP and threading codes. We demonstrate its local and distributed computing capabilities and compare its performance to MUSCLE 1, file copy, MPI, MPWide, and GridFTP. The local throughput of MPI is about two times higher, so very tightly coupled code should use MPI as a single submodel of MUSCLE 2; the distributed performance of GridFTP is lower, especially for small messages. We test the performance of a canal system model with MUSCLE 2, where it introduces an overhead as small as 5% compared to MPI.
Distributed multiscale computing with MUSCLE 2, the Multiscale Coupling Library and Environment
S1877750315000125
The failure rate for vascular interventions (vein bypass grafting, arterial angioplasty/stenting) remains unacceptably high. Over the past two decades, researchers have applied a wide variety of approaches to investigate the primary failure mechanisms, neointimal hyperplasia and aberrant remodeling of the wall, in an effort to identify novel therapeutic strategies. Despite incremental progress, specific cause/effect linkages among the primary drivers of the pathology (hemodynamic factors, inflammatory biochemical mediators, cellular effectors) and the vascular occlusive phenotype remain lacking. We propose a multiscale computational framework of vascular adaptation to build a bridge between theory and experimental observation and to provide a method for the systematic testing of relevant clinical hypotheses. The cornerstone of our model is a feedback mechanism between environmental conditions and dynamic tissue plasticity, described at the cellular level with an agent-based model. Our implementation (i) is modular, (ii) starts from basic mechano-biology principles at the cell level and (iii) facilitates agile development of the model.
A multiscale computational framework to understand vascular adaptation
S1877750315000460
Cerebrovascular diseases such as brain aneurysms are a primary cause of adult disability. The flow dynamics in brain arteries, both during periods of rest and increased activity, are known to be a major factor in the risk of aneurysm formation and rupture. The precise relation is however still an open field of investigation. We present an automated ensemble simulation method for modelling cerebrovascular blood flow under a range of flow regimes. By automatically constructing and performing an ensemble of multiscale simulations, where we unidirectionally couple a 1D solver with a 3D lattice-Boltzmann code, we are able to model the blood flow in a patient artery over a range of flow regimes. We apply the method to a model of a middle cerebral artery, and find that this approach helps us to fine-tune our modelling techniques, and opens up new ways to investigate cerebrovascular flow properties.
An automated multiscale ensemble simulation approach for vascular blood flow
S1877750315000563
Dimensional analysis is a well known technique for checking the consistency of equations involving physical quantities, constituting a kind of type system. Various type systems for dimensional analysis, and its refinement to units-of-measure, have been proposed. In this paper, we detail the design and implementation of a units-of-measure system for Fortran, provided as a pre-processor. Our system is designed to aid adding units to existing code bases: units may be polymorphic and can be inferred. Furthermore, we introduce a technique for reporting to the user a set of critical variables which should be explicitly annotated with units to get the maximum amount of unit information with the minimal number of explicit declarations. This aids adoption of our type system in existing code bases, of which there are many in computational science projects.
Evolving Fortran types with inferred units-of-measure
S1877750315000575
With a broader distribution of personal smart devices and with an increasing availability of advanced navigation tools, more drivers can have access to real time information regarding the traffic situation. Our research focuses on determining how using the real time information about a transportation system could influence the system itself. We developed an agent based model to simulate the effect of drivers using real time information to avoid traffic congestion. Experiments reveal that the system's performance is influenced by the number of participants that have access to real time information. We also discover that, in certain circumstances, the system performance when all participants have information is no different from, and perhaps even worse than, when no participant has access to information.
Information impact on transportation systems
S1877750315300119
The paper describes the philosophy, design, functionality, and usage of the Python software toolbox Chaospy for performing uncertainty quantification via polynomial chaos expansions and Monte Carlo simulation. The paper compares Chaospy to similar packages and demonstrates a stronger focus on defining reusable software building blocks that can easily be assembled to construct new, tailored algorithms for uncertainty quantification. For example, a Chaospy user can in a few lines of high-level computer code define custom distributions, polynomials, integration rules, sampling schemes, and statistical metrics for uncertainty analysis. In addition, the software introduces some novel methodological advances, like a framework for computing Rosenblatt transformations and a new approach for creating polynomial chaos expansions with dependent stochastic variables.
Chaospy: An open source tool for designing methods of uncertainty quantification
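The abstract above describes assembling uncertainty quantification workflows from Chaospy's building blocks in a few lines of code. The sketch below is a minimal polynomial chaos workflow of that kind, written against the chaospy API as found in recent releases (generate_expansion, fit_regression, E, Std); names may differ across versions, and the forward model is a toy stand-in rather than anything from the paper.

```python
import numpy as np
import chaospy as cp

# Toy forward model with two uncertain inputs (a stand-in for a real simulation).
def model(rate, shift):
    t = np.linspace(0, 1, 50)
    return shift + np.exp(-rate * t)

# Joint distribution of the uncertain parameters.
distribution = cp.J(cp.Uniform(0.5, 1.5), cp.Normal(0.0, 0.2))

# Polynomial chaos expansion fitted by point collocation (regression).
expansion = cp.generate_expansion(3, distribution)
samples = distribution.sample(200, rule="sobol")
evaluations = np.array([model(*sample) for sample in samples.T])
approx = cp.fit_regression(expansion, samples, evaluations)

# Statistics of the surrogate.
mean = cp.E(approx, distribution)
std = cp.Std(approx, distribution)
print(mean[:5], std[:5])
```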
S1877750315300296
Waveform modeling is used in a vast number of applications, and different methods have therefore been developed that exhibit different strengths and weaknesses in accuracy, stability and computational cost. The computational cost remains a problem for most applications. Parallel programming has had a large impact on wave field modeling, since the solution of the wave equation can be divided into independent steps. The finite difference solution of the wave equation is particularly suitable for GPU acceleration; however, one problem is the rather limited global memory that current GPUs are equipped with. For this reason, most large-scale applications require multiple GPUs to be employed. This paper proposes a method to optimally distribute the workload on different GPUs by avoiding devices that run idle. This is done by using a list of active sub-domains, so that a certain sub-domain is activated only if the amplitude inside the sub-domain exceeds a given threshold. During the computation, every GPU checks whether its sub-domain needs to be active; if not, the GPU can be assigned to another sub-domain. The method was applied to synthetic examples to test its accuracy and efficiency. The results show that the method offers a more efficient utilization of multi-GPU computer architectures.
A two-scale method using a list of active sub-domains for a fully parallelized solution of wave equations
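The abstract above keeps a list of active sub-domains: a sub-domain participates in the computation only if the wavefield amplitude inside it exceeds a threshold, so GPUs are never assigned to quiet regions. The sketch below illustrates only that bookkeeping step in plain Python on a 1-D field; the threshold, sub-domain layout and round-robin assignment are illustrative assumptions, not the paper's scheduling policy.

```python
import numpy as np

def assign_active_subdomains(wavefield, subdomain_slices, threshold, n_gpus):
    """Return a round-robin GPU assignment for the sub-domains whose maximum
    amplitude exceeds the threshold; quiet sub-domains are skipped entirely."""
    active = [i for i, sl in enumerate(subdomain_slices)
              if np.abs(wavefield[sl]).max() > threshold]
    # Distribute only the active sub-domains over the available devices.
    return {sub: gpu % n_gpus for gpu, sub in enumerate(active)}

# Example: a 1-D wavefield split into 8 sub-domains; only two contain energy.
field = np.zeros(8000)
field[1000:1500] = np.sin(np.linspace(0, 20, 500))
field[5200:5600] = 0.5 * np.sin(np.linspace(0, 15, 400))
slices = [slice(i * 1000, (i + 1) * 1000) for i in range(8)]
print(assign_active_subdomains(field, slices, threshold=1e-3, n_gpus=4))
```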
S1877750316300278
Nowadays, High Performance Computing (HPC) systems commonly used in bioinformatics applications, such as genome sequencing, incorporate multi-processor architectures. Typically, most bioinformatics applications are multi-threaded and dominated by memory-intensive operations, and are not designed to take full advantage of these HPC capabilities. Therefore, the application end-user is responsible for optimizing application performance and improving scalability with various performance engineering concepts. Additionally, most HPC systems are operated in a multi-user (or multi-job) environment; thus, Quality of Service (QoS) methods are essential for balancing application performance, scalability and system utilization. We propose a QoS workflow that optimizes the balance between parallel efficiency and system utilization. Accordingly, our proposed optimization workflow advises the end user on selection criteria for resources and options for a given application and HPC system architecture. For example, the BWA-MEM algorithm is a popular and modern algorithm for aligning human genome sequences. We conducted various case studies on BWA-MEM using our optimization workflow; as a result, compared to a state-of-the-art baseline, application performance is improved by up to 67%, scalability is extended by up to 200%, parallel efficiency is improved by up to 39% and overall system utilization is increased by up to 38%.
Workflow optimization of performance and quality of service for bioinformatics application in high performance computing
S1877750316300308
This study aims to understand the confinement effect on the dynamical behaviour of a droplet immersed in an immiscible liquid subjected to a simple shear flow. The lattice Boltzmann method, which uses a forcing term and a recolouring algorithm to realize the interfacial tension effect and phase separation, respectively, is adopted to systematically study droplet deformation and breakup in confined conditions. The effects of capillary number, viscosity ratio of the droplet to the carrier liquid, and confinement ratio are studied. The simulation results are compared against theoretical predictions and the experimental and numerical data available in the literature. We find that increasing the confinement ratio enhances deformation, and the maximum deformation occurs at a viscosity ratio of unity. The droplet is found to orient more towards the flow direction with increasing viscosity ratio or confinement ratio. Also, the wall effect becomes more significant for confinement ratios larger than 0.4. Finally, the critical capillary number, above which droplet breakup occurs, is found to be mildly affected by the confinement for a viscosity ratio of unity. Upon increasing the confinement ratio, the critical capillary number increases for viscosity ratios less than unity, but decreases for viscosity ratios more than unity.
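For readers unfamiliar with the governing dimensionless groups, a small worked example with invented values (not the paper's parameters):

```python
def capillary_number(mu_carrier, shear_rate, radius, sigma):
    """Ca = mu * gamma_dot * R / sigma for a droplet of radius R in simple shear."""
    return mu_carrier * shear_rate * radius / sigma

def taylor_deformation(length, breadth):
    """Taylor deformation parameter D = (L - B) / (L + B) of the sheared droplet."""
    return (length - breadth) / (length + breadth)

# Illustrative SI values: carrier viscosity, shear rate, droplet radius, tension, wall gap
mu, gamma_dot, radius, sigma, gap = 0.1, 10.0, 1e-3, 5e-3, 4e-3
print("Ca =", capillary_number(mu, gamma_dot, radius, sigma))
print("confinement ratio 2R/H =", 2 * radius / gap)
print("D =", taylor_deformation(length=1.4e-3, breadth=0.8e-3))
```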
Droplet dynamics in confinement
S1877750316300321
Droplet collisions have complex dynamics, which can lead to many different regimes of outcomes. The head-on collision and bounce back regime has been observed in previous experiments but numerical simulations using macro- or mesoscale approaches have difficulties reproducing the phenomena, because the interfacial regions are not well resolved. Previous molecular dynamics (MD) simulations have not reproduced the bounce regime either but have reported the coalescence and/or shattering regimes. To scrutinize the dynamics and mechanisms of binary collisions especially the interfacial regions, head-on collision processes of two identical nano-droplets with various impact velocities both in vacuum and in an ambient of nitrogen gas are investigated by MD simulations. With the right combination of the impact velocity and ambient pressure, the head-on collision and bounce back phenomenon is successfully reproduced. The bounce phenomena are mainly attributed to the “cushion effect” of the in-between nitrogen molecules and evaporated water molecules from the two nano-droplets. The analysis has verified and also extended the current gas film theory for the bounce regime through including the effects of evaporated water molecules (vapour). Some similarities and some dissimilarities between nanoscale and macro-/meso-/microscale droplet collisions have been observed. The study provides unprecedented insight into the interfacial regions between two colliding droplets.
Bounce regime of droplet collisions: A molecular dynamics study
S1878778914000465
Bacteria communicate with one another by exchanging specific chemical signals called autoinducers. This process, also called quorum sensing, enables a cluster of bacteria to regulate their gene expression and behaviour collectively and synchronously, such as bioluminescence, virulence, sporulation and conjugation. Bacteria assess their population density by detecting the concentration of autoinducers. In Vibrio fischeri, which is a heterotrophic Gram-negative marine bacterium, quorum sensing relies on the synthesis, accumulation and subsequent sensing of a signalling molecule (3-oxo-C6-HSL, an N-acyl homoserine lactone or AHL). In this work, a data link layer protocol for a bacterial communication paradigm based on diffusion is introduced, considering two populations of bacteria as the transmitter node and the receiver node, instead of employing two individual bacteria. Moreover, some initial results are provided, which concern the application of the Stop-N-Wait Automatic Repeat reQuest (SW-ARQ) scheme to the proposed model. The performance of the system is then evaluated in terms of transmission time, frame error rate, energy consumption and transmission efficiency.
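A rough sketch of how the SW-ARQ transmission time can be estimated over a slow diffusive channel; the cost model and every number below are illustrative assumptions, not the paper's simulation:

```python
def expected_transmissions(frame_error_rate):
    """Mean number of attempts per frame under Stop-N-Wait with independent errors."""
    return 1.0 / (1.0 - frame_error_rate)

def sw_arq_total_time(n_frames, t_frame, t_ack, t_propagation, frame_error_rate):
    """Each attempt costs frame emission + ACK emission + two propagation delays."""
    per_attempt = t_frame + t_ack + 2.0 * t_propagation
    return n_frames * expected_transmissions(frame_error_rate) * per_attempt

# Hypothetical values (seconds) for a diffusion-dominated molecular channel
print(sw_arq_total_time(n_frames=10, t_frame=100.0, t_ack=20.0,
                        t_propagation=300.0, frame_error_rate=0.1))
```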
Performance of SW-ARQ in bacterial quorum communications
S1878778915000617
Quorum sensing (QS) is used to describe the communication between bacterial cells, whereby a coordinated population response is controlled through the synthesis, accumulation and subsequent sensing of specific diffusible chemical signals called autoinducers, enabling a cluster of bacteria to regulate gene expression and behaviour collectively and synchronously, and assess their own population. As a promising method of molecular communication, bacterial populations can be programmed as bio-transceivers to establish information transmission using molecules. In this work, to investigate the key features for molecular communication, a bacterial QS system is introduced, which contains two clusters of bacteria, specifically Vibrio fischeri, as the transmitter node and receiver node, and the diffusive channel. The transmitted information is represented by the concentration of autoinducers with on–off keying (OOK) modulation. In addition, to achieve better reliability, transmission efficiency and channel throughput performance, different Automatic Repeat reQuest (ARQ) protocols are taken into consideration. This configuration is investigated via simulation and the consequent results discussed. The performance of the system is evaluated in terms of transmission time, efficiency, bit error rate (BER) and channel throughput. Results show that Selective-Repeat (SR-ARQ) performs better than Go-Back-N (GBN-ARQ), while the performance of Stop-N-Wait (SW-ARQ) varies for different channel conditions, which is quite different from the performance of ARQ schemes in traditional networking areas.
Analysis of ARQ protocols for bacterial quorum communications
S2210537914000195
While evolving mobile technologies bring millions of users closer to the vision of information anywhere-anytime, device battery depletion still hampers the quality of experience to a great extent. The energy consumption of data transmission is highly dependent on the traffic pattern, and we argue that designing energy efficient data transmissions starts with energy awareness. Our work proposes EnergyBox, a parametrised tool that facilitates accurate and repeatable energy consumption studies for 3G and WiFi transmissions at the user end using real traffic data. The tool takes as input the parameters of a network operator and the power draw of a given mobile device in the 3G and WiFi transmission states. It outputs an estimate of the consumed energy for a given packet trace, either synthetic or captured in a device using real applications. Using nine different applications with different data patterns, the versatility and accuracy of the tool were evaluated. The evaluation was carried out for a modern and popular smartphone in the WiFi setting, a specific mobile broadband module for the 3G setting, and within the operating environment of a major mobile operator in Sweden. A comparison with real power traces indicates that EnergyBox is a valuable tool for repeatable and convenient studies. It exhibits an accuracy of 94–99% for 3G, and 95–99% for WiFi, given the studied applications' traces. Next, the tool was deployed in a use case where a location sharing application was run on top of two alternative application layer protocols (HTTP and MQTT) and with two different data exchange formats (JSON and Base64). The illustrative use case helped to identify the appropriateness of the pull and push strategies for sharing location data, and the benefit of EnergyBox in characterising where the breaking point lies for preferring one protocol over the other, under which network load, or with which data exchange format.
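A much-simplified sketch of the kind of state-machine model EnergyBox applies to a packet trace for 3G; the IDLE/FACH/DCH states follow the common 3G RRC model, while the timer and power values below are placeholders rather than the operator parameters used in the paper:

```python
# Packets promote the radio to the high-power DCH state; inactivity timers demote it
# via FACH back to IDLE. Energy is power multiplied by the time spent in each state.
POWER = {"IDLE": 0.0, "FACH": 0.4, "DCH": 0.8}   # watts (assumed)
T_DCH, T_FACH = 4.0, 6.0                          # inactivity timers in seconds (assumed)

def energy_3g(packet_times):
    energy, last = 0.0, None
    for t in packet_times:
        if last is not None:
            gap = t - last
            dch = min(gap, T_DCH)                    # time spent in DCH after the last packet
            fach = min(max(gap - dch, 0.0), T_FACH)  # then in FACH until its timer expires
            idle = gap - dch - fach                  # remaining time in IDLE
            energy += POWER["DCH"] * dch + POWER["FACH"] * fach + POWER["IDLE"] * idle
        last = t                                     # every packet re-activates DCH
    return energy

print(energy_3g([0.0, 1.0, 2.0, 30.0, 31.0]))   # joules under the assumed parameters
```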
EnergyBox: Disclosing the wireless transmission energy cost for mobile devices
S2210970613000231
This paper presents a timetable rescheduling algorithm based on a Mixed Integer Programming (MIP) formulation for when train traffic is disrupted. We minimize further inconvenience to passengers instead of the consecutive delays caused by the disruption, since the loss of time and satisfaction of the passengers are considered only implicitly and insufficiently in the latter optimization. We presume that the inconvenience of traveling by train consists of the traveling time on board, the waiting time at platforms and the number of transfers. Hence, the objective function is calculated on the positive difference between the inconvenience which each passenger suffers on his/her route in a rescheduled timetable and that in the planned timetable. The inconvenience-minimized rescheduling is often achieved at the cost of further train delays. Some trains dwell longer at a station to wait for extra passengers to come or to keep a connection, for instance. In the MIP model, train operation, each passenger's behavior and the amount of inconvenience are simultaneously expressed by a system of integer linear inequalities. As countermeasures against the disruption, changes of train types and rolling stock operation schedules at termini, as well as changes of the departing order of trains and of the assignment of tracks to trains in stations, are performed. We also consider the capacities of a line between adjacent stations as well as those of a track in stations. We have conducted numerical experiments using actual data and have obtained better rescheduled timetables in terms of customer satisfaction within practical time and in a proper solution space. Nomenclature: sets of train directions, trains, stations, tracks, train types and discrete time periods; planned and rescheduled arrival/departure times; passenger numbers, minimum headways and inconvenience weights for platform waiting and transfers; and 0–1 decision variables encoding train type, track assignment, departure order, passenger route choice and transfers.
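A minimal MIP sketch in the spirit of this formulation (two trains, one headway constraint, passenger-weighted delay as the inconvenience term), written with the PuLP modelling library rather than the authors' code; all data are invented:

```python
import pulp

trains = ["r1", "r2"]
planned_dep = {"r1": 0, "r2": 5}        # planned departure times in minutes (illustrative)
passengers = {"r1": 200, "r2": 120}     # passengers affected per train (illustrative)
headway, disruption_release = 3, 8      # minimum headway; track blocked until t = 8

prob = pulp.LpProblem("rescheduling", pulp.LpMinimize)
dep = {r: pulp.LpVariable(f"dep_{r}", lowBound=planned_dep[r]) for r in trains}

# Objective: positive difference from the planned timetable, weighted by passenger counts
prob += pulp.lpSum(passengers[r] * (dep[r] - planned_dep[r]) for r in trains)

prob += dep["r1"] >= disruption_release      # first train must wait for the disruption to clear
prob += dep["r2"] >= dep["r1"] + headway     # fixed departure order with a minimum headway

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({r: dep[r].value() for r in trains})   # expected: r1 at 8, r2 at 11
```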
A MIP-based timetable rescheduling formulation and algorithm minimizing further inconvenience to passengers
S2210970614000304
The evaluation of the carrying capacity of complex railway nodes is a typical problem to be faced in metropolitan areas. This paper initially analyzes a few methods (the Potthoff methodology, the probabilistic approach and the Deutsche Bahn procedure) for the evaluation of the carrying capacity of complex railway nodes. The aim of the article is to investigate commonalities and differences among these methods in order to identify, also in the continuation of the research, potential margins of improvement, or to formulate a new approach for evaluating the use of stations in a synthetic mode, considering the characteristics and the limits of the existing models. The results of the theoretical analysis have been validated by means of applications to typical case studies.
A synthetic approach to the evaluation of the carrying capacity of complex railway nodes
S2210970614000419
The Øresund and the planned Fehmarnbelt fixed links have recently adopted a set of standards that can significantly raise the operating efficiency and capacity of freight by rail. These standards are explained in the context of the German–Scandinavian railway corridor and in comparison to the European Technical Specifications for Interoperability. Using a quantitative model, the mass and volume load capacity per train are calculated. Compared to present constraining limitations in the German–Scandinavian corridor, the mass load capacity per train can be increased by 64%, and the volume load capacity by up to 220%.
Øresund and Fehmarnbelt high-capacity rail corridor standards updated
S2210970615300093
Timetable management is one of the operational methodologies commonly applied in the highly structured European rail system to improve capacity utilization while maintaining acceptable level of service (LOS) parameters, but its potential benefits to the less structured U.S. system have received little attention. The objective of this study was to investigate the use of timetable management features to analyze the trade-off between LOS parameters and capacity utilization in the U.S. The research applies a hybrid simulation approach, where output from RTC, a simulation tool developed in the U.S., was used as an input for timetable compression by RailSys, a simulation tool developed in Europe. Twenty-eight scenarios were developed in RailSys to identify a preferred scenario with reasonable LOS parameters while maintaining the capacity utilization under the recommended threshold, and the selected RailSys scenario was then validated in RTC. The results of the study revealed that a 10-min maximum allowed dwell time provided the best corridor capacity utilization. Also, the LOS parameters were significantly improved in terms of the total number of stops (55% reduction), total dwell times (80% reduction) and average dwell time (65% reduction), while the timetable duration was increased (capacity utilization was degraded) by 18% compared to the initial schedule.
Hybrid simulation approach for improving railway capacity and train schedules
S2212054813000027
Archaeology and biological anthropology share research interests and numerous methods for field work. Both profit from collaborative work and diffusion of know-how. The last two decades have seen a technical revolution in biological anthropology: Virtual Anthropology (VA). It exploits digital technologies and brings together experts from different domains. Using volume and surface data from scanning processes, VA allows applying advanced shape and form analysis, higher reproducibility, offers permanent availability of virtual objects, and easy data exchange. The six main areas of VA including digitisation, exposing hidden structures, comparing shapes and forms, reconstructing specimens, materialising electronic specimens, and sharing data are introduced in this paper. Many overlaps with archaeological problems are highlighted and potential application areas are emphasised. The article provides a 3D human cranium model and a movie flying around and through the virtual copy of a most famous archaeological object: the Venus from Willendorf, Austria.
Another link between archaeology and anthropology: Virtual anthropology
S2212054813000088
The Baths of Caracalla are the second largest but most complete bathing complex in the city of Rome. They are a representation of the might, wealth, and ingenuity of the Roman Empire. As such, a brief introduction to the site of the Baths of Caracalla and its layout is advantageous. This article chronicles the digital reconstruction process that began as a means to obtain the geometry of one room for the purposes of a thermal analysis. Unlike many reconstructions, this one uses a parametric design program, SolidWorks, as the base because it allows for easy and precise manipulation of the geometry. While this recreation still has rough textures, it provides insights into the geometry: particularly surrounding the glass in the windows. The 3D model allows the viewer to partially experience the atmosphere of the site and illustrates its enormity.
Reconstructing the Baths of Caracalla
S2212054814000022
As demonstrated in several case studies, 3D digital acquisition techniques may greatly help in documenting an archeological site and the related findings. While such information supports the traditional analytical approach to hypothesizing the most probable interpretation of an archeological ruin, mainly focused on excavations and stratigraphic examination, an accurate reality-based representation may also be used as the starting point for creating a scientifically sound virtual reconstruction of the site, embedding historical information of different provenances. The aim of this paper is to describe this whole process step by step, focusing on the iterative feedback that can allow us to reach the best virtual reconstruction solutions, helping the archeologists to better focus their reasoning through a detailed visual representation, and the technological experts to avoid misleading details in the final virtual reconstruction. The methodology has been tested on a group of Cham temples located at MySon, a UNESCO archeological area in central Vietnam.
3D survey and virtual reconstruction of archeological sites
S2212054814000046
La Venta was a large regional center located near the Gulf coast in Tabasco, Mexico. From ca. 800–400 BC it was the major Olmec capital in Mesoamerica. Despite its significance La Venta has received little archeological attention. The clay structures of its ritual precinct, Complex A, excavated in the 1940s–50s, were subsequently destroyed. Unfortunately, the published reports on those excavations are inadequate, with misleading archeological drawings. In order to obtain a more precise and comprehensive understanding of La Venta the original excavation records were consulted, and field drawings and maps were digitized to create more accurate 2d images as well as a 3d model of Complex A. This article summarizes the process of digitizing the archival records and the interpretive benefits from utilizing 3d visualizations of the site. Recounting the process may inform similar projects dependent on archival records when field mapping or excavation are no longer possible.
A 3d model of Complex A, La Venta, Mexico
S2212054814000058
This paper discusses a methodology used in the 3D virtual representation of monuments whose characteristics and function make them unsuitable for an integrated survey and modelling study. We therefore analysed the potential of Architecture, Engineering and Construction (AEC) tools, which combine geometry with alphanumeric metadata. Parametric and associative geometry allows the model to be updated automatically, so that whenever a new survey is made or new findings emerge only the relevant components need to be reworked, rendering a complete reformulation unnecessary in future updates. This makes the modelling process applicable to any inaccessible monument. Moreover, it is also possible to explore the potential of these strategies for heritage management. We chose an extensive monument, the Águas Livres Aqueduct, a Portuguese national monument in and near Lisbon, of which we present the first outputs in this paper.
Automation in heritage – Parametric and associative design strategies to model inaccessible monuments: The case-study of eighteenth-century Lisbon Águas Livres Aqueduct
S2212054814000101
Cambodian temples are severely damaged and their reconstruction is complex. Conservators are challenged by a multitude of stones of unknown original position. Specialists resolve this large-scale puzzle by analyzing each stone, using their experience and knowledge of Khmer culture. Then a trial-and-error approach is applied, which has disadvantages. The weight of the stones, up to 1000 kg, complicates their movement and poses a safety hazard to workers. Additionally, the stones' relocation should be reduced to a minimum, as it promotes their deterioration. This motivated the development of a virtual approach, as computer algorithms lead to a potential solution in less time, thereby drastically reducing the amount of work. The basis for this virtual puzzle is a set of high-resolution 3D models of 135 stones. These stones have an approximately cuboidal form and often feature indentations that are exploited by the algorithm to accelerate the matching process. The general idea is to (1) simplify the high-resolution models, (2) test all feasible combinations and (3) match the best combinations and validate the results. Close collaboration with specialists on site ensures overall algorithmic correctness.
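A hedged sketch of steps (2)–(3): exhaustively scoring pairwise combinations of simplified contact faces represented as small height maps; the complementarity score and the synthetic data are assumptions for illustration, not the authors' matching criterion:

```python
import numpy as np

def match_score(face_a, face_b):
    """Crude complementarity score: an indentation on one face should be filled by the
    other, so a good match leaves a nearly constant residual (low variance)."""
    return -np.var(face_a + face_b)

def best_matches(faces, top_k=3):
    """Score all stone-face pairs exhaustively and return the best candidates."""
    scores = []
    for i in range(len(faces)):
        for j in range(i + 1, len(faces)):
            scores.append((match_score(faces[i], faces[j]), i, j))
    return sorted(scores, reverse=True)[:top_k]

rng = np.random.default_rng(0)
faces = [rng.normal(size=(16, 16)) for _ in range(5)]
faces.append(-faces[0])                    # add one perfectly complementary counterpart
print(best_matches(faces))                 # the (0, 5) pair should rank first
```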
Virtually reassembling Angkor-style Khmer temples
S2212054815000028
Advances in rock art recording techniques through non-destructive methods are a priority in rock art research. Developments on computer science, digital photography and more specifically on digital image enhancement have revolutionized the way rock art motifs are recorded and documented today. Conventional software for digital image processing is widely used to produce digital tracings, with reasonable results so far. But this manual process is still time consuming, especially when motifs are either faded, deteriorated or part of complex superimpositions. This paper explores the potential of two advanced digital image enhancement decorrelation techniques, principal components analysis (PCA) and decorrelation stretch (DS) to facilitate and accelerate motif recognition on visible digital images, as a first step in the rock art recording process. Statistical analysis revealed that the performance of PCA is slightly better (5%) than DS, when compared to the results of conventional image enhancement solutions.
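A compact sketch of the PCA-based decorrelation idea applied to the colour bands of an RGB photograph; this is a simplified stand-in, not the PCA or DS implementations evaluated in the paper:

```python
import numpy as np

def pca_decorrelation_stretch(image):
    """Decorrelate the RGB bands, equalise their variance, and rotate back (H x W x 3)."""
    h, w, c = image.shape
    pixels = image.reshape(-1, c).astype(float)
    pixels -= pixels.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(pixels, rowvar=False))
    scores = pixels @ eigvecs
    scores /= np.sqrt(eigvals + 1e-9)            # stretch: equal variance per component
    stretched = scores @ eigvecs.T               # back to (decorrelated) colour space
    stretched = (stretched - stretched.min()) / (np.ptp(stretched) + 1e-9)
    return (255.0 * stretched).reshape(h, w, c).astype(np.uint8)

rng = np.random.default_rng(1)
fake_photo = rng.integers(0, 255, size=(64, 64, 3), dtype=np.uint8)
print(pca_decorrelation_stretch(fake_photo).shape)
```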
Evaluating conventional and advanced visible image enhancement solutions to produce digital tracings at el Carche rock art shelter
S221205481500003X
Contemporary rock art researchers have a wide choice of 3D laser scanners available to them for recording stone surfaces, and this is complemented by numerous software packages that are able to process point cloud data. Though ESRI's ArcGIS primarily handles geographical data, it also offers the ability to visualise XYZ data from a stone surface. In this article the potential of ArcGIS for rock art research is explored by focusing on 3D data obtained from two panels of cup and ring marks found at Loups's Hill, County Durham, England. A selection of methods commonly utilised in LiDAR studies, which enhance the identification of landscape features, are also applied to the rock panels, including DSM normalisation and raster-based Principal Component Analysis (PCA). Collectively, the visualisations produced from these techniques facilitate the identification of the rock art motifs, but there are limitations to these enhancements that are also discussed.
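A small sketch of the DSM normalisation step borrowed from LiDAR practice: subtracting a smoothed trend surface (a local relief model) so that shallow cup marks stand out from the overall shape of the rock; the surface and kernel size are synthetic:

```python
import numpy as np
from scipy import ndimage

def normalise_dsm(dsm, kernel_size=15):
    """Local relief model: original surface minus a low-pass (mean-filtered) copy."""
    trend = ndimage.uniform_filter(dsm, size=kernel_size)
    return dsm - trend

# Synthetic panel: a gently sloping rock surface with one small cup mark at its centre
y, x = np.mgrid[0:100, 0:100]
surface = 0.05 * x                                                          # overall slope
surface = surface - 0.5 * np.exp(-((x - 50) ** 2 + (y - 50) ** 2) / 20.0)   # the cup mark

relief = normalise_dsm(surface)
print(round(relief[50, 50], 3), round(relief[0, 0], 3))   # the cup is now clearly negative
```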
Image processing and visualisation of rock art laser scans from Loups’s Hill, County Durham
S2212054815000041
DStretch®, a plug-in for ImageJ©, is designed specifically for the enhancement of digital images of pictographs. It has been widely used around the world for a decade but has thus far rarely been used in French and African rock art studies. Among all the software tools currently available to rock art specialists, it is one of the most efficient for deciphering faint paintings and sometimes engravings, while being cheap, fast and easy to use, particularly in harsh or remote environments. Moreover, the enhancement of digital images with DStretch® is almost operator-independent and reproducible thanks to pre-recorded settings, thus improving objectivity. We provide several examples of the benefit of using this tool on various sites, from African prehistoric pictographs to Alpine paintings and petroglyphs, and propose a method for recording rock art panels with DStretch®. In spite of these advantages, we have to keep in mind that the final tracings are always subjective and biased by personal perceptions, as demonstrated here by a simple but significant experiment.
Digital image enhancement with DStretch ®: Is complexity always necessary for efficiency?
S2212054815000065
We must distinguish between two profoundly different ways of knowing a thing. The first implies that we go around that thing; the second, that we enter into it. The first depends on the observer's point of view and on the symbols that are used to express that point of view. The second does not adopt any point of view, nor does it rely on any symbol. One could say that the first knowledge stops at the relative, whereas the second, where possible, achieves the absolute. Consequently, the act of interpreting a prehistoric carving/painting on a standing stone or on a boulder demands the generous use of language: on the one hand, the language of science, which is dominated by the symbol of equality, and where each term can be replaced by others; and on the other hand, by the lyrical language, where each term is irreplaceable and can only be repeated. But the language of science cannot be anchored within an archaeological reality that is distorted by a poorly-controlled process of information acquisition. We must adopt an approach, both in the field and in the laboratory, which allows one to reproduce an experience and which takes account of our choices and our initial interpretations in the graphic representation of the painted or engraved signs, through the implemented sensors. This contribution will showcase the use of an approach that integrates several digital methods and allows us to progress the archaeology of images. It both shares and accumulates our information base and our knowledge, proceeding as it does on a basis that is epistemologically renewed.
Intuition and analysis in the recording, interpretation and public translation of Neolithic engraved signs in western France
S2212054815000089
Photogrammetry is an indirect technique that allows one to obtain different recording products – orthophotographs, planimetries, 3D models, etc. – that are essential for the study of prehistoric rock art. We believe nonetheless that there is no single technique capable of effectively registering an entire rock art site, so it is highly recommended to use a combination of several systems – that is to say, the development of a specific recording methodology – in order to obtain a documentation which is as thorough as possible. In this regard, different possibilities of combination of photogrammetry with other photographic techniques have been analysed, with the aim of obtaining an accurate recording of the art and its support, seeking also to incorporate into this recording other essential data for the study of its state of preservation. The use of photogrammetric techniques will be described, along with the tests carried out with photographic techniques such as polarised light photography or those that register images at both ends of the visible spectrum, both in the ultraviolet (UV) and in the infrared region (IR). These techniques enable the revelation of invisible details to clarify issues concerning technology and to explore scarcely noticeable forms of alteration. In some cases, these experiences have been complemented by the use of laser scanning in order to compare the effectiveness of both techniques. With all the experience acquired, it is possible to propose a rather precise recording methodology that requires no specialised technical training and no complex equipment.
Combining photogrammetry and photographic enhancement techniques for the recording of megalithic art in north-west Iberia
S2212054815000107
Portable three-dimensional digitizers facilitate the “digitize” and “compare” operational areas of Virtual Anthropology. However, these measurement devices must be adequately accurate and precise for their intended purpose. We determined that varying the arm configurations led to standard deviations less than 0.23 mm. The distances between pairs of points among 10 points on a reference object at 16 locations within the workspace had an average error of 0.239 mm; only three of the 720 measures had an error greater than 1 mm. Through assessing geometric morphometry, the average landmark standard deviation of the 10 points at 16 locations across the workspace was only 0.12 mm. The different areas of the workspace and different arm joint configurations do not meaningfully influence the measurement precision and accuracy. Therefore, the MicroScribe G2X has suitable accuracy and precision to be an appropriate research tool for many anthropological, forensic and biomechanical studies.
Quantifying the precision and accuracy of the MicroScribe G2X three-dimensional digitizer
S2212054815000132
This paper traces the development of techniques of recording carvings on megalithic tombs and on open-air rock-art in Ireland from 1699 to the present day. Analysis shows that after the initial pioneering phase, recording methodologies tended to develop in accelerated bursts, interspersed with lulls in activity. In all, four phases of activity can be identified; in each there were a critical number of researchers who interacted with each other, driving forward advances in various forms of recording methods. Part 2 of the paper describes the application of new methods of digital recording, notably Structure from Motion photogrammetry. It shows how the resulting data have been used to create new ways of experiencing Irish prehistoric art in virtual environments, either as entire monuments in the landscape or within a “virtual museum”, using the open-source Blender 3D animation and game engine software. Part 1—Elizabeth Shee Twohig
From sketchbook to structure from motion: Recording prehistoric carvings in Ireland
S2212054815000144
3D technologies have become essential in our research on Middle Magdalenian rock carvings (18,500–17,000 cal. BP), complementing the other traditional analytical tools. They play a noticeable role in our stylistic studies: superimposing volumes, and not only shapes, makes form comparisons all the more accurate in that margins of difference can be calculated. On the one hand, clarifying the degree of similarity between two carvings brings more data to the problem of the author(s) of the carvings, and thus raises notions rarely tackled in prehistoric archaeology: the individual and the short time. These form comparisons also prove to be very useful for other archaeological problems. Used for shape identification, they help towards a better interpretation of the fragmentary representations and, beyond that, towards a more precise modelling of the chronological evolution of the parietal assemblages.
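A hedged sketch of how such superimpositions can be quantified once two carving surfaces are aligned: nearest-neighbour distances between their point clouds give the margin of difference; the data are synthetic and the procedure is not the authors' pipeline:

```python
import numpy as np
from scipy.spatial import cKDTree

def surface_deviation(points_a, points_b):
    """Distance from every point of carving A to its nearest neighbour on carving B."""
    distances, _ = cKDTree(points_b).query(points_a)
    return distances

rng = np.random.default_rng(0)
carving_a = rng.uniform(size=(5000, 3))
carving_b = carving_a + rng.normal(scale=0.002, size=carving_a.shape)  # near-identical copy

d = surface_deviation(carving_a, carving_b)
print(f"mean deviation {d.mean():.4f}, 95th percentile {np.percentile(d, 95):.4f}")
```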
Contribution of 3D technologies to the analysis of form in late palaeolithic rock carvings: The case of the Roc-aux-Sorciers rock-shelter (Angles-sur-l’Anglin, France)
S2212054815000168
How can the utilization of newly developed advanced portable technologies give us a greater understanding of the most complex prehistoric rock art? This is the question driving The Gordian Knot project analysing the polychrome Californian site known as Pleito. New small transportable devices allow detailed on-site analyses of rock art. These non-destructive portable technologies can use X-ray and Raman technology to determine the chemical elements used to make the pigment that makes up the painting; they can use imaging techniques such as Highlight Reflectance Transformation Imaging and dStretch© to enhance visibility; they can use digital imagery to disentangle complex superimposed paintings; and they can use portable laser instruments to analyse the micro-topography of the rock surface and integrate these technologies into a 3-D environment. This paper outlines a robust methodology and preliminary results to show how an integration of different portable technologies can serve rock art research and management.
Methodological considerations of integrating portable digital technologies in the analysis and management of complex superimposed Californian pictographs: From spectroscopy and spectral imaging to 3-D scanning
S221205481500017X
3D modeling in rock art studies involves different techniques according to the size and morphology of the subject. It has mainly been used for reconstructing the volume of caves and the morphology of walls, and as a substitute for graphic and photographic recording of the prehistoric pictures. Little work has been done at macroscopic and microscopic scale, partly because lasergrammetry, the most common technique, is poorly suited below the centimetric scale, and partly because, for heritage purposes, recording at high resolution was of little interest. Thanks to the increasing performance of personal computers, new modeling techniques are becoming available, based on photographic recording and no longer dependent on costly and cumbersome equipment. We have tested, in open-air and underground sites in France, Portugal and Russia, the potential of photogrammetry and focus stacking for 3D recording of millimetric and submillimetric details of prehistoric petroglyphs and paintings, along with original simple optical solutions.
From 2D to 3D at macro- and microscopic scale in rock art studies
S2212054815300011
RTI is a powerful technique for recording, interpreting, and disseminating rock art. RTI enhances the perception of the micro-topography of the rock surface and it is particularly helpful for the study of engraved art. Subtle details, such as the traces left by different engraving techniques, the outlines of motifs or superimpositions are more clearly revealed through RTI's interactive re-light and enhancement tools. This paper describes the application of RTI for the re-examination of two Iberian south-western stelae, Setefilla and Almadén de la Plata 2, whose preserved decoration is engraved. Previous studies focused on the iconographic analysis of motifs and employed methods of examination and recording that posed limitations. Based on the more robust data provided by RTI and supported by RTI's tools for surface interpretation, we provide a new analysis of the decorated surfaces of both stelae, including insights into their manufacturing techniques and later modification.
RTI and the study of engraved rock art: A re-examination of the Iberian south-western stelae of Setefilla and Almadén de la Plata 2 (Seville, Spain)
S2212054815300035
This paper explores the use of digital 3D models of museum artefacts in a creative context. It investigates how creative engagement with digital 3D models of heritage artefacts can stimulate learning and foster new forms of engagement with digital heritage artefacts. This paper is illustrated with examples of creative works from a case study undertaken in collaboration with the National Museum Cardiff.
Digital 3D models of heritage artefacts: Towards a digital dream space
S2212054815300047
The virtual visualization of historical objects opens the door to a variety of new teaching applications in the classroom. In this study, we present a flexible platform for the creation of semi-immersive 3D environments. First, we describe the software and hardware tools that generate the 3D models and the Virtual Reality Environments. We then present an optimized design methodology, adapted to the generation of light 3D models of sufficient visual quality for teaching purposes. The most suitable option for such purposes proved to be CAD tools coupled with extensive use of image textures on low-resolution 3D meshes. Finally, we report an ad-hoc teaching method to test the platform during a short teaching session on Cultural Heritage and Computer Graphics for high-school and undergraduate students. The evaluation of their experiences, based on post-session surveys, points to the effectiveness of this approach to communicate different types of knowledge and to stimulate student learning.
A flexible platform for the creation of 3D semi-immersive environments to teach Cultural Heritage
S2212054815300059
Acquisition and processing of point clouds, allowing the high dimensional accuracy that is an essential prerequisite for good documentation, are widely used today for cultural heritage surveys. In recent years, manual and direct surveys in archaeological survey campaigns have been replaced by digital image processing and laser-scanning. Multi-image photogrammetry has proven to be valuable for underwater archaeology. A topographical survey is always necessary to guarantee dimensional accuracy, and is necessary for geo-referencing all the finds in the same reference system. The need for low costs and rapid solutions, combined with the necessity of producing three-dimensional surveys with the same accuracy as classical terrestrial surveying, led the researchers to test and apply image-based techniques. Ca' Foscari and IUAV University of Venice are conducting research into integrated techniques for the accurate documentation of underwater surveys. Survey design, image acquisition, topographical measurements and data processing of two Roman shipwrecks in southern Sicily are presented in this paper. Photogrammetric and topographical surveys were organized using two distinct methods, due to the different characteristics of the cargoes of huge marble blocks, their depth and their distribution on the seabed. The results of the survey are two 3D polygonal-textured models of the sites, which can be easily used for various analyses and trial reconstructions, opening new possibilities for producing documentation for both specialists and the wider public. Furthermore, 3D models are the geometrical basis for making 2D orthophotos and cross-sections. The paper illustrates all the phases of the survey's design, acquisition and preparation, and the data processing to obtain final 2D and 3D representations.
3D reconstruction of marble shipwreck cargoes based on underwater multi-image photogrammetry
S2213076414000955
Family member presence may contribute to the healing of hospitalized patients, but may also be in conflict with the perceived needs of delivering intensive care. We detail our experience with “opening the doors” of the intensive care unit (ICU), allowing family members to be present and participate in the care of loved ones without restriction. “Opening the doors” challenged the traditions, legacy and sense of professional entitlement that were a part of ICU culture and generated considerable initial resistance among nurses and physicians. We describe our “opening the doors” transformation to more patient- and family-centered care in four steps: (1) enlist support of administrative and local leaders; (2) create a collective aim; (3) test on a small scale, and (4) scale up after initial successes. Preparing ICU staff so that they are comfortable with more “on stage” time (i.e., greater family presence) was critical to our success. “Opening the doors” now serves as a guiding vision to organizing the ICU’s work.
Opening the ICU doors
S2213076415000196
In pediatric medicine, inadequate access to subspecialty care is widespread. Referral Guidelines are structured tools that describe criteria for subspecialty referral and may decrease medically unnecessary referrals and thereby improve access. Variation in referral rates and suboptimal communication around pediatric subspecialty referrals leads to inappropriate and ineffective use of scarce clinical resources. Connecticut Children's Medical Center prioritized the development of collaborative care tools at the interface between primary and subspecialty care, including Referral Guidelines. A comprehensive set of Referral Guidelines was developed and consisted of background information on a given condition, strategies for initial evaluation and management, instructions for how and when to refer, and what the patient and family could expect at the visit with the subspecialist. A key component of the initiative was the integral role of the PCP during development. Twenty-eight Referral Guidelines have been developed among 15 subspecialty areas. A novel process for active dissemination of Referral Guidelines was piloted in one medical subspecialty area and led to a reduction in overall referrals and an increase in the proportion of referrals meeting the necessary criteria.
Implementation of referral guidelines at the interface between pediatric primary and subspecialty care
S2213133713000061
Nowadays, analyzing and reducing the ever larger astronomical datasets is becoming a crucial challenge, especially for long cumulated observation times. The INTEGRAL/SPI X/γ-ray spectrometer is an instrument for which it is essential to process many exposures at the same time in order to increase the low signal-to-noise ratio of the weakest sources. In this context, the conventional methods for data reduction are inefficient and sometimes not feasible at all. Processing several years of data simultaneously requires computing not only the solution of a large system of equations, but also the associated uncertainties. We aim at reducing the computation time and the memory usage. Since the SPI transfer function is sparse, we have used some popular methods for the solution of large sparse linear systems; we briefly review these methods. We use the Multifrontal Massively Parallel Solver (MUMPS) to compute the solution of the system of equations. We also need to compute the variance of the solution, which amounts to computing selected entries of the inverse of the sparse matrix corresponding to our linear system. This can be achieved through one of the latest features of the MUMPS software that has been partly motivated by this work. In this paper we provide a brief presentation of this feature and evaluate its effectiveness on astrophysical problems requiring the processing of large datasets simultaneously, such as the study of the entire emission of the Galaxy. We used these algorithms to solve the large sparse systems arising from SPI data processing and to obtain both their solutions and the associated variances. In conclusion, thanks to these newly developed tools, processing large datasets arising from SPI is now feasible with both a reasonable execution time and a low memory usage. The dense transfer function $H_0$ is expanded into the sparse system matrix $H$:
$$
H_0 = \begin{pmatrix}
h_{11} & h_{12} & h_{13} & \cdots & h_{1N} \\
h_{21} & h_{22} & h_{23} & \ddots & h_{2N} \\
h_{31} & h_{32} & h_{33} & \ddots & h_{3N} \\
\vdots & \ddots & \ddots & \ddots & \vdots \\
h_{M1} & h_{M2} & h_{M3} & \cdots & h_{MN}
\end{pmatrix}
\;\longmapsto\;
H = \begin{pmatrix}
h_{11} & 0 & 0 & 0 & h_{12} & 0 & 0 & h_{13} & 0 & h_{1N} & \cdots & 0 & 0 \\
0 & h_{21} & 0 & 0 & h_{22} & 0 & 0 & 0 & h_{23} & h_{2N} & \cdots & 0 & 0 \\
0 & 0 & h_{31} & 0 & 0 & h_{32} & 0 & 0 & h_{33} & 0 & \cdots & h_{3N} & 0 \\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & 0 & h_{M1} & 0 & 0 & h_{M2} & 0 & h_{M3} & 0 & \cdots & 0 & h_{MN}
\end{pmatrix}
$$
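The two computations the abstract describes, solving the sparse system and extracting selected entries of its inverse as variances, can be illustrated at toy scale with SciPy; the paper itself relies on MUMPS and its selected-inverse feature for problems that are orders of magnitude larger:

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import splu

rng = np.random.default_rng(0)
H = sparse.random(200, 50, density=0.05, random_state=0, format="csc")  # toy transfer function
y = rng.normal(size=200)                                                # toy data vector

A = (H.T @ H + 1e-6 * sparse.identity(50)).tocsc()   # normal equations, lightly regularised
b = H.T @ y

lu = splu(A)                     # sparse factorisation (the role MUMPS plays in the paper)
x = lu.solve(b)                  # solution of the linear system

# "Selected inverse" entries: here just the diagonal, obtained column by column;
# MUMPS computes such entries far more efficiently without forming the full inverse.
identity = np.eye(50)
variances = np.array([lu.solve(identity[:, j])[j] for j in range(50)])
print(x[:3], variances[:3])
```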
Simultaneous analysis of large INTEGRAL/SPI datasets: Optimizing the computation of the solution and its variance using sparse matrix algorithms
S2213133713000085
High performance computing has been used in various fields of astrophysical research. But most of it is implemented on massively parallel systems (supercomputers) or graphical processing unit clusters. With the advent of multicore processors in the last decade, many serial software codes have been re-implemented in parallel mode to utilize the full potential of these processors. In this paper, we propose parallel processing recipes for multicore machines for astronomical data processing. The target audience is astronomers who use Python as their preferred scripting language and who may be using PyRAF/IRAF for data processing. Three problems of varied complexity were benchmarked on three different types of multicore processors to demonstrate the benefits, in terms of execution time, of parallelizing data processing tasks. The native multiprocessing module available in Python makes it a relatively trivial task to implement the parallel code. We have also compared the three multiprocessing approaches—Pool/Map, Process/Queue and Parallel Python. Our test codes are freely available and can be downloaded from our website.
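A minimal Pool/Map recipe of the kind the paper benchmarks; the per-frame task below is a stand-in for a real reduction step:

```python
import multiprocessing as mp
import numpy as np

def reduce_exposure(seed):
    """Stand-in for a CPU-bound per-frame task (e.g. background subtraction plus statistics)."""
    rng = np.random.default_rng(seed)
    frame = rng.normal(loc=100.0, scale=5.0, size=(512, 512))
    return float(np.median(frame - frame.mean()))

if __name__ == "__main__":
    seeds = list(range(32))                       # 32 independent "exposures"
    with mp.Pool(processes=4) as pool:            # one worker per core, here 4
        results = pool.map(reduce_exposure, seeds)
    print(len(results), min(results), max(results))
```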
Parallel astronomical data processing with Python: Recipes for multicore machines
S2213133713000206
In the present work, we discuss and assess the performance of the Earth cylindrical shadow model (ECSM) and the Earth shadow conical model (ESCM), with application to the Indian Remote Sensing (IRS) low Earth orbiting (LEO) satellites Cartosat-2A, Meghatropics-1, Resourcesat-2 and Oceansat-2. Both models are very simple and efficient for the prediction of eclipse states of any Earth orbiting eclipsing satellite at a given epoch. The advantage of using ESCM over ECSM is that the former predicts both the penumbra and umbra eclipse states, while the latter predicts only a single state, which, in reality, is not true. The ESCM model can be effectively useful for precise orbit prediction and for satellite operations to utilize the power properly. Nomenclature: Earth's equatorial radius; Sun vector and unit vector; spacecraft (S/C) position vector and its projection onto the Sun direction; the vector from the shadow axis to the S/C at the projected location and its magnitude; coordinates of a point on the spherical Earth in the ECI frame; vectors joining the Sun's edges to the S/C and to the geo-center; the photospheric radius; Sun–S/C and Sun–intersection distances; and the difference operator.
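A hedged sketch of the two geometric tests, using the textbook cylindrical-shadow condition and the standard umbra/penumbra cone half-angle formulas rather than the exact ECSM/ESCM equations of the paper:

```python
import numpy as np

R_EARTH = 6378.137      # Earth's equatorial radius, km
R_SUN = 696000.0        # photospheric radius of the Sun, km

def in_cylindrical_shadow(r_sat, r_sun):
    """ECSM-style test: eclipsed if the satellite lies on the anti-Sun side of the Earth
    and within a cylinder of Earth radius around the shadow axis."""
    s_hat = r_sun / np.linalg.norm(r_sun)
    proj = np.dot(r_sat, s_hat)
    perp = np.linalg.norm(r_sat - proj * s_hat)
    return proj < 0.0 and perp < R_EARTH

def cone_half_angles(sun_distance):
    """ESCM-style umbra and penumbra cone half-angles of the Earth's shadow (radians)."""
    return (np.arcsin((R_SUN - R_EARTH) / sun_distance),
            np.arcsin((R_SUN + R_EARTH) / sun_distance))

r_sun = np.array([1.496e8, 0.0, 0.0])        # Sun along +x, km
r_sat = np.array([-7000.0, 1000.0, 0.0])     # LEO satellite behind the Earth, km
print(in_cylindrical_shadow(r_sat, r_sun))
print(np.degrees(cone_half_angles(float(np.linalg.norm(r_sun)))))
```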
Eclipse prediction methods for LEO satellites with cylindrical and cone geometries: A comparative study of ECSM and ESCM to IRS satellites
S2213133713000218
Filtergraph is a web application being developed and maintained by the Vanderbilt Initiative in Data-intensive Astrophysics (VIDA) to flexibly and rapidly visualize a large variety of astronomy datasets of various formats and sizes. The user loads a flat-file dataset into Filtergraph which automatically generates an interactive data portal that can be easily shared with others. From this portal, the user can immediately generate scatter plots of up to five dimensions as well as histograms and tables based on the dataset. Key features of the portal include intuitive controls with auto-completed variable names, the ability to filter the data in real time through user-specified criteria, the ability to select data by dragging on the screen, and the ability to perform arithmetic operations on the data in real time. To enable seamless data visualization and exploration, changes are quickly rendered on screen and visualizations can be exported as high quality graphics files. The application is optimized for speed in the context of large datasets: for instance, a plot generated from a stellar database of 3.1 million entries renders in less than 2 s on a standard web server platform. This web application has been created using the Web2py web framework based on the Python programming language. Filtergraph is free to use at http://filtergraph.vanderbilt.edu/.
Filtergraph: An interactive web application for visualization of astronomy datasets
S221313371300022X
We study the benefits and limits of parallelised Markov chain Monte Carlo (MCMC) sampling in cosmology. MCMC methods are widely used for the estimation of cosmological parameters from a given set of observations and are typically based on the Metropolis–Hastings algorithm. Some of the required calculations can however be computationally intensive, meaning that a single long chain can take several hours or days to calculate. In practice, this can be limiting, since the MCMC process needs to be performed many times to test the impact of possible systematics and to understand the robustness of the measurements being made. To achieve greater speed through parallelisation, MCMC algorithms need to have short autocorrelation times and minimal overheads caused by tuning and burn-in. The resulting scalability is hence influenced by two factors, the MCMC overheads and the parallelisation costs. In order to efficiently distribute the MCMC sampling over thousands of cores on modern cloud computing infrastructure, we developed a Python framework called CosmoHammer which embeds emcee, an implementation by Foreman-Mackey et al. (2012) of the affine invariant ensemble sampler by Goodman and Weare (2010). We test the performance of CosmoHammer for cosmological parameter estimation from cosmic microwave background data. While Metropolis–Hastings is dominated by overheads, CosmoHammer is able to accelerate the sampling process from a wall time of 30 h on a dual core notebook to 16 min by scaling out to 2048 cores. Such short wall times for complex datasets open possibilities for extensive model testing and control of systematics.
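A minimal illustration of the affine-invariant ensemble sampling that CosmoHammer embeds, shown here with emcee directly on a toy Gaussian posterior rather than CMB data; CosmoHammer's contribution is the layer that distributes exactly this kind of sampling over many cores:

```python
import numpy as np
import emcee

def log_prob(theta):
    """Toy posterior: an independent standard normal in every dimension."""
    return -0.5 * np.sum(theta ** 2)

ndim, nwalkers = 4, 32
p0 = np.random.default_rng(0).normal(size=(nwalkers, ndim))   # initial walker positions

sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 2000)

samples = sampler.get_chain(discard=500, flat=True)   # drop burn-in, flatten the walkers
print(samples.shape, samples.mean(axis=0))
```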
CosmoHammer: Cosmological parameter estimation with the MCMC Hammer
S2213133714000419
Despite the long tradition of publishing digital datasets in Astronomy, and the existence of a rich network of services providing astronomical datasets in standardized interoperable formats through the Virtual Observatory (VO), there has been little use of scientific workflow technologies in this field. In this paper we present AstroTaverna, a plugin that we have developed for the Taverna Workbench scientific workflow management system. It integrates existing VO web services as first-class building blocks in Taverna workflows, allowing the digital capture of otherwise lost procedural steps manually performed in e.g. GUI tools, providing reproducibility and re-use. It improves the readability of digital VO recipes with a comprehensive view of the entire automated execution process, complementing the scarce narratives produced in the classic documentation practices, transforming them into living tutorials for an efficient use of the VO infrastructure. The plugin also adds astronomical data manipulation and transformation tools based on the STIL Tool Set and the integration of Aladin VO software, as well as interactive connectivity with SAMP-compliant astronomy tools.
AstroTaverna—Building workflows with Virtual Observatory services
S2213133714000481
In this article, we propose an Earth conical shadow model predicting umbra and penumbra states for low Earth orbiting satellites, considering the spherical shape of the Earth. The model is described using the umbra and penumbra cone geometries of the Earth's shadow and the geometrical equations of these conical shadow regions in a Sun-centered frame. The proposed model is simulated for three polar Sun-synchronous Indian Remote Sensing satellites: Cartosat-2A, Resourcesat-2 and Oceansat-2. The proposed model compares well with the existing spherical Earth conical shadow models such as those given by Vallado (2013), Wertz (2002), Hubaux et al. (2012), and Srivastava et al. (2013, 2014). An assessment of the existing Earth conical shadow models is carried out against Systems Tool Kit (STK), a high fidelity commercial software package from Analytical Graphics, Inc., and against real-time telemetry data. Nomenclature: satellite position vector in the ECI frame; Earth's equatorial radius; radius of the Sun's photosphere; Sun–Earth distance; the Earth's umbra and penumbra cone angles and cone radii in the ECI and SCRF frames; the Sun vector and its magnitude; unit vectors and the transformation matrix from the ECI frame to the rotated (SCRF) frame; and the components of the satellite position vector and the orbit radius in the SCRF frame.
Earth conical shadow modeling for LEO satellite using reference frame transformation technique: A comparative study with existing earth conical shadow models
S221313371400050X
The increasing number of astronomical surveys in mid- and far-infrared, as well as in submillimetre and radio wavelengths, brings more difficulties to the already challenging task of detecting sources in an automatic way. These specific images are characterized by a more complex background than in shorter wavelengths, with a higher level of noise, more noticeable flux variations and both unresolved and extended sources with a higher dynamic range. In order to improve the source detection efficiency in long wavelength images, in this paper we present a new approach based on the combined use of a multiscale decomposition and a recently developed method called Distilled Sensing. Its application minimizes the impact of the contaminants from the background, unveiling and highlighting the sources at the same time. The experimental results achieved using infrared and radio images illustrate the good performance of the approach, identifying greater percentages of true sources than using both the widely used SExtractor algorithm and the Distilled Sensing method alone.
Multiscale Distilled Sensing: Astronomical source detection in long wavelength images
S2213133714000687
The Python programming language is becoming increasingly popular for scientific applications due to its simplicity, versatility, and the broad range of its libraries. A drawback of this dynamic language, however, is its low runtime performance which limits its applicability for large simulations and for the analysis of large data sets, as is common in astrophysics and cosmology. While various frameworks have been developed to address this limitation, most focus on covering the complete language set, and either force the user to alter the code or are not able to reach the full speed of an optimised native compiled language. In order to combine the ease of Python and the speed of C++, we developed HOPE, a specialised Python just-in-time (JIT) compiler designed for numerical astrophysical applications. HOPE focuses on a subset of the language and is able to translate Python code into C++ while performing numerical optimisation on mathematical expressions at runtime. To enable the JIT compilation, the user only needs to add a decorator to the function definition. We assess the performance of HOPE by performing a series of benchmarks and compare its execution speed with that of plain Python, C++ and the other existing frameworks. We find that HOPE improves the performance compared to plain Python by a factor of 2 to 120, achieves speeds comparable to that of C++, and often exceeds the speed of the existing solutions. We discuss the differences between HOPE and the other frameworks, as well as future extensions of its capabilities. The fully documented HOPE package is available at http://hope.phys.ethz.ch and is published under the GPLv3 license on PyPI and GitHub.
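A sketch of the decorator-based usage the abstract describes; the decorator name hope.jit follows HOPE's documented pattern, the kernel is invented, and if HOPE's restricted language subset rejects a construct, numba's @numba.jit accepts the same shape of code:

```python
import numpy as np
import hope   # the HOPE package; numba would be an analogous drop-in for this pattern

@hope.jit
def pairwise_sum(x, out):
    # simple O(n^2) kernel: inverse-square contributions accumulated per element
    n = x.shape[0]
    for i in range(n):
        s = 0.0
        for j in range(n):
            if i != j:
                d = x[i] - x[j]
                s += 1.0 / (d * d)
        out[i] = s

x = np.linspace(1.0, 2.0, 200)
out = np.zeros_like(x)
pairwise_sum(x, out)       # first call triggers translation to C++ and compilation
print(out[:3])
```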
HOPE: A Python just-in-time compiler for astrophysical computations
S221313371500092X
We present the project Asteroids@home that uses distributed computing to solve the time-consuming inverse problem of shape reconstruction of asteroids. The project uses the Berkeley Open Infrastructure for Network Computing (BOINC) framework to distribute, collect, and validate small computational units that are solved independently at individual computers of volunteers connected to the project. Shapes, rotational periods, and orientations of the spin axes of asteroids are reconstructed from their disk-integrated photometry by the lightcurve inversion method.
Asteroids@home—A BOINC distributed computing project for asteroid shape reconstruction
S2213597915300082
A handheld approach to optoacoustic imaging is essential for clinical translation. The first 2- and 3-dimensional handheld multispectral optoacoustic tomography (MSOT) probes featuring real-time unmixing have recently been developed. The imaging performance of both probes was determined in vitro and in a brain melanoma metastasis mouse model in vivo. T1-weighted MR images were acquired for anatomical reference. The limit of detection of melanoma cells in vitro was significantly lower using the 2D probe than the 3D probe. The signal decrease with depth was more pronounced with the 3D probe than with the 2D probe. Both approaches were capable of imaging the melanoma tumors qualitatively at all time points. Quantitatively, the 2D approach provided closer anatomical resemblance to the tumor than the 3D probe, particularly at depths beyond 3mm. The 3D probe was shown to be superior for rapid 3D imaging and, thus, holds promise for more superficial target structures.
Performance of a Multispectral Optoacoustic Tomography (MSOT) System equipped with 2D vs. 3D Handheld Probes for Potential Clinical Translation
S2213597916300015
High resolution ultrasound and photoacoustic images of stained neutrophils, lymphocytes and monocytes from a blood smear were acquired using a combined acoustic/photoacoustic microscope. Photoacoustic images were created using a pulsed 532nm laser that was coupled to a single mode fiber to produce output wavelengths from 532nm to 620nm via stimulated Raman scattering. The excitation wavelength was selected using optical filters and focused onto the sample using a 20× objective. A 1000MHz transducer was co-aligned with the laser spot and used for ultrasound and photoacoustic images, enabling micrometer resolution with both modalities. The different cell types could be easily identified due to variations in contrast within the acoustic and photoacoustic images. This technique provides a new way of probing leukocyte structure with potential applications towards detecting cellular abnormalities and diseased cells at the single cell level.
High resolution ultrasound and photoacoustic imaging of single cells
S2214579615000088
The non-contiguous access pattern of many scientific applications results in a large number of I/O requests, which can seriously limit data-access performance. Collective I/O has been widely used to address this issue. However, the performance of collective I/O can be dramatically degraded in today's high-performance computing systems due to the increasing shuffle cost caused by highly concurrent data accesses. This situation tends to become even worse as many applications grow more data-intensive. Previous research has primarily focused on optimizing the I/O access cost in collective I/O but has largely ignored the shuffle cost involved. Previous work assumes that the lowest average response time leads to the best QoS and performance, but that is not always true for collective requests once the additional shuffle cost is considered. In this study, we propose a new hierarchical I/O scheduling (HIO) algorithm to address the increasing shuffle cost in collective I/O. The fundamental idea is to schedule applications' I/O requests based on a shuffle cost analysis to achieve the best overall performance, instead of optimizing I/O accesses alone. The algorithm is currently evaluated with MPICH3 and PVFS2. Both theoretical analysis and experimental tests show that the proposed hierarchical I/O scheduling has the potential to address the degraded performance of collective I/O under highly concurrent accesses.
Hierarchical Collective I/O Scheduling for High-Performance Computing
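As a rough illustration of the shuffle-aware idea described above (not the paper's HIO algorithm), the sketch below orders collective requests by an estimated total cost that includes both access and shuffle components rather than access cost alone; the cost model, bandwidth figures, and field names are hypothetical.

```python
# Illustrative-only sketch: ordering collective I/O requests by an estimated
# total cost (access + shuffle) instead of access cost alone, in the spirit of
# the shuffle-aware scheduling idea above. Not the paper's HIO algorithm.
from dataclasses import dataclass

@dataclass
class CollectiveRequest:
    app_id: int
    bytes_accessed: int      # total bytes read/written from storage
    bytes_shuffled: int      # bytes exchanged between aggregators and clients

def estimated_cost(req, io_bandwidth=1e9, net_bandwidth=5e9):
    # Simple linear cost model: time spent on I/O plus time spent shuffling.
    return req.bytes_accessed / io_bandwidth + req.bytes_shuffled / net_bandwidth

def schedule(requests):
    # Greedy shortest-estimated-cost-first ordering; a stand-in for a real scheduler.
    return sorted(requests, key=estimated_cost)

if __name__ == "__main__":
    reqs = [CollectiveRequest(0, 8 << 20, 64 << 20),
            CollectiveRequest(1, 32 << 20, 2 << 20),
            CollectiveRequest(2, 16 << 20, 16 << 20)]
    for r in schedule(reqs):
        print(r.app_id, round(estimated_cost(r) * 1e3, 2), "ms (estimated)")
```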
S2214716014000049
The issue of selfish routing through a network has received a lot of attention in recent years. We study an atomic dynamic routing scenario in which players allocate resources with load-dependent costs only for some limited time. Our paper introduces a natural discrete version of the deterministic queuing model introduced by Koch and Skutella (2011). In this model, the time a user needs to traverse an edge e is given by a constant travel time plus the waiting time in a queue at the end of e. At each discrete time step, the first u_e users of the queue proceed to the end vertex of e, where u_e denotes the capacity of edge e. An important aspect of this model is that it ensures the FIFO property. We study the complexity of central algorithmic questions for this model, such as determining an optimal flow in an empty network, an optimal path in a congested network, or a maximum dynamic flow, and the question of whether a given flow is a Nash equilibrium. For the bottleneck case, where the cost of each user is the travel time of the slowest edge on her path, the main results are mostly bad news: computing social optima and Nash equilibria turns out to be NP-complete, and the Price of Anarchy is given by the number of users. We also consider the makespan objective (arrival time of the last user) and show that optimal solutions and Nash equilibria in these games, where every user selfishly tries to minimize her travel time, can be found efficiently.
Atomic routing in a deterministic queuing model
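The queuing dynamics described above are easy to simulate. The following is a small, self-written sketch (not taken from the paper): traversing edge e takes a constant travel time plus any queuing delay, and at each discrete step the first u_e queued users proceed, which preserves FIFO order.

```python
# Minimal self-written sketch of the discrete deterministic queuing model
# described above: each edge e has a constant travel time tau_e and a capacity
# u_e; at every time step the first u_e queued users cross, preserving FIFO.
from collections import defaultdict, deque

def simulate(edges, users, horizon=1000):
    """edges: {(v, w): (tau, capacity)}; users: list of (release_time, path)."""
    queues = defaultdict(deque)          # per-edge FIFO queue of (user, path index)
    pending = defaultdict(list)          # time -> [(user, index of next edge on path)]
    arrival = {}
    for uid, (t0, path) in enumerate(users):
        pending[t0].append((uid, 0))
    for t in range(horizon):
        # Users reaching the tail of their next edge join its queue.
        for uid, i in sorted(pending.pop(t, [])):
            path = users[uid][1]
            if i == len(path):
                arrival[uid] = t         # reached destination
            else:
                queues[path[i]].append((uid, i))
        # Each edge releases its first u_e queued users; they arrive tau_e steps later.
        for e, q in queues.items():
            tau, cap = edges[e]
            for _ in range(min(cap, len(q))):
                uid, i = q.popleft()
                pending[t + tau].append((uid, i + 1))
    return arrival

# Two users share edge ('s','v') with capacity 1, so the second one queues for a step.
edges = {('s', 'v'): (2, 1), ('v', 't'): (1, 2)}
users = [(0, [('s', 'v'), ('v', 't')]), (0, [('s', 'v'), ('v', 't')])]
print(simulate(edges, users))            # arrival times, e.g. {0: 3, 1: 4}
```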
S2214716015000123
We consider a generalization of the classical quadratic assignment problem in which the material flows between facilities are uncertain and belong to a budgeted uncertainty set. The objective is to find a solution that is robust under all possible scenarios in the given uncertainty set. We present an exact quadratic formulation as a robust counterpart and develop an equivalent mixed integer programming model for it. To solve the proposed model for large-scale instances, we also develop two heuristics based on 2-Opt local search and tabu search. We discuss the performance of these methods and the quality of the robust solutions through extensive computational experiments.
Robust quadratic assignment problem with budgeted uncertain flows
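To give a flavor of the heuristic family mentioned above, here is a generic 2-Opt (pairwise-swap) local search on a nominal QAP instance; it is a simplified stand-in and does not model the paper's robust, budgeted-uncertainty objective.

```python
# Generic 2-Opt (pairwise swap) local search for a nominal QAP instance, as a
# simplified illustration of the heuristic family mentioned above; the robust,
# budgeted-uncertainty objective of the paper is not modeled here.
import random

def qap_cost(flow, dist, perm):
    n = len(perm)
    return sum(flow[i][j] * dist[perm[i]][perm[j]] for i in range(n) for j in range(n))

def two_opt(flow, dist, perm):
    best = qap_cost(flow, dist, perm)
    improved = True
    while improved:
        improved = False
        n = len(perm)
        for i in range(n - 1):
            for j in range(i + 1, n):
                perm[i], perm[j] = perm[j], perm[i]      # try swapping two facilities
                cost = qap_cost(flow, dist, perm)
                if cost < best:
                    best, improved = cost, True
                else:
                    perm[i], perm[j] = perm[j], perm[i]  # undo the swap
    return perm, best

random.seed(0)
n = 6
flow = [[0 if i == j else random.randint(1, 9) for j in range(n)] for i in range(n)]
dist = [[0 if i == j else random.randint(1, 9) for j in range(n)] for i in range(n)]
print(two_opt(flow, dist, list(range(n))))
```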
S2214716015300117
In Operational Research practice there are almost always alternative paths that can be followed in the modeling and problem-solving process. Path dependence refers to the impact of the path on the outcome of the process. The steps of the path include, e.g., forming the problem-solving team, framing and structuring the problem, choosing the model, the order in which the different parts of the model are specified and solved, and the way in which data or preferences are collected. We identify and discuss seven possibly interacting origins or drivers of path dependence: systemic origins, learning, procedure, behavior, motivation, uncertainty, and external environment. We provide several ideas on how to cope with path dependence.
Path dependence in Operational Research—How the modeling process can influence the results
S2214782914000037
Internet interventions have great potential for alleviating emotional distress, promoting mental health, and enhancing well-being. Numerous clinical trials have demonstrated their efficacy for a number of psychiatric conditions, and interventions delivered via the Internet will likely become a common alternative to face-to-face treatment. Meanwhile, research has paid little attention to the negative effects associated with treatment, warranting further investigation of the possibility that some patients might deteriorate or encounter adverse events despite receiving the best available care. Evidence from research on face-to-face treatment suggests that negative effects, in terms of deterioration, afflict 5–10% of all patients undergoing treatment. However, there is currently a lack of consensus on how to define and measure negative effects in psychotherapy research in general, leaving researchers without practical guidelines for monitoring and reporting negative effects in clinical trials. The current paper therefore seeks to provide recommendations that could promote the study of negative effects in Internet interventions, with the aim of increasing knowledge of their occurrence and characteristics. Ten leading experts in the field of Internet interventions were invited to participate and share their perspectives on how to explore negative effects, using the Delphi technique to facilitate a dialog and reach an agreement. The authors discuss the importance of conducting research on negative effects in order to further the understanding of their incidence and different features. Suggestions on how to classify and measure negative effects in Internet interventions are proposed, involving methods from both quantitative and qualitative research. Potential mechanisms underlying negative effects are also discussed, differentiating common factors shared with face-to-face treatments from those unique to treatments delivered via the Internet. The authors conclude that negative effects are to be expected and need to be acknowledged to a greater extent, and they advise researchers to systematically probe for negative effects whenever conducting clinical trials involving Internet interventions, as well as to share their findings in scientific journals.
Consensus statement on defining and measuring negative effects of Internet interventions
S2214782914000050
Internet-based mental health resources often suffer from low engagement and retention. An increased understanding of engagement and attrition is needed to realize the potential of such resources. In this study, 45,142 individuals were screened for depression by an automated online screener, with 2,539 enrolling in a year-long monthly rescreening study; they received a single monthly reminder e-mail to rescreen their mood. We found that, even with such a minimal cohort maintenance strategy, a third of the participants completed 1 or more follow-ups, and 22% completed 2 or more follow-ups. Furthermore, completion of earlier follow-ups was highly predictive of future completions. We also found that a number of participant characteristics (e.g., current depression status, previous depression treatment seeking, and education level) predicted follow-up rates, singly or in interactions.
Participant retention in an automated online monthly depression rescreening program: Patterns and predictors
S2214782914000062
Study and treatment dropout and adherence represent particular challenges in studies of Internet-based interventions. However, systematic investigations of the relationship between study, intervention, and patient characteristics, participation, and intervention outcomes in online prevention are scarce. We reviewed participation in trials investigating a cognitive-behavioral, Internet-based, 8-week prevention program (StudentBodies™) for eating disorders, the moderators of participation, and the impact of participation on the relationship between outcome moderators and outcomes. Seven US and three German studies with a total of N = 1059 female participants were included. Two of the US trials and one of the German trials explicitly addressed high-risk samples in a selective prevention approach. Across studies, dropout rates ranged from 3% to 26%. The women who participated in the trials accessed on average between 49% and 83% of the assigned intervention content. None of the study characteristics (universal vs. selective prevention, incentives, country, participants' age) predicted adherence or study dropout. After adjusting for adherence, intervention outcomes (EDI Drive for Thinness and EDI Bulimia) were moderated only by participants' age, with smaller effects in one sample of adolescent girls. Adherence to StudentBodies™ proved to be high across a number of trials, settings, and countries. These findings are promising, but it is likely that adherence will be distinctly lower in the general public than in research settings, and that intervention effects will turn out to be smaller. However, the intervention is readily available at minimal cost per participant, and its public health impact may still be notable.
Participant adherence to the Internet-based prevention program StudentBodies™ for eating disorders — A review
S2214782914000074
The aim of this randomized controlled trial was to investigate the effects of guided Internet-based cognitive behavior therapy (ICBT) for posttraumatic stress disorder (PTSD). Sixty-two participants with chronic PTSD, as assessed by the Clinician-Administered PTSD Scale, were recruited via nationwide advertising and randomized to either treatment (n = 31) or a delayed-treatment attention control (n = 31). The ICBT treatment consisted of 8 weekly text-based modules containing psychoeducation, breathing retraining, imaginal and in vivo exposure, cognitive restructuring, and relapse prevention. Therapist support and feedback on homework assignments were given weekly via an online contact handling system. Assessments were made at baseline, post-treatment, and at 1-year follow-up. The main outcome measures were the Impact of Event Scale — Revised (IES-R) and the Posttraumatic Stress Diagnostic Scale (PDS). Results showed significant reductions in PTSD symptoms compared to the control group (between-group effects of Cohen's d = 1.25 on the IES-R and d = 1.24 on the PDS). There were also effects on depression symptoms, anxiety symptoms, and quality of life. The results at 1-year follow-up showed that treatment gains were maintained. In sum, these results suggest that ICBT with therapist support can significantly reduce PTSD symptoms.
Guided internet-delivered cognitive behavior therapy for post-traumatic stress disorder: A randomized controlled trial
S2214782914000086
Guided Internet-delivered cognitive behaviour therapy (ICBT) is efficacious for the treatment of a variety of clinical disorders (Spek et al., 2007), yet minimal research has investigated training students in guided ICBT. To contribute to the training literature, through qualitative interviews, this study explored how ICBT was perceived by student therapists (n = 12) trained in guided ICBT. Additionally, facilitators and challenges encountered by students learning guided ICBT were examined. Qualitative analysis revealed that students perceived training to enhance their professional skills in guided ICBT such as how to gain informed consent, address emergencies, and facilitate communication over the Internet. Students described guided ICBT as beneficial for novice therapists learning cognitive behaviour therapy as asynchronous communication allowed them to reflect on their clinical emails and seek supervision. Further, students perceived guided ICBT as an important skill for future practice and an avenue to improve patient access to mental health care. Specific facilitators of learning guided ICBT included having access to formal and peer supervision as well as technical assistance, ICBT modules, a functional web application, and detailed policies and procedures for the practice of guided ICBT. Challenges in delivering guided ICBT were also identified by participants such as finding time to learn the approach given other academic commitments, working with non-responsive clients, addressing multiple complex topics over email, and communicating through asynchronous emails. Based on the feedback collected from participants, recommendations for training in guided ICBT are offered along with future research directions.
A qualitative examination of psychology graduate students' experiences with guided Internet-delivered cognitive behaviour therapy
S2214782914000098
Objective: This study (ID: NCT01205906) compared the impact of the working alliance between the therapist and the client on treatment outcome in a group and an Internet-based cognitive behavior therapy (GCBT vs. ICBT) for chronic tinnitus. Methods: The Working Alliance Inventory — Short Revised (WAI-SR, scale range: 1–5) was administered to 26 GCBT and 38 ICBT participants after treatment weeks 2, 5, and 9, and the Tinnitus Handicap Inventory (THI) before and after the treatment. Results: High alliance ratings were found in both ICBT (WAI-SR total scores at week 9: M = 3.59, SD = 0.72) and GCBT (WAI-SR total scores at week 9: M = 4.20, SD = 0.49), but significantly higher ratings occurred in GCBT on most WAI-SR scales (ps < .01). Significant time × group interactions for most WAI-SR scales indicated differences in alliance growth patterns between the treatments (ps < .001). Residual gain scores for the therapy outcome measure ‘tinnitus distress’ were significantly correlated with the agreement on treatment tasks between therapist and client in ICBT (r = .40, p = .014) and with the affective therapeutic bond in GCBT (r = .40, p = .043) at mid-treatment (week 5). Conclusion: More time was needed to build a strong alliance in ICBT, although GCBT yielded generally higher alliance ratings. Moreover, different aspects of the therapeutic alliance might be important for treatment success in ICBT versus GCBT.
The working alliance in a randomized controlled trial comparing Internet-based self-help and face-to-face cognitive behavior therapy for chronic tinnitus
S2214782914000104
Further understanding is needed of the functionalities and efficiency of social media for health intervention research recruitment. Facebook was examined as a mechanism to recruit young adults for a smoking cessation intervention. An ad campaign targeting young adult smokers tested specific messaging based on market theory and successful strategies used to recruit smokers in previous clinical trials (i.e., informative, call to action, scarcity, social norms), previously successful ads, and general messaging. Images were selected to target smokers (e.g., a lit cigarette), appeal to the target age group, vary demographically, and vary graphically (cartoon, photo, logo). Facebook's Ads Manager was used over 7 weeks (6/10/13–7/29/13), targeted by age (18–25), location (U.S.), and language (English), and employed multiple ad types (newsfeed, standard, promoted posts, sponsored stories) and keywords. Ads linked to the online screening survey or the study Facebook page. The 36 different ads generated 3,198,373 impressions and 5895 unique clicks, at an overall cost of $2024 ($0.34/click). Images of smoking and newsfeed ads had the greatest reach and the most clicks at the lowest cost. Of the 5895 unique clicks, 586 (10%) were study eligible and 230 (39%) consented. Advertising costs averaged $8.80 per eligible, consented participant. The final study sample (n = 79) was largely Caucasian (77%) and male (69%), averaging 11 cigarettes/day (SD = 8.3) and 2.7 years of smoking (SD = 0.7). Facebook is a useful, cost-effective recruitment source for young adult smokers. Ads posted via newsfeed posts were particularly successful, likely because they were viewable via mobile phone. Efforts to engage more ethnic minorities, young women, and smokers motivated to quit are needed.
Facebook recruitment of young adult smokers for a cessation trial: Methods, metrics, and lessons learned
S2214782914000116
Background: There are first indications that an Internet-based cognitive therapy (CT) combined with monitoring by text messages (Mobile CT) and minimal therapist support (e-mail and telephone) is an effective approach to the prevention of relapse in depression. However, examining the acceptability of and adherence to Mobile CT is necessary to understand and increase the efficiency and effectiveness of this approach. Method: In this study we used a subset of a randomized controlled trial on the effectiveness of Mobile CT. A total of 129 remitted patients with at least two previous episodes of depression were available for analyses. All available information on demographic characteristics, the number of finished modules, therapist support uptake (telephone and e-mail), and acceptability perceived by the participants was gathered from automatically derived log data, therapists, and participants. Results: Of the 129 participants, 109 (84.5%) finished at least one of the eight modules of Mobile CT. Adherence, i.e. the proportion who completed the final module out of those who entered the first module, was 58.7% (64/109). None of the demographic variables studied were related to higher adherence. The total therapist support time per participant who finished at least one module of Mobile CT was 21min (SD = 17.5). Overall, participants rated Mobile CT as an acceptable treatment in terms of difficulty, time spent per module, and usefulness. However, one therapist mentioned that some participants experienced difficulties with using multiple CT-based challenging techniques. Conclusion: Overall uptake of the intervention and adherence were high, with a low time investment from therapists. This might be partially explained by the fact that the intervention was offered with therapist support by telephone (blended), reducing non-adherence, and that this high-risk group for depressive relapse started the intervention during remission. Nevertheless, our results indicate that Mobile CT is an acceptable and feasible approach for both participants and therapists.
Mobile Cognitive Therapy: Adherence and acceptability of an online intervention in remitted recurrently depressed patients