Columns: FileName (string, 17 chars), Abstract (string, 163–6.01k chars), Title (string, 12–421 chars)
S1476927115302371
The analysis of energy landscapes plays an important role in mathematical modelling, simulation and optimisation. Among the main features of interest are the number and distribution of local minima within the energy landscape. Garnier and Kallel proposed in 2002 a new sampling procedure for estimating the number of local minima. In the present paper, we focus on improved heuristic implementations of the general framework devised by Garnier and Kallel with regard to run-time behaviour and accuracy of predictions. The new heuristic method is demonstrated for the case of partial energy landscapes induced by RNA secondary structures. While the computation of minimum free energy RNA secondary structures has been studied for a long time, the analysis of folding landscapes has gained momentum over the past years in the context of co-transcriptional folding and deeper insights into cell processes. The new approach has been applied to ten RNA instances of length between 99 nt and 504 nt and their respective partial energy landscapes defined by secondary structures within an energy offset ΔE above the minimum free energy conformation. The number of local minima within the partial energy landscapes ranges from 1440 to 3441. For the best approximations, our heuristic method produces on average a deviation below 3.0% from the true number of local minima.
A new heuristic method for approximating the number of local minima in partial RNA energy landscapes
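The following is a minimal, illustrative sketch (not the Garnier–Kallel estimator nor the authors' heuristic) of the basic idea behind sampling-based counting of local minima: start steepest-descent walks from random points of a toy discrete landscape and count the distinct minima reached. The landscape, its neighbourhood structure and all parameters are placeholders.

```python
# Illustrative sketch only: count distinct local minima reached by downhill
# walks from random starting points on a toy 1-D landscape.
import random

def steepest_descent(energy, start):
    """Walk downhill on a toy 1-D lattice until no neighbour is lower."""
    i = start
    while True:
        neighbours = [j for j in (i - 1, i + 1) if 0 <= j < len(energy)]
        best = min(neighbours, key=lambda j: energy[j])
        if energy[best] >= energy[i]:
            return i                      # i is a local minimum
        i = best

def count_distinct_minima(energy, n_samples, seed=0):
    """Count the distinct local minima reached from random start points."""
    rng = random.Random(seed)
    return len({steepest_descent(energy, rng.randrange(len(energy)))
                for _ in range(n_samples)})

if __name__ == "__main__":
    rng = random.Random(1)
    toy_landscape = [rng.uniform(0.0, 10.0) for _ in range(200)]
    print(count_distinct_minima(toy_landscape, n_samples=500))
```

A real estimator of the total number of minima extrapolates from such repeated sampling rather than simply counting, since rarely reached basins are easily missed.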
S1476927115302383
Anti-epileptic drugs (AEDs) carry a high risk of teratogenic side effects, including neural tube defects, when the mother takes AEDs during pregnancy to prevent her own convulsions. The present study investigated the interaction of major marketed AEDs with human placental (hp)-cadherin protein in silico, to establish the role of hp-cadherin in teratogenicity and to evaluate the importance of the Ca2+ ion for the functioning of the protein. A set of 21 major marketed AEDs was selected for the study; the 3D structure of hp-cadherin was constructed using homology modelling and energy-minimized using MD simulations. Molecular docking studies were carried out using the selected AEDs as ligands with hp-cadherin (free and with bound Ca2+ ion) to study the behavioural changes in hp-cadherin due to the presence of the Ca2+ ion. The study showed that four AEDs (Gabapentin, Pregabalin, Remacemide and Vigabatrin) had very high affinity towards hp-cadherin, and thus the latter may have a prominent role in the teratogenic effects of these AEDs. From the docking simulation analysis it was observed that the Ca2+ ion is required to make hp-cadherin energetically favourable and sterically functional.
Homology modelling and molecular docking studies of human placental cadherin protein for its role in teratogenic effects of anti-epileptic drugs
S1476927115302395
As predominant component in the venom of many dangerous animal species, toxins have been thoroughly investigated for drug design or as pharmacologic tools. The present study demonstrated the use of size and hydrophobicity of amino acid residues for the purposes of quantifying the valuable sequence–structure relationship and performing further analysis of interactional mechanisms in secondary structure elements (SSEs) for toxin native conformations. First, we showed that the presence of large and hydrophobic residues varying in availability in the primary sequences correspondingly affects the amount of these residues being used in the SSEs in accordance with linear behavioral patterns from empirical assessments of experimentally derived toxins and non-toxins. Subsequent derivation of prediction rules was established with the aim of analyzing molecular structures and mechanisms by means of 114 residue compositions for venom toxins. The obtained results concerning the linear behavioral patterns demonstrated the nature of the information transfer occurring from the primary to secondary structures. A dual action mechanism was established, taking into account steric and hydrophobic interactions. Finally, a new residue composition prediction method for SSEs of toxins was suggested.
Analysis of molecular structures and mechanisms for toxins derived from venomous animals
S1476927115302401
Casein kinase-1 (CK1) isoforms actively participate in the down-regulation of the canonical Wnt signaling pathway; however, recent studies have shown their active roles in oncogenesis of various tissues through this pathway. Functional loss of two isoforms (CK1-α/ε) has been shown to activate the carcinogenic pathway, which involves the stabilization of cytoplasmic β-catenin. Development of anticancer therapeutics is a laborious task and depends upon the structural and conformational details of the target. This study focuses on how the structural dynamics and conformational changes of the two CK1 isoforms are synchronized in the carcinogenic pathway. The conformational dynamics of these kinases is responsible for their action, as supported by the molecular docking experiments.
Dynamic conformational ensembles regulate casein kinase-1 isoforms: Insights from molecular dynamics and molecular docking studies
S1476927115302413
Non-specific lipid transfer proteins (nsLTPs) are common allergens and they are particularly widespread within the plant kingdom. They have a highly conserved three-dimensional structure that generates a strong cross-reactivity among the members of this family. In recent years, several web tools for the prediction of allergenicity of new molecules based on their homology with known allergens have been released, and guidelines to assess the potential allergenicity of proteins through bioinformatics have been established. Although such tools are still only partially reliable, they can provide important indications when other kinds of molecular characterization are lacking. The potential allergenicity of 28 amino acid sequences of LTP homologs, either retrieved from the UniProt database or in silico deduced from the corresponding EST coding sequence, was predicted using 7 publicly available web tools. Moreover, their degree of similarity to their closest known LTP allergens was calculated, in order to evaluate their potential cross-reactivity. Finally, all sequences were studied for their degree of identity with the peach allergen Pru p 3, considering the regions involved in the formation of its known conformational IgE-binding epitope. Most of the analyzed sequences displayed a high probability of being allergenic according to all the software employed. The analyzed LTPs from bell pepper, cassava, mango, mungbean and soybean showed high homology (>70%) with some known allergenic LTPs, suggesting a potential risk of cross-reactivity for sensitized individuals. Other LTPs, such as those from canola, cassava, mango, mungbean, papaya or persimmon, displayed a high degree of identity with Pru p 3 within the consensus sequence responsible for the formation, at the three-dimensional level, of its major conformational epitope. Since recent studies have highlighted that, in patients mono-sensitized to peach LTP, IgE levels appear directly proportional to the chance of developing cross-reactivity to LTPs from non-Rosaceae foods, and that this chance increases the more similar a protein is to Pru p 3, these proteins should be taken into special account in future studies aimed at evaluating the risk of cross-allergenicity in highly sensitized individuals.
In silico allergenicity prediction of several lipid transfer proteins
S1476927115302425
G-protein-coupled receptors (GPCRs) are important targets of modern medicinal drugs. The accurate identification of interactions between GPCRs and drugs is of significant importance for both protein function annotation and drug discovery. In this paper, a new sequence-based predictor called TargetGDrug is designed and implemented for predicting GPCR–drug interactions. In TargetGDrug, the evolutionary feature of the GPCR sequence and the wavelet-based molecular fingerprint feature of the drug are integrated to form the combined feature of a GPCR–drug pair; then, the combined feature is fed to a trained random forest (RF) classifier to perform the initial prediction; finally, a novel drug-association-matrix-based post-processing procedure is applied to reduce potential false positives and false negatives in the initial prediction. Experimental results on benchmark datasets demonstrate the efficacy of the proposed method, and an improvement of 15% in the Matthews correlation coefficient (MCC) was observed over independent validation tests when compared with the most recently released sequence-based GPCR–drug interaction predictor. The implemented webserver, together with the datasets used in this study, is freely available for academic use at http://csbio.njust.edu.cn/bioinf/TargetGDrug.
GPCR–drug interactions prediction using random forest with drug-association-matrix-based post-processing procedure
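A minimal sketch of the core prediction step described in the abstract above, assuming placeholder feature arrays; the real TargetGDrug features (PSSM-derived sequence profiles, wavelet fingerprints) and its drug-association-matrix post-processing are not reproduced here.

```python
# Sketch: concatenate a GPCR feature vector with a drug fingerprint vector and
# classify the pair with a random forest. All data are random stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_pairs = 200
gpcr_feats = rng.normal(size=(n_pairs, 400))   # stand-in for sequence-derived features
drug_feats = rng.normal(size=(n_pairs, 128))   # stand-in for a fingerprint vector
X = np.hstack([gpcr_feats, drug_feats])        # combined GPCR-drug feature
y = rng.integers(0, 2, size=n_pairs)           # 1 = interacting pair (mock labels)

clf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
scores = clf.predict_proba(X)[:, 1]            # initial interaction scores
# The paper then post-processes such scores with a drug-association matrix to
# suppress likely false positives/negatives; that step is omitted here.
print(scores[:5])
```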
S1476927115302681
The standard method of the global quantitative analysis of gene expression at the protein level combines high-resolution two-dimensional gel electrophoresis (2DE) with mass spectrometric identification of protein spots. One of the major concerns with the application of gel-based proteomics is the need for analytical and biological accuracy of the datasets. We mathematically and empirically simulated the possibility that 2DE itself introduces apparent, purely technical regulation of gene expression. Our equation predicted a detectable alteration in the quantity of protein spots in response to a new protein added in various amounts. Testing the predictability of the developed equation, we observed that a new protein could form deceptive expression profiles when classified using prevalent tools for the analysis of 2DE results. In spite of the theoretically predicted overall reduction of proteins resulting from adding the new protein, the empirical data revealed differential amounts of proteins when various quantities of the new protein were added to the protein sample. The present work emphasizes that 2DE would not be a reliable approach for biological samples with extensive proteome alterations, such as the developmental and differentiation stages of cells, without depletion of highly abundant proteins.
Deceptive responsive genes in gel-based proteomics
S1476927115302772
Human ADAMs (a disintegrin and metalloproteinases) have been established as an attractive therapeutic target in inflammatory disorders such as inflammatory bowel disease (IBD). The ADAM metallopeptidase domain 17 (ADAM17 or TACE) and its close relative ADAM10 are two of the most important ADAM members; they share high conservation in sequence, structure and function, but exhibit subtle differences in the regulation of downstream cell signaling events. Here, we described a systematic protocol that combined computational modeling and experimental assay to discover novel peptide hydroxamate derivatives as potent and selective inhibitors of ADAM17 over ADAM10. In the procedure, a virtual combinatorial library of peptide hydroxamate compounds was generated by exploiting intermolecular interactions involved in crystal and modeled structures. The library was examined in detail to identify a few promising candidates with both high affinity to ADAM17 and low affinity to ADAM10, which were then tested in vitro with an enzyme inhibition assay. Consequently, two peptide hydroxamates, Hxm-Phe-Ser-Asn and Hxm-Phe-Arg-Gln, were found to exhibit potent inhibition against ADAM17 (Ki = 92 and 47 nM, respectively) and strong selectivity for ADAM17 over ADAM10 (∼7-fold and ∼5-fold; S = 0.86 and 0.71, respectively). The structural basis and energetic properties of the ADAM17 and ADAM10 interactions with the designed inhibitors were also investigated systematically. It was found that the exquisite network of nonbonded interactions involving the side chains of the peptide hydroxamates is primarily responsible for inhibitor selectivity, while the coordination interactions and hydrogen bonds formed by the hydroxamate moiety and the backbone of the peptide hydroxamates confer high affinity to inhibitor binding.
Molecular design and structural optimization of potent peptide hydroxamate inhibitors to selectively target human ADAM metallopeptidase domain 17
S1476927115302838
Neuronal polo-like kinase (nPLK) is an essential regulator of the cell cycle and differentiation in the nervous system, and targeting nPLK has been established as a promising therapeutic strategy to treat neurological disorders and to promote neuroregeneration. The protein contains an N-terminal kinase domain (KD) and a C-terminal Polo-box domain (PBD) that mutually inhibit each other. Here, the intramolecular KD–PBD complex in nPLK was investigated at the structural level via bioinformatics analysis, molecular dynamics (MD) simulation and binding affinity scoring. From the complex interface, two regions, each representing a continuous peptide fragment in the PBD domain, were identified as the hot spots of the KD–PBD interaction. Structural and energetic analysis suggested that one of the two peptides (PBD peptide 1) can bind tightly to a pocket near the active site of the KD domain and thus has potential as a self-inhibitory peptide to target and suppress nPLK kinase activity. The knowledge harvested from the computational studies was then used to guide the structural optimization and mutation of PBD peptide 1. Consequently, two of the three peptide mutants exhibited moderately and considerably increased affinity, respectively, as compared to the native peptide. The computationally modeled complex structures of the KD domain with these self-inhibitory peptides were also examined in detail to unravel the structural basis and energetic properties of nPLK–peptide recognition and interaction.
Structure-based design and confirmation of peptide ligands for neuronal polo-like kinase to promote neuroregeneration
S1476927115302991
Human epidermal growth factor receptor (EGFR) plays a central role in the pathological progression and metastasis of lung cancer; the development and clinical application of therapeutic agents that target the receptor provide important insights for new lung cancer therapies. The tumor-suppressor protein MIG6 is a negative regulator of EGFR, which can bind at the activation interface of the asymmetric dimer of EGFR kinase domains to disrupt dimerization and then inactivate the kinase (Zhang X. et al. Nature 2007, 450: 741–744). The protein uses two separate segments, i.e. MIG6 segment 1 and MIG6 segment 2, to directly interact with EGFR. Here, computational modeling and analysis of the intermolecular interaction between the EGFR kinase domain and the MIG6 segment 2 peptide revealed that the peptide folds into a two-stranded β-sheet composed of β-strand 1 and β-strand 2; only β-strand 2 can directly interact with the EGFR activation loop, leaving β-strand 1 apart from the kinase. A C-terminal island within β-strand 2 is primarily responsible for peptide binding; this island was truncated from MIG6 segment 2 and exhibited weak affinity for the EGFR kinase domain. Structural and energetic analysis suggested that phosphorylation at residues Tyr394 and Tyr395 of the truncated peptide can considerably improve EGFR affinity, and that mutation of other residues can further optimize the peptide binding capability. Subsequently, three derivative versions of the truncated peptide, including phosphorylated and dephosphorylated peptides as well as a double-point mutant, were synthesized and purified, and their affinities for the recombinant human EGFR kinase domain were determined by fluorescence anisotropy titration. As expected theoretically, the dephosphorylated peptide showed no observable binding to the kinase, while phosphorylation and mutation conferred low and moderate affinities on the peptide, respectively, suggesting good consistency between the computational analysis and the experimental assay.
Truncation, modification, and optimization of MIG6 segment 2 peptide to target lung cancer-related EGFR
S1476927116300135
Background: The statistical tests for single locus disease association are mostly under-powered. If a disease-associated causal single nucleotide polymorphism (SNP) operates essentially through a complex mechanism that involves multiple SNPs or possible environmental factors, its effect might be missed if the causal SNP is studied in isolation without accounting for these unknown genetic influences. In this study, we attempt to address the issue of reduced power that is inherent in single point association studies by accounting for genetic influences that negatively impact the detection of the causal variant in single point association analysis. In our method we use a propensity score (PS) to adjust for the effect of SNPs that influence the marginal association of a candidate marker. These SNPs might be in linkage disequilibrium (LD) and/or epistatic with the target SNP and have a joint interactive influence on the disease under study. We therefore propose a propensity score adjustment method (PSAM) as a tool for dimension reduction to improve the power of single locus studies, through an estimated PS that adjusts for influence from these SNPs while regressing disease status on the target genetic locus. The degree of freedom of such a test is therefore always restricted to 1. Results: We assess PSAM under the null hypothesis of no disease association to affirm that it correctly controls the type-I error rate (<0.05). PSAM displays reasonable power (>70%) and shows an average of 15% improvement in power as compared with the commonly used logistic regression method and PLINK under most simulated scenarios. Using the open-access multifactor dimensionality reduction dataset, PSAM displays improved significance for all disease loci. Through a whole genome study, PSAM was able to identify 21 SNPs from the GAW16 NARAC dataset by reducing their original trend-test p-values from between 0.001 and 0.05 to less than 0.0009; 6 of these SNPs were further found to be associated with immunity and inflammation. Conclusions: PSAM improves the significance of single-locus association of causal SNPs which have had marginal single point association, by adjusting for influence from other SNPs in a dataset. This would explain part of the missing heritability without increasing the complexity of the model due to huge multiple testing scenarios. The newly reported SNPs from the GAW16 data provide evidence for further research to elucidate the etiology of rheumatoid arthritis. PSAM is proposed as an exploratory tool that would be complementary to other existing methods. A downloadable user-friendly program, PSAM, written in SAS, is available for public use.
Using propensity score adjustment method in genetic association studies
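A rough Python sketch of the propensity-score adjustment idea described above (the published PSAM program is written in SAS); the genotype and phenotype data are simulated placeholders, and the genotype model is simplified to an ordinary least-squares expectation.

```python
# Sketch: (1) estimate a propensity score for the target SNP from other SNPs,
# (2) test disease ~ target SNP + propensity score, keeping a 1-df SNP test.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 1000
other_snps = rng.integers(0, 3, size=(n, 20))   # covariate SNP genotypes (0/1/2)
target_snp = rng.integers(0, 3, size=n)         # candidate SNP genotype
disease = rng.integers(0, 2, size=n)            # case/control status (mock)

# Step 1: propensity score = expected target genotype given the other SNPs
# (simplified here to an OLS expectation of the 0/1/2 genotype).
ps_model = sm.OLS(target_snp, sm.add_constant(other_snps)).fit()
propensity = ps_model.fittedvalues

# Step 2: logistic regression of disease on the target SNP adjusted for the
# score, so the SNP effect is still tested with a single degree of freedom.
X = sm.add_constant(np.column_stack([target_snp, propensity]))
fit = sm.Logit(disease, X).fit(disp=0)
print("p-value for the target SNP (1 df):", fit.pvalues[1])
```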
S1476927116300159
Kinesin-like protein (KIF11) is a molecular motor protein that is essential in mitosis. Removal of KIF11 prevents centrosome migration and causes cell arrest in mitosis. KIF11 defects are linked to microcephaly, lymphedema and mental retardation. The human KIF11 protein has been actively studied for its role in mitosis and its potential as a therapeutic target for cancer treatment. Pharmacophore modeling, molecular docking and density functional theory approaches were employed to reveal the structural, chemical and electronic features essential for the development of small-molecule inhibitors of KIF11. Hence, we developed chemical-feature-based pharmacophore models using Discovery Studio v 2.5 (DS). The best hypothesis (Hypo1), consisting of four chemical features (two hydrogen bond acceptors, one hydrophobic and one ring aromatic), exhibited a high correlation coefficient of 0.9521, a cost difference of 70.63 and a low RMS value of 0.9475. Hypo1 was cross-validated by the Cat Scramble method, a test set and a decoy set to prove its robustness, statistical significance and predictability, respectively. The well-validated Hypo1 was then used as a 3D query to perform virtual screening. The hits obtained from the virtual screening were subjected to rigorous drug-likeness filters such as Lipinski's rule of five and ADMET properties. Finally, six hit compounds were identified based on their molecular interactions and electronic properties. Our final lead compound could serve as a powerful starting point for the discovery of potent KIF11 inhibitors.
Investigation on the isoform selectivity of novel kinesin-like protein 1 (KIF11) inhibitor using chemical feature based pharmacophore, molecular docking, and quantum mechanical studies
S1476927116300160
The DNA-binding protein TDP43 is a major protein involved in amyotrophic lateral sclerosis (ALS) and other neurological disorders such as frontotemporal dementia, Alzheimer disease, etc. In the present study, we have designed possible siRNAs for the glycine-rich region of tardbp mutants causing ALS, based on a systematic theoretical approach including (i) identification of the respective codons for all mutants (reported at the protein level) based on both minimum free energy and probabilistic approaches, (ii) rational design of siRNA, (iii) secondary structure analysis for the target accessibility of siRNA, (iv) determination of the ability of siRNA to interact with mRNA and the formation/stability of the duplex via a molecular dynamics study over a period of 15 ns and (v) characterization of mRNA–siRNA duplex stability based on thermo-physical analysis. The stable GC-rich siRNA expressed strong binding affinity towards mRNA and formed a stable duplex in the A-form. The linear dependence between the thermo-physical parameters, such as Tm, GC content and binding free energy, revealed that the ability of the identified siRNAs to interact with mRNA is comparable to that of experimentally reported siRNAs. Hence, the present study proposes a few siRNAs as possible gene-silencing agents in RNAi therapy based on this in silico approach.
Identification of possible siRNA molecules for TDP43 mutants causing amyotrophic lateral sclerosis: In silico design and molecular dynamics study
S1476927116300317
Ovarian carcinoma is the fifth-leading cause of cancer death among women in the United States. Major reasons for this persistent mortality include the poor understanding of the underlying biology and a lack of reliable biomarkers. Previous studies have shown that aberrantly expressed MicroRNAs (miRNAs) are involved in carcinogenesis and tumor progression by post-transcriptionally regulating gene expression. However, the interference of miRNAs in tumorigenesis is quite complicated and far from being fully understood. In this work, by an integrative analysis of mRNA expression, miRNA expression and clinical data published by The Cancer Genome Atlas (TCGA), we studied the modularity and dynamicity of miRNA–mRNA interactions and the prognostic implications in high-grade serous ovarian carcinomas. With the top transcriptional correlations (Bonferroni-adjusted p-value<0.01) as inputs, we identified five miRNA–mRNA module pairs (MPs), each of which included one positive-connection (correlation) module and one negative-connection (correlation) module. The number of miRNAs or mRNAs in each module varied from 3 to 7 or from 2 to 873. Among the four major negative-connection modules, three fit well with the widely accepted miRNA-mediated post-transcriptional regulation theory. These modules were enriched with the genes relevant to cell cycle and immune response. Moreover, we proposed two novel algorithms to reveal the group or sample specific dynamic regulations between these two RNA classes. The obtained miRNA–mRNA dynamic network contains 3350 interactions captured across different cancer progression stages or tumor grades. We found that those dynamic interactions tended to concentrate on a few miRNAs (e.g. miRNA-936), and were more likely present on the miRNA–mRNA pairs outside the discovered modules. In addition, we also pinpointed a robust prognostic signature consisting of 56 modular protein-coding genes, whose co-expression patterns were predictive for the survival time of ovarian cancer patients in multiple independent cohorts.
The modularity and dynamicity of miRNA–mRNA interactions in high-grade serous ovarian carcinomas and the prognostic implication
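An illustrative sketch of the first filtering step mentioned above, assuming toy expression matrices: compute miRNA–mRNA correlations across samples and keep the pairs passing a Bonferroni-adjusted cutoff. The module discovery and dynamic-network algorithms of the study are not reproduced.

```python
# Sketch: Bonferroni-filtered miRNA-mRNA correlation edges on mock data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_samples, n_mirna, n_mrna = 60, 30, 200
mirna = rng.normal(size=(n_mirna, n_samples))   # mock miRNA expression
mrna = rng.normal(size=(n_mrna, n_samples))     # mock mRNA expression

alpha = 0.01 / (n_mirna * n_mrna)               # Bonferroni-adjusted cutoff
edges = []
for i in range(n_mirna):
    for j in range(n_mrna):
        r, p = stats.pearsonr(mirna[i], mrna[j])
        if p < alpha:
            edges.append((i, j, r))             # signed correlation edge
print(len(edges), "miRNA-mRNA pairs pass the cutoff")
```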
S1476927116300329
In mass spectrometry-based shotgun proteomics, protein quantification and protein identification are two major computational problems. To quantify the protein abundance, a list of proteins must first be inferred from the raw data. Then the relative or absolute protein abundance is estimated with quantification methods, such as spectral counting. Until now, most researchers have been dealing with these two processes separately. In fact, the protein inference problem can be regarded as a special protein quantification problem in the sense that truly present proteins are those proteins whose abundance values are not zero. Some recently published papers have conceptually discussed this possibility. However, there is still a lack of rigorous experimental studies to test this hypothesis. In this paper, we investigate the feasibility of using protein quantification methods to solve the protein inference problem. Protein inference methods aim to determine whether each candidate protein is present in the sample or not. Protein quantification methods estimate the abundance value of each inferred protein. Naturally, the abundance value of an absent protein should be zero. Thus, we argue that the protein inference problem can be viewed as a special protein quantification problem in which one protein is considered to be present if its abundance is not zero. Based on this idea, our paper tries to use three simple protein quantification methods to solve the protein inference problem effectively. The experimental results on six data sets show that these three methods are competitive with previous protein inference algorithms. This demonstrates that it is plausible to model the protein inference problem as a special protein quantification task, which opens the door to devising more effective protein inference algorithms from a quantification perspective. The source codes of our methods are available at: http://code.google.com/p/protein-inference/.
Protein inference: A protein quantification perspective
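A toy sketch of the argument above, with a hypothetical input format: score each candidate protein by sharing peptide spectral counts among the proteins that could explain them, then call a protein present when its estimated abundance is non-zero.

```python
# Sketch: "inference as quantification" on made-up peptide evidence.
from collections import defaultdict

# peptide -> (spectral count, candidate proteins that could have produced it);
# identifiers and counts are illustrative only.
peptides = {
    "PEPTIDEA": (12, ["P1"]),
    "PEPTIDEB": (3,  ["P1", "P2"]),
    "PEPTIDEC": (0,  ["P2"]),
    "PEPTIDED": (7,  ["P3"]),
}

abundance = defaultdict(float)
for count, proteins in peptides.values():
    for prot in proteins:
        # share a peptide's spectral count equally among its candidate proteins
        abundance[prot] += count / len(proteins)

present = {p for p, a in abundance.items() if a > 0}   # inference by abundance
print(dict(abundance), present)
```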
S1476927116300421
Human intestinal absorption (HIA) of drugs administered through the oral route constitutes an important criterion for candidate molecules. A computational approach for predicting the HIA of molecules may expedite the screening of new drugs. In this study, ensemble learning (EL) based qualitative and quantitative structure–activity relationship (SAR) models (gradient boosted tree, GBT, and bagged decision tree, BDT) have been established for the binary classification and HIA prediction of chemicals, using the selected molecular descriptors. The structural diversity of the chemicals and the nonlinear structure in the considered data were tested by the similarity index and the Brock–Dechert–Scheinkman statistics. The external predictive power of the developed SAR models was evaluated through the internal and external validation procedures recommended in the literature. All the statistical criteria parameters derived for the performance of the constructed SAR models were above their respective thresholds, suggesting their robustness for future applications. On the complete data, the qualitative SAR models yielded a classification accuracy of >99%, while the quantitative SAR models yielded a correlation (R2) of >0.91 between the measured and predicted HIA values. The performance of the EL-based SAR models was also compared with that of linear models (linear discriminant analysis, LDA, and multiple linear regression, MLR). The GBT and BDT SAR models performed better than the LDA and MLR methods. A comparison of our models with the previously reported QSARs for HIA prediction suggested their better performance. The results suggest that the developed SAR models can reliably predict the HIA of structurally diverse chemicals and can serve as useful tools for the initial screening of molecules in the drug development process.
Predicting human intestinal absorption of diverse chemicals using ensemble learning based QSAR modeling approaches
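A hedged sketch of the ensemble-learning setup described above using scikit-learn stand-ins (gradient boosted trees and bagged decision trees) on placeholder descriptors; the actual descriptor selection, validation protocol and data of the study are not reproduced.

```python
# Sketch: GBT and BDT regression QSAR models on mock molecular descriptors.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, BaggingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 40))                                  # mock descriptors
y = 2.0 * X[:, 0] - X[:, 1] + rng.normal(scale=0.3, size=500)   # mock %HIA values

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
gbt = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
# BaggingRegressor uses a decision tree as its default base learner.
bdt = BaggingRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("GBT R2:", r2_score(y_te, gbt.predict(X_te)))
print("BDT R2:", r2_score(y_te, bdt.predict(X_te)))
```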
S1476927116300433
HIV-1 membrane fusion plays an important role in the process by which HIV-1 enters host cells. As a treatment strategy targeting the HIV-1 entry process, fusion inhibitors have been proposed. Nevertheless, developing a short peptide possessing high anti-HIV potency is considered a daunting challenge. He et al. found that two residues, Met626 and Thr627, located upstream of the C-terminal heptad repeat of gp41, form a unique hook-like structure (M-T hook) that can dramatically improve the binding stability and anti-HIV activity of the inhibitors. In this work, we explored the molecular mechanism by which the M-T hook structure improves the anti-HIV activity of inhibitors. First, molecular dynamics simulation was used to obtain information on the time evolution of the interaction between gp41 and the ligands. Second, based on the simulations, molecular mechanics Poisson–Boltzmann surface area (MM-PBSA) and molecular mechanics Generalized Born surface area (MM-GBSA) methods were used to calculate the binding free energies. The binding free energy of the ligand with the M-T hook was considerably higher than that of the ligand without it. Further studies showed that the hydrophobic interactions made the dominant contribution to the binding free energy. The ligand with the M-T hook structure also formed more hydrogen bonds with gp41 than the one without it. These findings should provide insights into the inhibition mechanism of short peptide fusion inhibitors and be useful for the rational design of novel fusion inhibitors in the future.
Insights into the Functions of M-T Hook Structure in HIV Fusion Inhibitor Using Molecular Modeling
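For reference, the standard end-state decomposition used by MM-PBSA/MM-GBSA is sketched below; the exact terms and averaging scheme used in the study may differ, and the PB and GB variants differ only in how the polar solvation term is computed.

```latex
\Delta G_{\mathrm{bind}} = \langle G_{\mathrm{complex}}\rangle - \langle G_{\mathrm{gp41}}\rangle - \langle G_{\mathrm{ligand}}\rangle,
\qquad
G = E_{\mathrm{MM}} + G_{\mathrm{solv}}^{\mathrm{polar}} + G_{\mathrm{solv}}^{\mathrm{nonpolar}} - T S_{\mathrm{conf}},
\qquad
E_{\mathrm{MM}} = E_{\mathrm{int}} + E_{\mathrm{ele}} + E_{\mathrm{vdW}}
```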
S1476927116300445
The search for novel, safe and effective anti-inflammatory agents remains an active line of enquiry in the field of inflammatory disorders. In the present investigation, a series of thiazoles bearing pyrazole as a possible pharmacophore was synthesized and assessed for anti-inflammatory activity using in vitro and in vivo methods. In order to decipher the possible anti-inflammatory mechanism of action of the synthesized compounds, cyclooxygenase I and II (COX-I and COX-II) inhibition assays were also carried out. The results obtained clearly highlight the significance of compounds 5d, 5h and 5i as selective COX-II inhibitors. Moreover, compound 5h was also identified as a lead molecule for inhibition of carrageenin-induced rat paw edema in animal model studies. Molecular docking results revealed significant interactions of the test compounds with the active site of COX-II, which can perhaps be explored for the design and development of novel COX-II selective anti-inflammatory agents.
Synthesis and in silico investigation of thiazoles bearing pyrazoles derivatives as anti-inflammatory agents
S1476927116300457
Angiopoietin-like protein 8 (ANGPTL8), also known as betatrophin, is a newly identified secretory protein with a potential role in autophagy, lipid metabolism and pancreatic beta-cell proliferation. Its structural characterization is required to enhance our current understanding of its mechanism of action, which could help in identifying its receptor and/or other binding partners. Based on the physiological significance and the necessity of exploring the structural features of ANGPTL8, the present study was conducted with the specific aim of modeling the structure of ANGPTL8 and studying its possible interactions with lipoprotein lipase (LPL). To the best of our knowledge, this is the first attempt to predict the 3-dimensional (3D) structure of ANGPTL8. Three different approaches were used for the modeling of ANGPTL8, namely homology modeling, de novo structure prediction and their amalgam, followed by structure verification using ERRAT, ProSA, QMEAN and Ramachandran plot scores. The selected models of ANGPTL8 were further evaluated for protein–protein interaction (PPI) analysis with LPL using the CPORT and HADDOCK servers. Our results show that the crystal structure of the iSH2 domain of the phosphatidylinositol 3-kinase (PI3K) p85β subunit (PDB entry: 3mtt) is a good candidate for homology modeling of ANGPTL8. Analysis of the inter-molecular interactions between the structures of ANGPTL8 and LPL revealed the existence of several non-covalent interactions. The residues of LPL involved in these interactions belong to its lid region, thrombospondin (TSP) region and heparin-binding site, which is suggestive of a possible role of ANGPTL8 in regulating the proteolysis, motility and localization of LPL. Besides, the conserved residues of the SE1 region of ANGPTL8 formed interactions with residues around the hinge region of LPL. Overall, our results support a model of inhibition of LPL by ANGPTL8 through steric blocking of its catalytic site, which will be further explored in future wet-lab studies.
Structural characterization of ANGPTL8 (betatrophin) with its interacting partner lipoprotein lipase
S1476927116300469
Protein structure prediction is considered one of the most challenging and computationally intractable combinatorial problems. Thus, the efficient modeling of the convoluted search space, the clever use of energy functions and, more importantly, the use of effective sampling algorithms become crucial to address this problem. For protein structure modeling, an off-lattice model provides limited scope to exercise and evaluate algorithmic developments due to its astronomically large set of data-points. In contrast, an on-lattice model widens the scope and permits studying relatively larger proteins because of its finite set of data-points. In this work, we took full advantage of an on-lattice model by using a face-centered-cube lattice, which has the highest packing density with the maximum degree of freedom. We proposed a graded-energy-based genetic algorithm (GA) for conformational search, which strategically mixes the Miyazawa–Jernigan (MJ) energy with the hydrophobic-polar (HP) energy. In our application, we introduced a 2×2 HP-energy-guided macro-mutation operator within the GA to exhaustively explore the best possible local changes. Conversely, the 20×20 MJ energy model—the ultimate objective function of our GA that needs to be minimized—considers the interactions amongst the 20 different amino acids and allows searching for globally acceptable conformations. On a set of benchmark proteins, our proposed approach outperformed state-of-the-art approaches in terms of the free energy levels and the root-mean-square deviations.
Guided macro-mutation in a graded energy based genetic algorithm for protein structure prediction
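A highly simplified sketch of the search strategy described above: a genetic algorithm whose selection is driven by a fine-grained energy (a placeholder standing in for the 20×20 MJ model) while a macro-mutation operator is guided by a cheaper coarse score (standing in for the HP energy). The FCC lattice geometry, self-avoidance constraints and the real energy matrices are deliberately omitted; conformations are abstract integer vectors.

```python
# Sketch: graded-energy GA with a coarse-score-guided macro-mutation operator.
import random

SEQ_LEN = 30
rng = random.Random(0)

def hp_energy(conf):
    """Coarse, cheap score (placeholder for the HP contact energy)."""
    return sum(abs(x) for x in conf)

def mj_energy(conf):
    """Fine-grained objective (placeholder for the 20x20 MJ contact energy)."""
    return sum(x * x for x in conf) + 0.1 * hp_energy(conf)

def macro_mutate(conf):
    """Try several local changes and keep the one the coarse score likes best."""
    candidates = []
    for _ in range(8):
        c = list(conf)
        i = rng.randrange(SEQ_LEN)
        c[i] += rng.choice([-1, 1])
        candidates.append(c)
    return min(candidates, key=hp_energy)

def crossover(a, b):
    cut = rng.randrange(1, SEQ_LEN)
    return a[:cut] + b[cut:]

population = [[rng.randint(-3, 3) for _ in range(SEQ_LEN)] for _ in range(50)]
for generation in range(200):
    population.sort(key=mj_energy)        # fine-grained energy drives selection
    parents = population[:25]
    children = [macro_mutate(crossover(rng.choice(parents), rng.choice(parents)))
                for _ in range(25)]
    population = parents + children
print("best placeholder energy:", min(mj_energy(c) for c in population))
```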
S1476927116300500
Classical sequencing by hybridization takes into account only binary information about sequence composition: a given element from an oligonucleotide library is, or is not, a part of the target sequence. However, DNA chip technology has developed to the point where it is possible to obtain partial information about the multiplicity of each oligonucleotide the analyzed sequence consists of. Currently, it is not possible to obtain exact data of this type, but even partial information should be very useful. Two realistic multiplicity information models are taken into consideration in this paper. The first one, called "one and many", assumes that it is possible to obtain information on whether a given oligonucleotide occurs in a reconstructed sequence once or more than once. According to the second model, called "one, two and many", one is able to learn from the biochemical experiment whether a given oligonucleotide is present in an analyzed sequence once, twice or at least three times. An ant colony optimization algorithm has been implemented to verify the above models and to compare them with existing algorithms for sequencing by hybridization which utilize the additional information. The proposed algorithm solves the problem with any kind of hybridization errors. Computational experiment results confirm that using even partial information about multiplicity leads to increased quality of the reconstructed sequences. Moreover, they also show that the more precise model enables better solutions to be obtained and that the ant colony optimization algorithm outperforms the existing ones. Test data sets and the proposed ant colony optimization algorithm are available at: http://bioserver.cs.put.poznan.pl/download/ACO4mSBH.zip.
A multilevel ant colony optimization algorithm for classical and isothermic DNA sequencing by hybridization with multiplicity information available
S1476927116300512
Carcinogenicity prediction is an important process that can be performed to cut down experimental costs and save animal lives. The current reliability of the results is, however, disputed. Here, a blind exercise in carcinogenicity category assessment is performed using augmented top priority fragment classification. The procedure analyses the applicability domain of the dataset and allocates the compounds to clusters using a leading molecular fragment and a similarity measure. The exercise is applied to three compound datasets derived from the Lois Gold Carcinogenic Database. The results, showing good agreement with experimental data, are compared with published ones. A final discussion of our viewpoint on the possibilities that the carcinogenicity modelling of chemical compounds offers is presented.
Carcinogenicity prediction of noncongeneric chemicals by augmented top priority fragment classification
S1476927116300536
Human Leukocyte Antigens (HLA) are highly polymorphic proteins that play a key role in the immune system. The HLA molecule is present on the cell membrane of antigen-presenting cells of the immune system and presents short peptides, originating from the proteins of invading pathogens or from self-proteins, to the T-cell Receptor (TCR) molecule of T-cells. In this study, the peptide-binding characteristics of the HLA-B*44:02, 44:03 and 44:05 alleles bound to three nonameric peptides were studied using molecular dynamics simulations. Polymorphisms among these alleles (Asp116Tyr and Asp156Leu) result in major differences in the allele characteristics. While HLA-B*44:02 (Asp116, Asp156) and HLA-B*44:03 (Asp116, Leu156) depend on tapasin for efficient peptide loading, HLA-B*44:05 (Tyr116, Asp156) is tapasin independent. On the other hand, HLA-B*44:02 and HLA-B*44:03 mismatch is closely related to transplant rejection and acute graft-versus-host disease. In order to understand the dynamic characteristics, the simulation trajectories were analyzed by applying Root Mean Square Deviation (RMSD) and Root Mean Square Fluctuation (RMSF) calculations and hydrogen bonding analysis. The binding dynamics of the three HLA-B*44 alleles and peptide sequences are comparatively discussed. In general, peptide binding stability is found to depend on the peptide rather than on the allele type for the HLA-B*44 alleles.
Dynamic characterization of HLA-B*44 Alleles: A comparative molecular dynamics simulation study
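A small NumPy sketch of the trajectory analysis mentioned above: per-frame RMSD against the first frame and per-atom RMSF around the mean structure. Coordinates here are random placeholders; a real analysis would first read and superpose frames from the MD trajectory (e.g. with MDAnalysis or mdtraj).

```python
# Sketch: RMSD and RMSF on mock, already-aligned coordinates.
import numpy as np

rng = np.random.default_rng(4)
n_frames, n_atoms = 100, 300
traj = rng.normal(size=(n_frames, n_atoms, 3))   # fake aligned coordinates

ref = traj[0]
# RMSD per frame: root of the mean (over atoms) squared displacement from frame 0
rmsd = np.sqrt(((traj - ref) ** 2).sum(axis=2).mean(axis=1))

# RMSF per atom: root of the mean (over frames) squared deviation from the mean position
mean_pos = traj.mean(axis=0)
rmsf = np.sqrt(((traj - mean_pos) ** 2).sum(axis=2).mean(axis=0))

print("RMSD of the last frame:", rmsd[-1])
print("largest per-atom RMSF:", rmsf.max())
```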
S1476927116300548
Metalloproteases involved in extracellular matrix remodeling play a pivotal role in the cell response by regulating the bioavailability of cytokines and growth factors. Recently, the disintegrin and metalloprotease ADAMTS1 has been demonstrated to be able to activate the transforming growth factor TGF-β, a major factor in fibrosis and cancer. The KTFR sequence from ADAMTS1 is responsible for the interaction with the LSKL peptide from the latent form of TGF-β, leading to its activation. While the atomic details of the interaction site could form the basis for the rational design of efficient inhibitory molecules, the binding mode of the interaction is totally unknown. In this study, we show that recombinant fragments of human ADAMTS1 containing the KTFR sequence keep the ability to bind the latent form of TGF-β. The recombinant fragment with the best affinity was modeled to investigate the binding mode of the LSKL peptide with ADAMTS1 at the atomic level. Using a combined approach of molecular docking and multiple independent molecular dynamics (MD) simulations, we provide the binding mode of the LSKL peptide with ADAMTS1. The MD simulations starting from the two lowest-energy models predicted by molecular docking show stable interactions characterized by three salt bridges (K3–NH3+ with E626–COO−; L4–COO− with K619–NH3+; L1–NH3+ with E624–COO−) and two hydrogen bonds (S2–OH with E623–COO−; L4–NH with E623–COO−). The knowledge of this interaction mechanism paves the way to the design of more potent and more specific inhibitors against the inappropriate activation of TGF-β by ADAMTS1 in liver diseases.
In silico characterization of the interaction between LSKL peptide, a LAP-TGF-beta derived peptide, and ADAMTS1
S147692711630072X
In recent years, computer-aided redesign methods based on genome-scale metabolic network models (GEMs) have played important roles in metabolic engineering studies; however, most of these methods are hindered by intractable computing times. In particular, methods that predict knockout strategies leading to overproduction of a desired biochemical are generally unable to make high-level predictions because the computational time increases exponentially. In this study, we propose a new framework named IdealKnock, which is able to efficiently evaluate the production potential of different biochemicals in a system by merely knocking out pathways. In addition, it is also capable of searching for knockout strategies when combined with the OptKnock or OptGene framework. Furthermore, unlike other methods, IdealKnock suggests a series of mutants with targeted overproduction, which enables researchers to select the one of greatest interest for experimental validation. By testing the overproduction of a large number of native metabolites, IdealKnock showed its advantage in successfully breaking through the limitation on the maximum knockout number in reasonable time and in suggesting knockout strategies with better performance than other methods. In addition, gene–reaction relationships are well considered in the proposed framework.
IdealKnock: A framework for efficiently identifying knockout strategies leading to targeted overproduction
S1476927116300986
Nipah virus and Hendra virus, two members of the genus Henipavirus, are newly emerging zoonotic pathogens which cause acute respiratory illness and severe encephalitis in humans. The lack of effective antiviral therapy underscores the urgency of developing a vaccine against these deadly viruses. In this study, we employed various computational approaches to identify epitopes which have potential for vaccine development. By analyzing the immune parameters of the conserved sequences of the G glycoprotein using various databases and bioinformatics tools, we identified two potential epitopes which may be used as peptide vaccines. Using different B-cell epitope prediction servers, four highly similar B-cell epitopes were identified. Immunoinformatics analyses revealed that LAEDDTNAQKT is a highly flexible B-cell epitope that is accessible to antibody. Highly similar putative CTL epitopes were analyzed for their binding with the HLA-C*12:03 molecule. A docking simulation assay revealed that LTDKIGTEI has significantly lower binding energy, which bolsters its potential for epitope-based vaccine design. Finally, cytotoxicity analysis also justified its potential as a promising epitope-based vaccine candidate. In sum, our computational analysis indicates that either the LAEDDTNAQKT or the LTDKIGTEI epitope holds promise for the development of a universal vaccine against all kinds of pathogenic Henipavirus. Further in vivo and in vitro studies are necessary to validate the obtained findings.
Two highly similar LAEDDTNAQKT and LTDKIGTEI epitopes in G glycoprotein may be useful for effective epitope based vaccine design against pathogenic Henipavirus
S1476927116301475
The coactivators CBP (CREBBP) and its paralog p300 (EP300), two conserved multi-domain proteins in eukaryotic organisms, regulate gene expression in part by binding DNA-binding transcription factors. It was previously reported that the CBP/p300 KIX domain mutant (Y650A, A654Q, and Y658A) altered both c-Myb-dependent gene activation and repression, and that mice with these three point mutations had reduced numbers of platelets, B cells, T cells, and red blood cells. Here, our transient transfection assays demonstrated that mouse embryonic fibroblast cells containing the same mutations in the KIX domain and without a wild-type allele of either CBP or p300, showed decreased c-Myb-mediated transcription. Dr. Wright’s group solved a 3-D structure of the mouse CBP:c-Myb complex using NMR. To take advantage of the experimental structure and function data and improved theoretical calculation methods, we performed MD simulations of CBP KIX, CBP KIX with the mutations, and c-Myb, as well as binding energy analysis for both the wild-type and mutant complexes. The binding between CBP and c-Myb is mainly mediated by a shallow hydrophobic groove in the center where the side-chain of Leu302 of c-Myb plays an essential role and two salt bridges at the two ends. We found that the KIX mutations slightly decreased stability of the CBP:c-Myb complex as demonstrated by higher binding energy calculated using either MM/PBSA or MM/GBSA methods. More specifically, the KIX mutations affected the two salt bridges between CBP and c-Myb (CBP-R646 and c-Myb-E306; CBP-E665 and c-Myb-R294). Our studies also revealed differing dynamics of the hydrogen bonds between CBP-R646 and c-Myb-E306 and between CBP-E665 and c-Myb-R294 caused by the CBP KIX mutations. In the wild-type CBP:c-Myb complex, both of the hydrogen bonds stayed relatively stable. In contrast, in the mutant CBP:c-Myb complex, hydrogen bonds between R646 and E306 showed an increasing trend followed by a decreasing trend, and hydrogen bonds of the E665:R294 pair exhibited a fast decreasing trend over time during MD simulations. In addition, our data showed that the KIX mutations attenuate CBP’s hydrophobic interaction with Leu302 of c-Myb. Furthermore, our 500-ns MD simulations showed that CBP KIX with the mutations has a slightly lower potential energy than wild-type CBP. The CBP KIX structures with or without its interacting protein c-Myb are different for both wild-type and mutant CBP KIX, and this is likewise the case for c-Myb with or without CBP, suggesting that the presence of an interacting protein influences the structure of a protein. Taken together, these analyses will improve our understanding of the exact functions of CBP and its interaction with c-Myb.
Experimental and molecular dynamics studies showed that CBP KIX mutation affects the stability of CBP:c-Myb complex
S1476927116301542
Nuclear factor kappa B (NF-κB) is a transcription factor that plays a crucial role in the regulation of various physiological processes such as differentiation, cell proliferation and apoptosis. It also coordinates the expression of various soluble proinflammatory mediators such as cytokines and chemokines. 1,8-Dihydroxy-4-methylanthracene-9,10-dione (DHMA) was isolated from Luffa acutangula, and its in vitro cytotoxic activity against NCI-H460 cells was reported earlier. It also effectively induces apoptosis through suppressing the expression of the NF-κB protein. Based on this experimental evidence, the binding affinity of compound 1 with NF-κB p50 (monomer) and with NF-κB–DNA was investigated using molecular docking, and its stability was confirmed through molecular dynamics simulation. The reactivity of the compound was evaluated using density functional theory (DFT) calculations. From the docking results, we noticed that the hydroxyl group of DHMA forms hydrogen bond interactions with the polar and negatively charged amino acids Tyr57 and Asp239, and that the protein–ligand complex is stabilized through pi–pi stacking with the polar amino acid His114, which plays a key role in the binding of NF-κB to DNA at the DNA-binding region (DBR). The results indicate that the isolated bioactive compound DHMA might alter the binding affinity between DNA and NF-κB. These findings suggest the potential use of DHMA in cancer chemoprevention and therapeutics.
Exploring the inhibitory potential of bioactive compound from Luffa acutangula against NF-κB—A molecular docking and dynamics approach
S1477842413000031
In this paper, we present Monaco – a domain-specific language for developing event-based, reactive process control programs – and its visual interactive programming environment. The main purpose of the language is to bring process control programming closer to domain experts. Important design goals have therefore been to keep the language concise and to allow programs to be written that reflect the perceptions of domain experts. Monaco is similar to Statecharts in its expressive power, but adopts an imperative notation. Moreover, Monaco uses a state-of-the-art component approach with interfaces and polymorphic implementations, and enforces strict hierarchical component architectures that support hierarchical abstraction of control functionality. We present the main design goals, the essential programming elements, the visual interactive programming environment, results from industrial case studies, and a formal definition of the semantics of the reactive behavior of Monaco programs in the form of labeled transition systems.
Monaco—A domain-specific language solution for reactive process control programming with hierarchical components
S1477842413000134
In order to improve the effectiveness of fault localization, researchers are interested in test-suite reduction to provide suitable test-suite inputs. Different test-suite reduction approaches have been proposed. However, the results are usually not ideal. Reducing the test-suite improperly or excessively can even negatively affect fault-localization effectiveness. In this paper, we propose a two-step test-suite reduction approach to remove the test cases which have little or no effect on fault localization, and to improve the distribution evenness of the concrete execution paths of test cases. This approach consists of coverage-matrix-based reduction and path-vector-based reduction, so it analyzes not only the test-case coverage but also the concrete path information. We design and implement experiments to verify the effect of our approach. The experimental results show that our reduced test-suite can improve fault-localization effectiveness. On average, our approach reduces the size of a test-suite by 47.87% (for the Siemens programs) and 23.03% (for the space program). At the same time, on average our approach improves fault-localization effectiveness by 2.12 on the Siemens programs and 0.13 on the space program with the Tarantula approach.
A test-suite reduction approach to improving fault-localization effectiveness
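A toy illustration of the first step described above (coverage-matrix-based reduction), assuming a hypothetical statement-coverage mapping: greedily keep test cases until everything the original suite covers is still covered. The path-vector-based second step of the paper is not reproduced.

```python
# Sketch: greedy coverage-preserving reduction of a test suite.
def greedy_reduce(coverage):
    """coverage: dict mapping test name -> set of covered statement ids."""
    required = set().union(*coverage.values())
    kept, covered = [], set()
    while covered != required:
        # pick the test that adds the most not-yet-covered statements
        best = max(coverage, key=lambda t: len(coverage[t] - covered))
        kept.append(best)
        covered |= coverage[best]
    return kept

# hypothetical statement coverage for four test cases
suite = {
    "t1": {1, 2, 3},
    "t2": {2, 3},
    "t3": {4, 5},
    "t4": {3, 4},
}
print(greedy_reduce(suite))   # ['t1', 't3'] already covers every statement
```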
S1477842413000146
Method inlining is one of the most important optimizations in method-based just-in-time (JIT) compilers. It widens the compilation scope and therefore allows optimizing multiple methods as a whole, which increases the performance. However, if method inlining is used too frequently, the compilation time increases and too much machine code is generated. This has negative effects on the performance. Trace-based JIT compilers only compile frequently executed paths, so-called traces, instead of whole methods. This may result in faster compilation, less generated machine code, and better optimized machine code. In previous work, we implemented a trace recording infrastructure and a trace-based compiler for Java™ by modifying the Java HotSpot VM. Based on this work, we evaluate the effect of trace inlining on the performance and the amount of generated machine code. Trace inlining has several major advantages when compared to method inlining. First, trace inlining is more selective than method inlining, because only frequently executed paths are inlined. Second, the recorded traces may capture information about virtual calls, which simplifies inlining. A third advantage is that trace information is context sensitive so that different method parts can be inlined depending on the specific call site. These advantages allow more aggressive inlining while the amount of generated machine code is still reasonable. We evaluate several inlining heuristics on the benchmark suites DaCapo 9.12 Bach, SPECjbb2005, and SPECjvm2008 and show that our trace-based compiler achieves an up to 51% higher peak performance than the method-based Java HotSpot client compiler. Furthermore, we show that the large compilation scope of our trace-based compiler has a positive effect on other compiler optimizations such as constant folding or null check elimination.
Context-sensitive trace inlining for Java
S1477842413000158
Corecursion is the ability of defining a function that produces some infinite data in terms of the function and the data itself, as supported by lazy evaluation. However, in languages such as Haskell strict operations fail to terminate even on infinite regular data, that is, cyclic data. Regular corecursion is naturally supported by coinductive Prolog, an extension where predicates can be interpreted either inductively or coinductively, that has proved to be useful for formal verification, static analysis and symbolic evaluation of programs. In this paper we use the meta-programming facilities offered by Prolog to propose extensions to coinductive Prolog aiming to make regular corecursion more expressive and easier to program with. First, we propose a new interpreter to solve the problem of non-terminating failure as experienced with the standard semantics of coinduction (as supported, for instance, in SWI-Prolog). Another problem with the standard semantics is that predicates expressed in terms of existential quantification over a regular term cannot be directly defined by coinduction; to this aim, we introduce finally clauses to allow more flexibility in coinductive definitions. Then we investigate the possibility of annotating arguments of coinductive predicates, to restrict coinductive definitions to a subset of the arguments; this allows more efficient definitions and further enhances the expressive power of coinductive Prolog. We investigate the effectiveness of such features by showing different example programs manipulating several kinds of cyclic values, ranging from automata and context-free grammars to graphs and repeating decimals; the examples show how computations on cyclic values can be expressed with concise and relatively simple programs. The semantics defined by these vanilla meta-interpreters are an interesting starting point for a more mature design and implementation of coinductive Prolog.
Regular corecursion in Prolog
S147784241300016X
Polymorphic programming languages have been adapted for constructing distributed access control systems, where a program represents a proof of eligibility according to a given policy. As a security requirement, it is typically stated that the programs of such languages should satisfy noninterference. However, this property has not been defined and proven semantically. In this paper, we first propose a semantics based on Henkin models for a predicative polymorphic access control language based on lambda-calculus. A formal semantic definition of noninterference is then proposed through logical relations. We prove a type soundness theorem which states that any well-typed program of our language meets the noninterference property defined in this paper. In this way, it is guaranteed that access requests from an entity do not interfere with those from unrelated or more trusted entities.
Noninterference in a predicative polymorphic calculus for access control
S1477842413000183
We present a new set of algorithms for performing arithmetic computations on the set of natural numbers, represented as ordered rooted binary trees. We show formally that these algorithms are correct and discuss their time and space complexity in comparison to traditional arithmetic operations on bitstrings. Our binary tree algorithms follow the structure of a simple type language, similar to that of Gödel's System T. Generic implementations using Haskell's type class mechanism are shared between instances shown to be isomorphic to the set of natural numbers. This representation independence is illustrated by instantiating our computational framework to the language of balanced parenthesis languages. The self-contained source code of the paper is available at http://logic.cse.unt.edu/tarau/research/2012/jtypes.hs.
Binary trees as a computational framework
S1477842413000286
JavaScript emerges today as one of the most important programming languages for the development of client-side web applications. Therefore, it is essential that browsers be able to execute JavaScript programs efficiently. However, the dynamic nature of this programming language makes it very challenging to achieve this much-needed efficiency. In this paper we propose parameter-based value specialization as a way to improve the quality of the code produced by JIT engines. We have empirically observed that almost 60% of the JavaScript functions found in the world's 100 most popular websites are called only once, or are called with the same parameters. Capitalizing on this observation, we adapt a number of classic compiler optimizations to specialize code based on the runtime values of a function's actual parameters. We have implemented the techniques proposed in this paper in IonMonkey, an industrial-quality JavaScript JIT compiler developed at the Mozilla Foundation. Our experiments, run across three popular JavaScript benchmarks, SunSpider, V8 and Kraken, show that, in spite of its highly speculative nature, our optimization pays for itself. As an example, we have been able to speed up V8 by 4.83%, and to reduce the size of its generated native code by 18.84%.
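Outside a JIT, the gist of parameter-based specialization can be mimicked with a decorator that, on the first call, freezes the observed argument values into a specialized closure and reuses it while subsequent calls keep passing the same values. This is a toy Python sketch of the idea only (the name specialize_on_first_call is invented here) and bears no relation to IonMonkey's actual code generation.

    import functools

    def specialize_on_first_call(factory):
        """factory(*args) returns a closure specialized for those argument values.

        The specialized closure is cached per argument tuple and reused as long
        as callers keep passing the same values, mimicking code specialized on
        runtime parameter values."""
        cache = {}
        @functools.wraps(factory)
        def wrapper(*args):
            if args not in cache:
                cache[args] = factory(*args)
            return cache[args]()
        return wrapper

    @specialize_on_first_call
    def scaled_sum(factor, limit):
        # Everything depending only on the parameters is precomputed here,
        # playing the role of constant folding on runtime parameter values.
        table = [factor * i for i in range(limit)]
        def run():
            return sum(table)
        return run

    print(scaled_sum(3, 1000))   # builds and caches the specialized closure
    print(scaled_sum(3, 1000))   # reuses it: the table is not rebuilt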
Just-in-time value specialization
S1477842414000025
Smart cards are portable integrated devices that store and process data. Speed, security and portability properties enable smart cards to have widespread usage in various fields including telecommunication, transportation and the credit card industry. However, the development of smart card applications is a difficult task due to hardware and software constraints. The necessity of knowing both a very low-level communication protocol and specific hardware causes smart card software development to be a big challenge for developers. The resulting code tends to be error-prone and hard to debug because of the limited memory resources. Hence, in this study, we introduce a model driven architecture which aims to facilitate smart card software development by both providing an easy design of smart card systems and automatic generation of the required smart card software from the system models. In contrast to previous work, the study here contributes to the field by both providing various smart card metamodels in different abstraction layers and defining model-to-model transformations between the instances of these metamodels in order to support the realization of the same system on different smart card platforms. Applicability of the proposed methodology is shown for rapid and efficient application development in two major smart card frameworks: Java Card and ZeitControl Basic Card. Lessons learned during the industrial usage of the architecture are also reported in the paper. Finally, we discuss how the components of the architecture can be integrated in order to provide a domain-specific language for smart card software.
A model driven architecture for the development of smart card software
S1477842414000037
The ability to annotate code and, in general, the capability to attach arbitrary meta-data to portions of a program are features that have become more and more common in programming languages. Annotations in Java make it possible to attach custom, structured meta-data to declarations of classes, fields and methods. However, the mechanism has some limits: annotations can only decorate declarations and their instantiation can only be resolved statically. With this work, we propose an extension to Java (named @Java) with a richer annotation model, supporting code block and expression annotations, as well as dynamically evaluated members. In other words, in our model, the granularity of annotations extends to the statement and expression level and annotations may hold the result of runtime-evaluated expressions. Our extension to the Java annotation model is twofold: (i) we introduced block and expression annotations and (ii) we allow every annotation to hold dynamically evaluated values. Our implementation also provides an extended reflection API to support inspection and retrieval of our enhanced annotations.
@Java: Bringing a richer annotation model to Java
S1477842414000049
Symbolic computation is an important area of both Mathematics and Computer Science, with many large computations that would benefit from parallel execution. Symbolic computations are, however, challenging to parallelise as they have complex data and control structures, and both dynamic and highly irregular parallelism. The SymGridPar framework (SGP) has been developed to address these challenges on small-scale parallel architectures. However, the multicore revolution means that the number of cores and the number of failures are growing exponentially, and that the communication topology is becoming increasingly complex. Hence, an improved parallel symbolic computation framework is required. This paper presents the design and initial evaluation of SymGridPar2 (SGP2), a successor to SymGridPar that is designed to provide scalability onto 10^5 cores, and hence also provide fault tolerance. We present the SGP2 design goals, principles and architecture. We describe how scalability is achieved using layering and by allowing the programmer to control task placement. We outline how fault tolerance is provided by supervising remote computations, and outline higher-level fault tolerance abstractions. We describe the SGP2 implementation status and development plans. We report the scalability and efficiency, including weak scaling to about 32,000 cores, and investigate the overheads of tolerating faults for simple symbolic computations.
Reliable scalable symbolic computation: The design of SymGridPar2
S1477842414000062
The XSLT language is a key technology for developing software that manipulates data encoded in XML, a versatile formalism widely adopted for information description and exchange. This motivates the adoption of formal techniques to certify the correctness (with respect to the expected output) and robustness (e.g., tolerance to malformed inputs) of XSLT code. Unfortunately, such code cannot be validated using only static approaches (i.e., without executing it), due to the complexity of the XSLT formalism. In this paper we show how a software verification technology, namely model checking, can be adapted to obtain an effective and easy-to-use XSLT validation framework. The core of the presented methodology is the XSLToMurphi algorithm, which is able to build a formal model of an XSLT transformation, suitable to be verified through the CMurphi tool.
Model checking XSL transformations
S1477842414000323
We present a sparse evaluation technique that is effectively applicable to a set of elaborate semantic-based static analyses. Existing sparse evaluation techniques are effective only when the underlying analyses have comparably low precision. For example, if a pointer analysis's precision is not affected by numeric statements like x≔1, then existing sparse evaluation techniques can remove the statement; otherwise, the statement cannot be removed. Our technique, which is a fine-grained sparse evaluation technique, is effectively applicable even to elaborate analyses. A key insight of our technique is that, even though a statement is relevant to an analysis, analyzing the statement typically involves only a tiny subset of its input abstract memory, and most of it is irrelevant. By exploiting this sparsity, our technique transforms the original analysis into a form that does not involve the fine-grained irrelevant semantic behaviors. We formalize our technique within the abstract interpretation framework. In experiments with a C static analyzer, our technique improved the analysis speed by 14× on average.
A sparse evaluation technique for detailed semantic analyses
S1477842414000335
The rise of mobile computing platforms has enabled a new class of applications: mobile applications that interact with peer applications running on neighbouring phones. Developing such applications is challenging because of problems inherent to concurrent and distributed programming, and because of problems inherent to mobile networks, such as the fact that wireless network connectivity is often intermittent, and the lack of centralized infrastructure to coordinate the peers. We present AmbientTalk, a distributed programming language designed specifically to develop mobile peer-to-peer applications. AmbientTalk aims to make it easy to develop mobile applications that are resilient to network failures by design. We describe the language's concurrency and distribution model in detail, as it lies at the heart of AmbientTalk's support for responsive, resilient application development. The model is based on communicating event loops, itself a descendant of the actor model. We contribute a small-step operational semantics for this model and use it to establish data race and deadlock freedom.
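A minimal flavour of the communicating-event-loops model can be sketched in Python with one thread and one mailbox per actor and purely asynchronous sends; this is only an illustrative sketch of the general model (the class names are invented here), not AmbientTalk's semantics.

    import queue
    import threading
    import time

    class Actor:
        """One event loop per actor: messages are processed one at a time,
        so an actor's state is never accessed concurrently."""
        def __init__(self):
            self._mailbox = queue.Queue()
            threading.Thread(target=self._loop, daemon=True).start()

        def send(self, selector, *args):
            self._mailbox.put((selector, args))   # asynchronous, never blocks the sender

        def _loop(self):
            while True:
                selector, args = self._mailbox.get()
                getattr(self, selector)(*args)

    class Counter(Actor):
        def __init__(self):
            self.count = 0                        # initialise state before the loop starts
            super().__init__()
        def increment(self):
            self.count += 1
        def report(self, reply_to):
            reply_to.send("show", self.count)

    class Printer(Actor):
        def show(self, value):
            print("count =", value)

    counter, printer = Counter(), Printer()
    for _ in range(3):
        counter.send("increment")
    counter.send("report", printer)
    time.sleep(0.1)                               # let the daemon event loops drain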
AmbientTalk: programming responsive mobile peer-to-peer applications with actors
S1477842414000347
It is well-known that the Dolev–Yao adversary is a powerful adversary. Besides acting as the network, intercepting, decomposing, composing and sending messages, he can remember as much information as he needs. That is, his memory is unbounded. We recently proposed a weaker Dolev–Yao like adversary, which also acts as the network, but whose memory is bounded. We showed that this Bounded Memory Dolev–Yao adversary, when given enough memory, can carry out many existing protocol anomalies. In particular, the known anomalies arise for bounded memory protocols, where although the total number of sessions is unbounded, there are only a bounded number of concurrent sessions and the honest participants of the protocol cannot remember an unbounded number of facts or an unbounded number of nonces at a time. This led us to the question of whether it is possible to infer an upper-bound on the memory required by the Dolev–Yao adversary to carry out an anomaly from the memory restrictions of the bounded protocol. This paper answers this question negatively (Theorem 8).
Bounded memory protocols
S1477842414000359
As real-time systems increase in complexity to provide more and more functionality and perform more demanding computations, the problem of statically analyzing the Worst-Case Execution Time (WCET) bound of real-time programs is becoming more and more time-consuming and imprecise. The problem stems from the fact that with increasing program size, the number of potentially relevant program and hardware states that need to be considered during WCET analysis increases as well. However, only a relatively small portion of the program actually contributes to the final WCET bound. Large parts of the program are thus irrelevant and are analyzed in vain. In the best case this only leads to increased analysis time. Very often, however, the analysis of irrelevant program parts interferes with the analysis of those program parts that turn out to be relevant. We explore a novel technique based on graph pruning that promises to reduce the analysis overhead and, at the same time, increase the analysis’ precision. The basic idea is to eliminate those program parts from the analysis problem that are known to be irrelevant for the final WCET bound. This reduces the analysis overhead, since only a subset of the program and hardware states have to be tracked. Consequently, more aggressive analysis techniques may be applied, effectively reducing the overestimation of the WCET. As a side-effect, interference from irrelevant program parts is eliminated, e.g., on addresses of memory accesses, on loop bounds, or on the cache or processor state. First experiments using a commercial WCET analysis tool show that our approach is feasible in practice and leads to reductions of up to 12% when a standard IPET approach is used for the analysis.
Refinement of worst-case execution time bounds by graph pruning
S1477842415000020
J% is an extension of the Java programming language that efficiently supports the integration of domain-specific languages. In particular, J% allows the embedding of domain-specific language code into Java programs in a syntax-checked and type-safe manner. This paper presents J%'s support for the SQL language. J% checks the syntax and semantics of SQL statements at compile time. It supports query validation against a database schema or through execution on a live database server. The J% compiler generates code that uses standard JDBC API calls, enhancing runtime efficiency and security against SQL injection attacks.
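The kind of guarantee J% gives at compile time can be approximated at run time in plain Python by validating a query against a known schema and always binding user input through placeholders. The sketch below, which uses the standard sqlite3 module and an invented checked_select helper, only illustrates the underlying idea, not the J% compiler.

    import sqlite3

    SCHEMA = {"users": {"id", "name", "age"}}   # the schema queries are checked against

    def checked_select(conn, table, columns, where_column, value):
        """Reject unknown tables/columns up front and bind the value via a
        placeholder, so no user input is ever spliced into the SQL text."""
        if table not in SCHEMA:
            raise ValueError(f"unknown table {table!r}")
        unknown = (set(columns) | {where_column}) - SCHEMA[table]
        if unknown:
            raise ValueError(f"unknown column(s): {unknown}")
        sql = f"SELECT {', '.join(columns)} FROM {table} WHERE {where_column} = ?"
        return conn.execute(sql, (value,)).fetchall()

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT, age INTEGER)")
    conn.execute("INSERT INTO users VALUES (1, 'ada', 36)")

    print(checked_select(conn, "users", ["name"], "id", 1))           # [('ada',)]
    print(checked_select(conn, "users", ["name"], "id", "1 OR 1=1"))  # harmless: bound as data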
A type-safe embedding of SQL into Java using the extensible compiler framework J%
S1477842415000032
Adaptable Parsing Expression Grammar (APEG) is a formal method for defining the syntax of programming languages. It provides an on-the-fly mechanism to perform modifications of the syntax of the language during parsing time. The primary goal of this dynamic mechanism is the formal specification and the automatic parser generation for extensible languages. In this paper, we show how APEG can be used for the definition of the extensible languages SugarJ and Fortress, clarifying many aspects of the syntax of these languages. We also show that the mechanism for on-the-fly modification of syntax rules can be useful for defining grammars in a modular way, implementing almost all types of language composition in the context of specification of extensible languages.
An on-the-fly grammar modification mechanism for composing and defining extensible languages
S1477842415000044
It is well known that today's compilers and state-of-the-art libraries have three major drawbacks. First, the compiler sub-problems are optimized separately; this is not efficient because optimizing the sub-problems separately gives a different schedule for each sub-problem, and these schedules cannot coexist, as refining one causes the degradation of another. Second, they take into account only part of the specific algorithm's information. Third, they take into account only a few hardware architecture parameters. These approaches cannot give an optimal solution. In this paper, a new methodology/pre-compiler is introduced, which speeds up loop kernels by overcoming the above problems. This methodology solves four of the major scheduling sub-problems together as one problem and not separately; these are the sub-problems of finding the schedules with the minimum numbers of (i) L1 data cache accesses, (ii) L2 data cache accesses, (iii) main memory data accesses, and (iv) addressing instructions. First, the exploration space (possible solutions) is found according to the algorithm's information, e.g. array subscripts. Then, the exploration space is decreased by orders of magnitude by applying constraint propagation to the software and hardware parameters. We take the C code and the memory architecture parameters as input and automatically produce new, faster C code; this code cannot be obtained by applying the existing compiler transformations to the original code. The proposed methodology has been evaluated for five well-known algorithms on both general-purpose and embedded processors; it is compared with the gcc and clang compilers and also with iterative compilation.
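One classic source-level transformation in this space is loop tiling, which restructures a kernel so that the working set of the innermost loops fits in the data cache. The Python sketch below shows the shape of the transformation on matrix multiplication; it is a generic textbook illustration under an assumed tile size, not the schedule produced by the paper's methodology.

    def matmul_tiled(A, B, tile=32):
        """C = A @ B with loops blocked so each tile of A, B and C is reused
        while it is still cache-resident (illustrative, pure Python)."""
        n, k, m = len(A), len(B), len(B[0])
        C = [[0.0] * m for _ in range(n)]
        for ii in range(0, n, tile):
            for kk in range(0, k, tile):
                for jj in range(0, m, tile):
                    for i in range(ii, min(ii + tile, n)):
                        for p in range(kk, min(kk + tile, k)):
                            a = A[i][p]
                            row_b, row_c = B[p], C[i]
                            for j in range(jj, min(jj + tile, m)):
                                row_c[j] += a * row_b[j]
        return C

    # Tiny smoke test against the textbook definition.
    A = [[1.0, 2.0], [3.0, 4.0]]
    B = [[5.0, 6.0], [7.0, 8.0]]
    assert matmul_tiled(A, B) == [[19.0, 22.0], [43.0, 50.0]]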
A methodology for speeding up loop kernels by exploiting the software information and the memory architecture
S1477842415000056
Reuse in programming language development is an open research problem. Many authors have proposed frameworks for modular language development. These frameworks focus on maximizing code reuse, providing primitives for componentizing language implementations. There is also an open debate on combining feature-orientation with modular language development. Feature-oriented programming is a vision of computer programming in which features can be implemented separately and then combined to build a variety of software products. However, even though feature-orientation and modular programming are strongly connected, modular language development frameworks are not usually meant primarily for feature-oriented language definition. In this paper we present a model of language development that puts feature implementation at the center, and describe its implementation in the Neverlang framework. The model has been evaluated through several language implementations: in this paper, a state machine language is used as a means of comparison with other frameworks, and a JavaScript interpreter implementation is used to further illustrate the benefits that our model provides.
Neverlang: A framework for feature-oriented language development
S1477842415000068
The definition of a metamodel that precisely captures domain knowledge for effective know-how capitalization is a challenging task. A major obstacle for domain experts who want to build a metamodel is that they must master two radically different languages: an object-oriented, MOF-compliant, modeling language to capture the domain structure and first order logic (the Object Constraint Language) for the definition of well-formedness rules. However, there are no guidelines to assist the conjunct usage of both paradigms, and few tools support it. Consequently, we observe that most metamodels have only an object-oriented domain structure, leading to inaccurate metamodels. In this paper, we perform the first empirical study, which analyzes the current state of practice in metamodels that actually use logical expressions to constrain the structure. We analyze 33 metamodels including 995 rules coming from industry, academia and the Object Management Group, to understand how metamodelers articulate both languages. We implement a set of metrics in the OCLMetrics tool to evaluate the complexity of both parts, as well as the coupling between both. We observe that all metamodels tend to have a small, core subset of concepts, which are constrained by most of the rules, in general the rules are loosely coupled to the structure and we identify the set of OCL constructs actually used in rules.
An analysis of metamodeling practices for MOF and OCL
S147784241500007X
In this paper, we compose six different Python and Prolog VMs into four pairwise compositions: one using C interpreters, one running on the JVM, one using meta-tracing interpreters, and one using a C interpreter and a meta-tracing interpreter. We show that programs that cross the language barrier frequently execute faster in a meta-tracing composition, and that meta-tracing imposes a significantly lower overhead on composed programs relative to mono-language programs.
Approaches to interpreter composition
S1477842415000081
Protected module architectures (PMAs) are isolation mechanisms of emerging processors that provide security building blocks for modern software systems. Reasoning about these building blocks means reasoning about elaborate assembly code, which can be very complex due to the loose structure of the code. One way to overcome this complexity is providing the code with a well-structured semantics. This paper presents one such semantics, namely a fully abstract trace semantics, for an assembly language enhanced with PMA. The trace semantics represents the behaviour of protected assembly code with simple abstractions, unburdened by low-level details, at the maximum degree of precision. Furthermore, it captures the capabilities of attackers to protected code and simplifies the formulation of a secure compiler targeting PMA-enhanced assembly language.
Fully abstract trace semantics for protected module architectures
S1477842415000238
The multi-core trend is widening the gap between programming languages and hardware. Taking parallelism into account in programs is necessary to improve performance. Unfortunately, current mainstream programming languages fail to provide suitable abstractions to do so. The most common pattern relies on the use of mutexes to ensure mutual exclusion between concurrent accesses to a shared memory. However, this model is error-prone and scales poorly due to a lack of modularity. Recent research proposes atomic sections as an alternative. The user simply delimits portions of code that should be free from interference. The responsibility for ensuring interference freedom is left either to the compiler or to the run-time system. In order to provide enough modularity, it is necessary that atomic sections can be nested and that threads can be forked inside an atomic section. In this paper we focus on the semantics of programming languages providing these features. More precisely, without being tied to a specific programming language, we consider program traces satisfying some basic well-formedness conditions. Our main contribution is the precise definition of atomicity and well-synchronisation, and the proof that the latter implies the strong form of the former. A formalisation of our results in the Coq proof assistant is described.
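A deliberately naive way to realize nestable atomic sections, sufficient to illustrate the programming model though not its scalable implementations, is a single process-wide reentrant lock exposed as a context manager. The sketch below is a Python illustration with invented names, not the semantics studied in the paper.

    import threading

    _global_lock = threading.RLock()   # reentrant, so atomic sections can nest

    class atomic:
        """`with atomic():` marks a block that must run without interference.
        Nesting works because the same thread may re-acquire the RLock."""
        def __enter__(self):
            _global_lock.acquire()
        def __exit__(self, *exc):
            _global_lock.release()
            return False

    balance = {"a": 100, "b": 0}

    def transfer(src, dst, amount):
        with atomic():                 # outer atomic section
            with atomic():             # nested section, e.g. from a called helper
                balance[src] -= amount
            balance[dst] += amount

    threads = [threading.Thread(target=transfer, args=("a", "b", 10)) for _ in range(10)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    assert balance == {"a": 0, "b": 100}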
A formal semantics of nested atomic sections with thread escape
S147784241500024X
The Trapezoid Step Functions (TSF) domain is introduced in order to approximate continuous functions by a finite sequence of trapezoids, adopting linear functions to abstract the upper and the lower bounds of a continuous variable in each time slot. The lattice structure of TSF is studied, showing how to build and compute a sound abstraction of a given continuous function. Experimental results underline the effectiveness of the approach in terms of both precision and efficiency with respect to the domain of Interval Valued Step Functions (IVSF).
The abstract domain of Trapezoid Step Functions
S1477842415000263
Models have been widely used in the information system development process. Models are not just means for system analysis and documentation. They may also be transformed into system implementation, primarily program code. Generated program code of screen forms and transaction programs mainly implements generic functionalities that can be expressed by simple retrieval, insertion, update, or deletion operations over database records. Besides the program code of generic functionalities, each application usually includes program code for specific business logic that represents application-specific functionalities, which may include complex calculations, as well as a series of database operations. There is a lack of domain-specific and tool-supported techniques for specification of such application-specific functionalities at the level of platform-independent models (PIMs). In this paper, we propose an approach and a domain-specific language (DSL), named IIS⁎CFuncLang, aimed at enabling a complete specification of application-specific functionalities at the PIM level. We have developed algorithms for transformation of IIS⁎CFuncLang specifications into executable program code, such as PL/SQL program code. In order to support specification of application-specific functionalities using IIS⁎CFuncLang, we have also developed appropriate tree-based and textual editors. The language, editors, and the transformations are embedded into a Model-Driven Software Development tool, named Integrated Information Systems CASE (IIS⁎Case). IIS⁎Case supports platform-independent design and automated prototyping of information systems, which allows us to verify and test our approach in practice.
A DSL for modeling application-specific functionalities of business applications
S1477842415000275
Binary translation is an important technique for porting programs as it allows binary code for one platform to execute on another. It is widely used in virtual machines and emulators. However, implementing a correct (and efficient) binary translator is still very challenging because many delicate details must be handled smartly. Manually identifying mistranslated instructions in an application program is difficult, especially when the application is large. Therefore, automatic validation tools are needed urgently to uncover hidden problems in a binary translator. We developed a new validation tool for binary translators. In our validation tool, the original binary code and the translated binary code run simultaneously. Both versions of the binary code continuously send their architecture states and the stored values, which are the values stored into memory cells, to a third process, the validator. Since most mistranslated instructions will result in wrong architecture states during execution, our validator can catch most mistranslated instructions emitted by a binary translator by comparing the corresponding architecture states. Corresponding architecture states may differ due to (1) translation errors, (2) different (but correct) memory layouts, and (3) return values of certain system calls. The need to differentiate the three sources of differences makes comparing architecture states very difficult, if not impossible. In our validator, we take special care to make memory layouts exactly the same and make the corresponding system calls always return exactly the same values in the original and in the translated binaries. Therefore, any differences in the corresponding architecture states indicate mistranslated instructions emitted by the binary translator. Besides solving the architecture-state-comparison problems, we also propose several methods to speed up the automatic validation. The first is the validation-block method, which reduces the number of validations while keeping the accuracy of instruction-level validation. The second is quick validation, which provides extremely fast validation at the expense of less accurate error information. Our validator can be applied to different binary translators. In our experiment, the validator has successfully validated programs translated by static, dynamic, and hybrid binary translators.
Automatic validation for binary translation
S1477842415000287
Model transformation is a key concept in model-driven software engineering. The definition of model transformations is usually based on meta-models describing the abstract syntax of languages. While meta-models are thereby able to abstract from superfluous details of concrete syntax, they often lose structural information inherent in languages, like information on model elements always occurring together in particular shapes. As a consequence, model transformations cannot naturally re-use language structures, thus leading to unnecessary complexity in their development as well as in quality assurance. In this paper, we propose a new approach to model transformation development which makes it possible to simplify the developed transformations and improve their quality by exploiting the languages' structures. The approach is based on context-free graph grammars and transformations defined by pairing productions of source and target grammars. We show that such transformations have important properties: they terminate and are sound, complete, and deterministic.
Grammar-based model transformations: Definition, execution, and quality properties
S147784241500041X
In a reconfigurable system, the response to contextual or internal change may trigger reconfiguration events which, in turn, activate scripts that change the system's architecture at runtime. To be safe, however, such reconfigurations are expected to obey the fundamental principles originally specified by the system's architect. This paper introduces an approach to ensure that such principles are observed along reconfigurations by verifying them against concrete specifications in a suitable logic. Architectures, reconfiguration scripts, and principles are specified in Archery, an architectural description language with formal semantics. Principles are encoded as constraints, which become formulas of a two-layer graded hybrid logic, where the upper layer restricts reconfigurations, and the lower layer constrains the resulting configurations. Constraints are verified by translating them into logic formulas, which are interpreted over models derived from Archery specifications of architectures and reconfigurations. Suitable notions of bisimulation and refinement, to which the architect may resort to compare configurations, are given, and their relationship with modal validity is discussed.
On the verification of architectural reconfigurations
S1477842415000421
One of the main components of the Mizar project is the Mizar language, a computer language invented to reflect the natural language of mathematics. From the very beginning various linguistic constructions and grammar rules which enable us to write texts which resemble classical mathematical papers have been developed and implemented in the language. The Mizar Mathematical Library is a repository of computer-verified mathematical texts written in the Mizar language. Besides well-known and important theorems, the library contains series of some quite technical lemmas describing some properties formulated for different values of numbers. For example the sequence of lemmas for n being Nat st n <=1 holds n=0 or n=1; for n being Nat st n <=2 holds n=0 or n=1 or n=2; for n being Nat st n <=3 holds n=0 or n=1 or n=2 or n=3; which for a long time contained 13 such formulae. In this paper, we present an extension of the Mizar language – an ellipsis that is used to define flexary logical connectives. We define flexary conjunction and flexary disjunction, which can be understood as generalization of classical conjunction and classical disjunction, respectively. The proposed extension enables us to get rid of lists of such lemmas and to formulate them as single theorems, e.g. for m,n being Nat st n <=m holds n=0 or ... or n=m; covering all cases between the bounds 0 and m in this case. Moreover, a specific inference rule to process flexary formulae, formulae using flexary connectives, is introduced. We describe how ellipses are incorporated into the Mizar language and how they are verified by the Mizar proof checker.
Flexary connectives in Mizar
S1477842415000500
We present a technique to combine deep and shallow embedding in the context of compiling embedded languages in order to provide the benefits of both techniques. When compiling embedded languages it is natural to use an abstract syntax tree to represent programs. This is known as a deep embedding and it is a rather cumbersome technique compared to other forms of embedding, typically leading to more code and being harder to extend. In shallow embeddings, language constructs are mapped directly to their semantics which yields more flexible and succinct implementations. But shallow embeddings are not well-suited for compiling embedded languages. Our technique uses a combination of deep and shallow embedding, which helps keeping the deep embedding small and makes extending the embedded language much easier. The technique also has some unexpected but welcome secondary effects. It provides fusion of functions to remove intermediate results for free without any additional effort. It also helps us to give the embedded language a more natural programming interface.
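The combination can be sketched in Python with a tiny array DSL: the deep part is a single constructor pairing a length with an index function, while operations such as map and zip are shallow wrappers around it. Because composed index functions are only evaluated when the array is finally materialized, intermediate arrays disappear, which is the "fusion for free" effect. The names below (Pull, vrange, vmap, vzip, materialize) are invented for this sketch and are not the paper's Haskell library.

    from dataclasses import dataclass
    from typing import Callable

    @dataclass(frozen=True)
    class Pull:
        """The whole deep embedding: a length plus an index function."""
        length: int
        index: Callable[[int], float]

    def vrange(n):                      # shallow smart constructor
        return Pull(n, lambda i: float(i))

    def vmap(f, arr):                   # shallow: just composes index functions
        return Pull(arr.length, lambda i: f(arr.index(i)))

    def vzip(f, a, b):                  # shallow: pointwise combination
        n = min(a.length, b.length)
        return Pull(n, lambda i: f(a.index(i), b.index(i)))

    def materialize(arr):               # the only place elements are computed
        return [arr.index(i) for i in range(arr.length)]

    xs = vrange(5)
    ys = vmap(lambda x: x * x, vmap(lambda x: x + 1, xs))   # no intermediate list is built
    zs = vzip(lambda a, b: a + b, xs, ys)
    assert materialize(zs) == [1.0, 5.0, 11.0, 19.0, 29.0]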
Combining deep and shallow embedding of domain-specific languages
S1477842415000512
We extend an existing first-order typing system for strictness analysis to the fully higher-order case, covering both the derivation system and the inference algorithm. The resulting strictness typing system has expressive capabilities far beyond that of traditional strictness analysis systems. This extension is developed with the explicit aim of formally proving soundness of higher-order strictness typing with respect to a natural operational semantics. A key aspect of our approach is the introduction of a proof assistant at an early stage, namely during development of the proof. As such, the theorem prover aids the design of the language theoretic concepts. The new results in combination with their formal proof can be seen as a case study towards the achievement of the long term PoplMark Challenge. The proof framework developed for this case study can furthermore be used in other typing system case studies.
Derivation and inference of higher-order strictness types
S1477842415000524
In a programming classroom for beginners, a delicate balance must be struck between teaching the design, implementation, and testing fundamentals of programming and the need for students to find their first programming course enjoyable. A course that focuses solely on the fundamentals is not likely to nourish the excitement a student may have for Computer Science. A course that focuses solely on making programming fun is not likely to have students walk away with a solid grasp of the fundamentals. A very successful approach to strike this balance uses functional video games to motivate the need to learn principles of program design and Computer Science in a context that is of interest and fun for most students. Such an approach has successfully engaged students in learning design and implementation principles using primitive data, finite compound data, structural recursion for compound data of arbitrary size, and abstraction. This article explores how to use a functional video game approach to engage beginning students in problem solving that employs generative and accumulative recursion while at the same time reinforcing the lessons on structural recursion and abstraction. In addition to these two new forms of recursion, beginning students are also introduced to depth-first searching, breadth-first searching, heuristic-based searching, and the use of randomness. The article uses the N-puzzle problem to illustrate how all these topics are seamlessly addressed in the beginner's classroom while keeping student enthusiasm high, as evidenced by student feedback.
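As a concrete taste of the searching lessons mentioned above, a breadth-first solver for the 8-puzzle fits in a few lines of Python; this generic sketch (with invented function names) is not the course's student-facing, video-game-based code.

    from collections import deque

    GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)   # 0 is the blank; the board is a 3x3 grid, row-major

    def neighbors(state):
        """All states reachable by sliding one tile into the blank."""
        i = state.index(0)
        row, col = divmod(i, 3)
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            r, c = row + dr, col + dc
            if 0 <= r < 3 and 0 <= c < 3:
                j = r * 3 + c
                board = list(state)
                board[i], board[j] = board[j], board[i]
                yield tuple(board)

    def bfs_solve(start):
        """Breadth-first search: returns the shortest sequence of states to the goal."""
        frontier = deque([start])
        parent = {start: None}
        while frontier:
            state = frontier.popleft()
            if state == GOAL:
                path = []
                while state is not None:
                    path.append(state)
                    state = parent[state]
                return path[::-1]
            for nxt in neighbors(state):
                if nxt not in parent:
                    parent[nxt] = state
                    frontier.append(nxt)
        return None                       # unreachable configuration

    start = (1, 2, 3, 4, 5, 6, 0, 7, 8)   # two moves away from the goal
    print(len(bfs_solve(start)) - 1)      # -> 2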
Generative and accumulative recursion made fun for beginners
S1477842415000536
Imprecise manipulation of source code (semi-parsing) is useful for tasks such as robust parsing, error recovery, lexical analysis, and rapid development of parsers for data extraction. An island grammar precisely defines only a subset of a language syntax (islands), while the rest of the syntax (water) is defined imprecisely. Usually water is defined as the negation of islands. Albeit simple, such a definition of water is naïve and impedes composition of islands. When developing an island grammar, sooner or later a language engineer has to create water tailored to each individual island. Such an approach is fragile, because water can change with any change of a grammar. It is time-consuming, because water is defined manually by an engineer and not automatically. Finally, an island surrounded by water cannot be reused because water has to be defined for every grammar individually. In this paper we propose a new technique of island parsing — bounded seas. Bounded seas are composable, robust, reusable and easy to use because island-specific water is created automatically. Our work focuses on applications of island parsing to data extraction from source code. We have integrated bounded seas into a parser combinator framework as a demonstration of their composability and reusability.
Bounded seas
S1477842415000548
Multiview modeling languages like UML are a very powerful tool to deal with the ever increasing complexity of modern software systems. By splitting the description of a system into different views—the diagrams in the case of UML—system properties relevant for a certain development activity are highlighted while other properties are hidden. This multiview approach has many advantages for the human modeler, but at the same time it is very susceptible to various kinds of defects that may be introduced during the development process. Besides defects which relate only to one view, it can also happen that two different views, which are correct if considered independently, contain inconsistent information when combined. Such inconsistencies between different views usually indicate a defect in the model and can be critical if they propagate up to the executable system. In this paper, we present an approach to formally verify the reachability of a global state of a set of communicating UML state machines, i.e., we present a solution for an intradiagram consistency checking problem. We then extend this approach to solve an interdiagram consistency checking problem. In particular, we verify whether the message exchange modeled in a UML sequence diagram conforms to a set of communicating state machines. For solving both kinds of problems, we proceed as follows. As a first step, we formalize the semantics of UML state machines and of UML sequence diagrams. In the second step, we build upon this formal semantics and encode both verification tasks as decision problems of propositional logic (SAT) allowing the use of efficient SAT technology. We integrate both approaches in a graphical modeling environment, enabling modelers to use formal verification techniques without any special background knowledge. We experimentally evaluate the scalability of our approach.
Intra- and interdiagram consistency checking of behavioral multiview models
S147784241500055X
We propose a language-independent symbolic execution framework for languages endowed with a formal operational semantics based on term rewriting. Starting from a given definition of a language, a new language definition is generated, with the same syntax as the original one, but whose semantical rules are transformed in order to rewrite over logical formulas denoting possibly infinite sets of program states. Then, the symbolic execution of concrete programs is, by definition, the execution of the same programs with the symbolic semantics. We prove that the symbolic execution thus defined has the properties naturally expected from it (with respect to concrete program execution). A prototype implementation of our approach was developed in the K framework. We demonstrate the tool's genericity by instantiating it on several languages, and illustrate it on the reachability analysis and model checking of several programs.
Symbolic execution based on language transformation
S1477842415000561
Understanding the run-time behavior of software systems can be a challenging activity. Debuggers are an essential category of tools used for this purpose as they give developers direct access to the running systems. Nevertheless, traditional debuggers rely on generic mechanisms to introspect and interact with the running systems, while developers reason about and formulate domain-specific questions using concepts and abstractions from their application domains. This mismatch creates an abstraction gap between the debugging needs and the debugging support, leading to an inefficient and error-prone debugging effort, as developers need to recover concrete domain concepts using generic mechanisms. To reduce this gap, and increase the efficiency of the debugging process, we propose a framework for developing domain-specific debuggers, called the Moldable Debugger, that enables debugging at the level of the application domain. The Moldable Debugger is adapted to a domain by creating and combining domain-specific debugging operations with domain-specific debugging views, and adapts itself to a domain by selecting, at run time, appropriate debugging operations and views. To ensure the proposed model has practical applicability (i.e., can be used in practice to build real debuggers), we discuss, from both a performance and usability point of view, three implementation strategies. We further motivate the need for domain-specific debugging, identify a set of key requirements and show how our approach improves debugging by adapting the debugger to several domains.
Practical domain-specific debuggers using the Moldable Debugger framework
S1477842415000573
Language workbenches are environments for simplifying the creation and use of computer languages. The annual Language Workbench Challenge (LWC) was launched in 2011 to allow the many academic and industrial researchers in this area an opportunity to quantitatively and qualitatively compare their approaches. We first describe all four LWCs to date, before focussing on the approaches used, and results generated, during the third LWC. We give various empirical data for ten approaches from the third LWC. We present a generic feature model within which the approaches can be understood and contrasted. Finally, based on our experiences of the existing LWCs, we propose a number of benchmark problems for future LWCs.
Evaluating and comparing language workbenches
S1477842415000585
Reference attribute grammars (RAGs) provide a practical declarative means to implement programming language compilers and other tools. RAGs have previously been extended to support nonterminal attributes (also known as higher-order attributes), circular attributes, and context-dependent declarative rewrites of the abstract syntax tree. In this previous work, interdependencies between these extensions are not considered. In this article, we investigate how these extensions can interact, and still be well defined. We introduce a generalized evaluation algorithm that can handle grammars where circular attributes and rewrites are interdependent. To this end, we introduce circular nonterminal attributes, and show how RAG rewrites are equivalent to such attributes.
Declarative rewriting through circular nonterminal attributes
S1477842415000615
Different approaches to information system (IS) development are based on different data models. The selection of a data model for conceptual design, among other things, depends on the problem domain, the knowledge, and the personal preferences of an IS designer. In some situations, a simultaneous usage of different approaches to conceptual database design and IS development may lead to the most appropriate solutions. In our previous research we have developed a tool that provides an evolutive and incremental approach to IS development, which is based on the form type data model. The approaches based on the Extended Entity-Relationship (EER) and class data models are broadly accepted throughout the community of IS designers. In order to support the simultaneous usage of approaches based on the form type, EER and class data models, we have developed the Multi-Paradigm Information System Modeling Tool (MIST). In this paper, we present the part of our MIST tool that supports the EER approach to database design. MIST components currently provide a formal specification of an EER database schema and its transformation into the relational data model or the class model. Also, MIST allows generation of Structured Query Language code for database creation and procedural code for implementing database constraints. In addition, Java code that stores and processes data from the database may be generated from the class model. In this paper, we present an evaluation study of the MIST EER domain-specific language. Users' perceptions of language quality characteristics are used for the evaluation.
Concepts and evaluation of the extended entity-relationship approach to database design in a multi-paradigm information system modeling tool
S1477842415000627
Nowadays, concurrent programs are an inevitable part of many software applications. They can increase the computational performance of applications by parallelizing their computations. One approach to realizing concurrency is multi-threaded programming. However, such systems are structurally complex, both in the control of parallelism (such as thread synchronization and resource control) and in the interaction between their components. As a result, the design of these systems can be difficult and their implementation can be error-prone, especially when the addressed system is big and complex. A Domain-specific Modeling Language (DSML), one of the Model Driven Development (MDD) approaches, tackles this problem. Since DSMLs provide a higher abstraction level, they can reduce the complexity of concurrent programs. By raising the abstraction level and generating artifacts automatically, development performance (in both the design and implementation phases) is increased, and efficiency is improved by reducing the probability of errors. Thus, in this paper, a DSML for concurrent programs, called DSML4CP, is proposed to work at a higher level of abstraction than code. To this end, the concepts of concurrent programs and their relationships are presented in a metamodel. The proposed metamodel provides a context for defining the abstract syntax and concrete syntax of DSML4CP. This new language is supported by a graphical modeling tool which can visualize different instance models for domain problems. In order to clarify the expressions of the language, the static semantic controls are realized in the form of constraints. Finally, the architectural code generation is fulfilled via model transformation rules using templates of the concurrent programs. To increase the DSML's leverage and to demonstrate its general support for concurrent programming, the transformation mechanism of the tool supports two well-known and widely used programming languages for code generation: Java and C#. The experiments performed on two case studies indicate a high performance for the proposed language. In this regard, the results show automatic generation of 79% of the final code and 86% of the functions/modules on average.
DSML4CP: A Domain-specific Modeling Language for Concurrent Programming
S1477842415000731
Duplicated code detection has been an active research field for several decades. Although many algorithms have been proposed, only a few studies have focused on the comprehensive presentation of the detected clones. During the evaluation of clone detectors developed by the authors, it was observed that the results of the clone detectors were hard to comprehend. Therefore, this paper presents a broadly applicable grouping method with which clone pairs can easily be grouped together to provide a more compact result. The grouping algorithm is examined, and a more precise revised algorithm is proposed to present all of the candidates to the user.
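A common way to turn a flat list of clone pairs into a more compact presentation is to collapse them into clone classes with a union-find structure; the following Python sketch (invented names, not the authors' two-dimensional maximisation algorithm) shows that grouping step.

    def group_clone_pairs(pairs):
        """Merge clone pairs (a, b) into clone classes via union-find."""
        parent = {}

        def find(x):
            parent.setdefault(x, x)
            while parent[x] != x:
                parent[x] = parent[parent[x]]   # path halving keeps trees shallow
                x = parent[x]
            return x

        def union(a, b):
            ra, rb = find(a), find(b)
            if ra != rb:
                parent[ra] = rb

        for a, b in pairs:
            union(a, b)

        classes = {}
        for fragment in parent:
            classes.setdefault(find(fragment), set()).add(fragment)
        return list(classes.values())

    pairs = [("f1", "f2"), ("f2", "f3"), ("g1", "g2")]
    print(group_clone_pairs(pairs))   # e.g. [{'f1', 'f2', 'f3'}, {'g1', 'g2'}]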
Supporting comprehensible presentation of clone candidates through two-dimensional maximisation
S1477842415000743
We present new static analysis methods for proving liveness properties of programs. In particular, with reference to the hierarchy of temporal properties proposed by Manna and Pnueli, we focus on guarantee (i.e., “something good occurs at least once”) and recurrence (i.e., “something good occurs infinitely often”) temporal properties. We generalize the abstract interpretation framework for termination presented by Cousot and Cousot. Specifically, static analyses of guarantee and recurrence temporal properties are systematically derived by abstraction of the program operational trace semantics. These methods automatically infer sufficient preconditions for the temporal properties by reusing existing numerical abstract domains based on piecewise-defined ranking functions. We augment these abstract domains with new abstract operators, including a dual widening. To illustrate the potential of the proposed methods, we have implemented a research prototype static analyzer, for programs written in a C-like syntax, that yielded interesting preliminary results.
Inference of ranking functions for proving temporal properties by abstract interpretation
S1477842415000822
In this paper we apply tree-automata techniques to refinement of abstract interpretation in Horn clause verification. We go beyond previous work on refining trace abstractions; firstly we handle tree automata rather than string automata and thereby can capture traces in any Horn clause derivations rather than just transition systems; secondly, we show how algorithms manipulating tree automata interact with abstract interpretations, establishing progress in refinement and generating refined clauses that eliminate causes of imprecision. We show how to derive a refined set of Horn clauses in which given infeasible traces have been eliminated, using a recent optimised algorithm for tree automata determinisation. We also show how we can introduce disjunctive abstractions selectively by splitting states in the tree automaton. The approach is independent of the abstract domain and constraint theory underlying the Horn clauses. Experiments using linear constraint problems and the abstract domain of convex polyhedra show that the refinement technique is practical and that iteration of abstract interpretation with tree automata-based refinement solves many challenging Horn clause verification problems. We compare the results with other state-of-the-art Horn clause verification tools.
Horn clause verification with convex polyhedral abstraction and tree automata-based refinement
S1477842415000913
Implementation of a design pattern can take many forms according to the programming language being used. Most of the literature presents design patterns in their conventional object-oriented implementations. Several other studies show the implementation in aspect-oriented languages such as AspectJ, EOS, and Caesar. In this work, we compare the implementations of the singleton, observer, and decorator design patterns in these languages and also discuss the possibility of implementing them in ParaAJ: an extension to the AspectJ language that implements the idea of parametric aspects. We found that ParaAJ helps in making the implementation of the singleton and observer patterns reusable, but it fails to help in the decorator case. The problem with the decorator pattern exists because of the current mechanism for translating ParaAJ's aspects to normal AspectJ aspects. This opens the door for further work in ParaAJ to better support the idea of parametric aspects.
Implementing design patterns as parametric aspects using ParaAJ: The case of the singleton, observer, and decorator design patterns
S1477842415000925
This paper studies sound proof rules for checking positive invariance of algebraic and semi-algebraic sets, that is, sets satisfying polynomial equalities and those satisfying finite boolean combinations of polynomial equalities and inequalities, under the flow of polynomial ordinary differential equations. Problems of this nature arise in formal verification of continuous and hybrid dynamical systems, where there is an increasing need for methods to expedite formal proofs. We study the trade-off between proof rule generality and practical performance and evaluate our theoretical observations on a set of benchmarks. The relationship between increased deductive power and running time performance of the proof rules is far from obvious; we discuss and illustrate certain classes of problems where this relationship is interesting.
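One representative member of such a hierarchy of sound rules, stated here only as an illustration (it is a standard rule in this line of work and is not claimed to be the paper's strongest rule), concerns an algebraic set described by a single polynomial equation p = 0 under the polynomial vector field x' = f(x): it suffices to exhibit a polynomial cofactor q with

    \mathcal{L}_f\, p \;=\; \nabla p \cdot f \;=\; q \, p,

because then p(x(t)) satisfies the linear differential equation (d/dt) p = q p along every trajectory and hence remains zero whenever it is zero initially. The weakest rule of this kind requires \mathcal{L}_f p = 0 outright, while stronger rules replace the single cofactor by ideal membership or higher-order Lie derivatives, trading extra deductive power for more expensive checks, which is exactly the trade-off examined above.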
A hierarchy of proof rules for checking positive invariance of algebraic and semi-algebraic sets
S1477842415000937
Among the various critical systems that are worth analyzing formally, a wide set consists of controllers for dynamical systems. Those programs typically execute an infinite loop in which simple computations update internal states and produce commands to update the system state. Such systems are still hard to analyze with available static analysis methods since, even though they perform mainly linear computations, computing a safe set of reachable states often requires quadratic invariants. In this paper we consider the general setting of a piecewise affine program, that is, a program performing different affine updates on the system depending on some conditions. This typically encompasses linear controllers with saturations or controllers with different behaviors and performances activated on some safety conditions. Our analysis is inspired by work performed a decade ago by Johansson et al. and Morari et al. in the control community. We adapted their method, focused on the analysis of stability in continuous-time or discrete-time settings, to fit the static analysis paradigm and the computation of invariants, that is, over-approximations of reachable sets using piecewise quadratic Lyapunov functions. This approach has been further extended to consider k-inductive properties of reachable traces (trajectories) of systems. The analysis has been implemented in Matlab and has shown very good experimental results on a very large set of synthesized problems.
Automatic synthesis of k-inductive piecewise quadratic invariants for switched affine control programs
S1477842415000949
Event loops are a principal control architecture for implementing actors. In this paper we first analyse the impact that this choice has on the design of actor-based concurrent programs. Then, we discuss control loops as the main architecture adopted to implement agents, and we frame them as an extension of event loops that is effective for improving the programming of autonomous components that need to integrate both reactive and proactive behaviors in a modular way.
Programming with event loops and control loops – From actors to agents
S1477842415300038
Recent studies have reported that Android programs are vulnerable to unexpected exceptions. One reason is that the current design of the Android platform solely depends on the Java exception mechanism, which is unaware of the component-based structure of Android programs. This paper proposes a component-level exception mechanism for programmers to build robust Android programs with. With the mechanism, they can define an intra-component handler for each component to recover from exceptions, and they can propagate uncaught exceptions to the caller component along the reverse of the component activation flow. On the theoretical side, we have formalized an Android semantics with exceptions to prove the robustness property of the mechanism. In practice, we have implemented the mechanism as a domain-specific library that extends existing Android components. This lightweight approach does not demand changes to the Android platform. In our experiment with Android benchmark programs, the library is found to catch a number of runtime exceptions that would otherwise terminate the programs abnormally. We also measure the overhead of using the library and show that it is very small. Our proposal is a new mechanism for defending Android programs against unexpected exceptions.
A lightweight approach to component-level exception mechanism for robust android apps
S147784241530004X
Most of today's embedded systems are very complex. These systems, controlled by computer programs, continuously interact with their physical environments through networks of sensory input and output devices. Consequently, the operations of such embedded systems are highly reactive and concurrent. Since embedded systems are deployed in many safety-critical applications, where failures can lead to catastrophic events, an approach that combines mathematical logic and formal verification is employed in order to ensure correct behavior of the control algorithm. This paper presents the What You Prove Is What You Execute (WYPIWYE) compilation strategy for a Globally Asynchronous Locally Synchronous (GALS) programming language called Safety-Critical SystemJ. SC-SystemJ is a safety-critical subset of the SystemJ language. A formal big-step transition semantics of SC-SystemJ is developed for compiling SC-SystemJ programs into propositional Linear Temporal Logic formulas. These LTL formulas are then converted into a network of Mealy automata using a novel and efficient compilation algorithm. The resulting Mealy automata have a straightforward syntactic translation into Promela code. The resulting Promela models can be used for verifying correctness properties via the SPIN model checker. Finally, there is a single translation procedure to compile both Promela and C/Java code for execution; this procedure satisfies the de Bruijn criterion, i.e. the final translation step is simple enough that it can be manually verified.
Compiling and verifying SC-SystemJ programs for safety-critical reactive systems
S1477842415300075
Memory fragmentation is a serious obstacle preventing efficient memory usage. Garbage collectors may solve the problem; however, they cause a serious performance impact and additional memory and energy consumption. Therefore, various memory allocators have been developed. Software developers must test memory allocators and find an efficient one for their programs. Instead of this cumbersome method, we propose a novel approach for dynamically deciding the best memory allocator for every application. The proposed solution tests each process with various memory allocators. After the testing, it selects an efficient memory allocator according to the condition of the operating system (OS). If the OS runs out of memory, then it selects the most memory-efficient allocator for new processes. If most of the CPU power is occupied, then it selects the fastest allocator. Otherwise, the balanced allocator is selected. According to the test results, the proposed solution offers up to 58% less fragmented memory and 90% faster memory operations. On average, over 107 processes, it offers 7.16±2.53% less fragmented memory and 1.79±7.32% faster memory operations. The test results also show that the proposed approach cannot be outperformed by any single memory allocator. In conclusion, the proposed method is a dynamic and efficient solution to the memory fragmentation problem.
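The selection policy itself is easy to state in code. The Python sketch below shows only the decision logic, with invented threshold values and with the current memory and CPU utilisation passed in as plain numbers; the per-process testing and measurement machinery of the actual system is not reproduced here.

    ALLOCATORS = {
        "memory_efficient": "lowest-fragmentation allocator",
        "fastest": "fastest alloc/free allocator",
        "balanced": "default, balanced allocator",
    }

    def select_allocator(mem_used_pct, cpu_used_pct,
                         mem_threshold=90.0, cpu_threshold=90.0):
        """Pick an allocator for the next process from current OS conditions.

        Thresholds are invented for this sketch; the real system derives its
        choice from per-process measurements taken with each allocator."""
        if mem_used_pct >= mem_threshold:
            return "memory_efficient"     # OS is running out of memory
        if cpu_used_pct >= cpu_threshold:
            return "fastest"              # CPU is the bottleneck
        return "balanced"

    assert select_allocator(95.0, 20.0) == "memory_efficient"
    assert select_allocator(40.0, 97.0) == "fastest"
    assert select_allocator(40.0, 20.0) == "balanced"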
The intelligent memory allocator selector
S1477842415300208
Recent advances in tooling and modern programming languages have progressively brought back the practice of developing domain-specific languages (DSLs) as a means to improve software development. Consequently, making composition between languages easier by emphasizing code reuse and componentized programming is a topic of increasing interest in research. In fact, it is not uncommon for different languages to share common features, and, because different DSLs may coexist in the same project to model concepts from different problem areas, it is interesting to study ways to develop modular, extensible languages. Earlier work has shown that traits can be used to modularize the semantics of a language implementation; much attention has been devoted to embedded DSLs, and even when external DSLs are discussed, the main focus is on modularizing the semantics. In this paper we show a complete trait-based approach to modularize not only the semantics but also the syntax of external DSLs, thereby simplifying extension, and therefore evolution, of a language implementation. We show the benefits of implementing these techniques using the Scala programming language.
Language components for modular DSLs using traits
S147784241530021X
This paper presents the design, implementation, and applications of a software testing tool, TAO, which allows users to specify and generate test cases and oracles in a declarative way. Extending its predecessor, a grammar-based test generation tool, TAO provides a declarative notation for defining denotational semantics on each productive grammar rule, so that when a test case is generated, its expected semantics is evaluated automatically as well, serving as its test oracle. TAO further provides a simple tagging mechanism to embed oracles into test cases, bridging the automation between test case generation and software testing. Two practical case studies illustrate how automated oracle generation can be effectively integrated with grammar-based test generation in different testing scenarios: locating fault-inducing input patterns in Java applications, and Selenium-based automated web testing.
A semantic approach for automated test oracle generation
S1477842416000026
The C programming language is known for being efficient and for compiling on almost any architecture and operating system. However, the absence of dynamic safety checks and a relatively weak type system allow programmer oversights that are hard to spot. In this paper, we present RTC, a runtime monitoring tool that instruments unsafe code and monitors the program execution. RTC is built on top of the ROSE compiler infrastructure. RTC finds memory bugs, arithmetic overflows and underflows, and run-time type violations. Most of the instrumentations are added directly to the source file and only require a minimal runtime system. As a result, the instrumented code remains portable. In tests against known error detection benchmarks, RTC found 98% of all memory-related bugs and had zero false positives. In performance tests conducted with well-known algorithms, such as binary search and MD5, we determined that our tool has an average run-time overhead of 9.7× and a memory overhead of 3.5×.
Lightweight runtime checking of C programs with RTC
S1477842416000038
The actor model of computation has gained significant popularity over the last decade. Its high level of abstraction makes it appealing for concurrent applications in parallel and distributed systems. However, designing a real-world actor framework that subsumes full scalability, strong reliability, and high resource efficiency requires many conceptual and algorithmic additions to the original model. In this paper, we report on designing and building CAF, the C++ Actor Framework. CAF aims to provide a concurrent and distributed native environment for scaling up to very large, high-performance applications, and equally well down to small constrained systems. We present the key specifications and design concepts (in particular a message-transparent architecture, type-safe message interfaces, and pattern matching facilities) that make native actors a viable approach for many robust, elastic, and highly distributed developments. We demonstrate the feasibility of CAF in three scenarios: first for elastic, upscaling environments, second for including heterogeneous hardware like GPUs, and third for distributed runtime systems. Extensive performance evaluations indicate ideal runtime behavior and a very low memory footprint for up to 64 CPU cores, or when offloading work to a GPU. In these tests, CAF consistently outperforms the competing actor environments Erlang, Charm++, SalsaLite, Scala, ActorFoundry, and even the raw message passing framework OpenMPI.
Revisiting actor programming in C++
S147784241600004X
The actor model is a message-passing concurrency model that avoids deadlocks and low-level data races by construction. This facilitates concurrent programming, especially in the context of complex interactive applications where modularity, security and fault-tolerance are required. The tradeoff is that the actor model sacrifices expressiveness and safety guarantees with respect to parallel access to shared state. In this paper we present domains as a set of novel language abstractions for safely encapsulating and sharing state within the actor model. We introduce four types of domains, namely immutable, isolated, observable and shared domains, each of which is tailored to a certain access pattern on that shared state. The domains are characterized with an operational semantics. For each we discuss how the actor model's safety guarantees are upheld even in the presence of conceptually shared state. Furthermore, the proposed language abstractions are evaluated with a case study in Scala, comparing them to other synchronization mechanisms to demonstrate their benefits in deadlock freedom, parallel reads, and enforced isolation.
Domains: Sharing state in the communicating event-loop actor model
S1477842416000051
The actor-based language, Timed Rebeca, was introduced to model distributed and asynchronous systems with timing constraints and message passing communication. A toolset was developed for automated translation of Timed Rebeca models to Erlang. The translated code can be executed using a timed extension of McErlang for model checking and simulation. In this work, we added a new toolset that provides statistical model checking of Timed Rebeca models. Using statistical model checking, we are now able to verify larger models against safety properties compared to McErlang model checking. We examine the typical case studies of elevators and ticket service to show the efficiency of statistical model checking and applicability of our toolset.
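The following Python sketch shows the generic Monte Carlo estimation that underlies statistical model checking: simulate many independent runs and estimate, with a confidence interval, the probability that a bounded safety property holds. The simulator and the property are toy placeholders, not the Timed Rebeca toolset.

```python
# Generic Monte Carlo estimation behind statistical model checking:
# run N independent simulations of the model and estimate the probability
# that a (bounded) safety property holds, with a normal-approximation
# confidence interval.  The simulator and property are placeholders.
import math
import random

def simulate_trace(seed):
    """Placeholder for one timed simulation run of the model."""
    random.seed(seed)
    return [random.random() for _ in range(100)]   # dummy trace

def safety_holds(trace):
    """Placeholder property: no observation ever exceeds 0.999."""
    return all(x < 0.999 for x in trace)

def estimate(n_runs=1000, z=1.96):
    successes = sum(safety_holds(simulate_trace(i)) for i in range(n_runs))
    p = successes / n_runs
    half_width = z * math.sqrt(p * (1 - p) / n_runs)
    return p, (p - half_width, p + half_width)

p_hat, ci = estimate()
print(f"P(safety) ~ {p_hat:.3f}, 95% CI {ci}")
```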
Statistical model checking of Timed Rebeca models
S1477842416000063
Conventional array partitioning analyses split arrays into contiguous partitions to infer properties of sets of cells. Such analyses cannot group together non-contiguous cells, even when they have similar properties. In this paper, we propose an abstract domain which utilizes semantic properties to split array cells into groups. Cells with similar properties will be packed into groups and abstracted together. Additionally, groups are not necessarily contiguous. This abstract domain allows us to infer complex array invariants in a fully automatic way. Experiments on examples from the Minix 1.1 memory management and a tiny industrial operating system demonstrate the effectiveness of the analysis.
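A toy Python sketch of the grouping idea follows: cells are packed into groups by a semantic predicate rather than by contiguous index ranges, and each group is abstracted by a simple interval summary. The predicates and the example array are invented; the actual abstract domain is considerably richer.

```python
# Toy illustration of non-contiguous grouping: cells with similar semantic
# properties are packed into one group and summarized together.
def group_cells(array, predicates):
    groups = {name: [] for name in predicates}
    for i, v in enumerate(array):
        for name, pred in predicates.items():
            if pred(i, v):
                groups[name].append(i)
                break
    return groups

def summarize(array, groups):
    """Abstract each (possibly non-contiguous) group by an interval."""
    return {name: (min(array[i] for i in idxs), max(array[i] for i in idxs))
            for name, idxs in groups.items() if idxs}

mem_map = [0, 7, 0, 3, 0, 9]          # e.g. 0 marks a free slot
preds = {"free": lambda i, v: v == 0,
         "used": lambda i, v: v != 0}
g = group_cells(mem_map, preds)
print(g)                               # {'free': [0, 2, 4], 'used': [1, 3, 5]}
print(summarize(mem_map, g))           # {'free': (0, 0), 'used': (3, 9)}
```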
An array content static analysis based on non-contiguous partitions
S1477842416300318
This paper describes a new modelling language, Ann, for the effective design and validation of Java annotations. Since their inclusion in the 5th edition of Java, annotations have grown from a useful tool for the addition of meta-data to playing a central role in many popular software projects. Usually they are not conceived in isolation, but in groups, with dependency and integrity constraints between them. However, the native support provided by Java for expressing this design is very limited. To overcome its deficiencies and make explicit the rich conceptual model that lies behind a set of annotations, we propose a domain-specific modelling language. The proposal has been implemented as an Eclipse plug-in, including an editor and an integrated code generator that synthesises annotation processors. The environment also integrates a model finder, able to detect unsatisfiable constraints between different annotations and to provide examples of correct annotation usages for validation. The language has been tested on a real set of annotations from the Java Persistence API (JPA). Within this subset we found rich semantics expressible with Ann but currently not expressible in the Java language, which shows the benefits of Ann in a relevant field of application.
Ann: A domain-specific language for the effective design and validation of Java annotations
S1524070313000118
We present a skeleton-based algorithm for intrinsic symmetry detection on imperfect 3D point cloud data. The data imperfections such as noise and incompleteness make it difficult to reliably compute geodesic distances, which play essential roles in existing intrinsic symmetry detection algorithms. In this paper, we leverage recent advances in curve skeleton extraction from point clouds for symmetry detection. Our method exploits the properties of curve skeletons, such as homotopy to the input shape, approximate isometry-invariance, and skeleton-to-surface mapping, for the detection task. Starting from a curve skeleton extracted from an input point cloud, we first compute symmetry electors, each of which is composed of a set of skeleton node pairs pruned with a cascade of symmetry filters. The electors are used to vote for symmetric node pairs indicating the symmetry map on the skeleton. A symmetry correspondence matrix (SCM) is constructed for the input point cloud through transferring the symmetry map from skeleton to point cloud. The final symmetry regions on the point cloud are detected via spectral analysis over the SCM. Experiments on raw point clouds, captured by a 3D scanner or the Microsoft Kinect, demonstrate the robustness of our algorithm. We also apply our method to repair incomplete scans based on the detected intrinsic symmetries.
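The voting and spectral steps can be pictured with the following heavily simplified Python sketch, in which symmetric skeleton node pairs vote into a point-level symmetry correspondence matrix whose leading eigenvector is then thresholded. The symmetry filters and the skeleton-to-surface mapping are reduced to toy placeholders and do not reflect the paper's full pipeline.

```python
# Simplified sketch: votes from skeleton node pairs fill a symmetry
# correspondence matrix (SCM) over points; spectral analysis extracts a
# symmetric region.  Filters and mappings are toy placeholders.
import numpy as np

def build_scm(n_points, votes, skel_to_points):
    """Accumulate votes from skeleton node pairs into a point-level SCM."""
    scm = np.zeros((n_points, n_points))
    for (a, b), weight in votes:
        for p in skel_to_points[a]:
            for q in skel_to_points[b]:
                scm[p, q] += weight
                scm[q, p] += weight
    return scm

def symmetric_region(scm, threshold=0.5):
    """Spectral analysis: points with large leading-eigenvector entries."""
    vals, vecs = np.linalg.eigh(scm)
    lead = np.abs(vecs[:, np.argmax(vals)])
    return np.nonzero(lead > threshold * lead.max())[0]

# Toy example: skeleton nodes 0 and 1 vote as a symmetric pair.
skel_to_points = {0: [0, 1], 1: [2, 3]}     # skeleton-to-surface mapping
scm = build_scm(4, votes=[((0, 1), 1.0)], skel_to_points=skel_to_points)
print(symmetric_region(scm))
```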
Skeleton-based intrinsic symmetry detection on point clouds
S1524070313000179
We propose a novel compact surface representation, namely geometry curves, which records the essence of shape geometry and topology. The geometry curves mainly contain two parts: the interior and boundary lines. The interior lines, which correspond to the feature lines, record the geometry information of the 3D shapes; the boundary lines, which correspond to the boundary or fundamental polygons, record the topology information of the 3D shapes. As a vector representation, geometry curves can depict highly complex geometry details. The concept of geometry curves can be utilized in many potential applications, e.g., mesh compression, shape modeling and editing, animation, and level of detail. Furthermore, we develop a procedure for automatically constructing geometry curves which obtains an excellent approximation to the original mesh.
Geometry curves: A compact representation for 3D shapes
S1524070313000180
Reliable estimation of visual saliency is helpful for guiding many computer graphics tasks, including shape matching, simplification, and segmentation. Inspired by basic principles induced by psychophysics studies, we propose a novel approach for computing saliency of a 3D mesh surface that considers both local contrast and global rarity. First, a multi-scale local shape descriptor is introduced to capture local geometric features over regions of various sizes; the descriptor is rotationally invariant. Then, we present an efficient patch-based local contrast method based on the multi-scale local descriptor. The global rarity of a vertex is defined by its distinctness from all other vertices. To be more efficient, we compute it on clusters first and interpolate on vertices later. Finally, our mesh saliency is obtained by a linear combination of the local contrast and the global rarity. Our method is efficient, robust, and yields mesh saliency that agrees with human perception. The algorithm was tested on many models and outperformed previous works. We also demonstrate the benefits of our algorithm in several geometry processing applications.
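A minimal Python sketch of the final combination step is given below: per-vertex local contrast from patch neighbors and cluster-based global rarity are normalized and mixed linearly. The descriptors, neighborhoods, clustering and mixing weight are placeholders, not the paper's multi-scale construction.

```python
# Sketch of combining patch-based local contrast with cluster-based global
# rarity into a per-vertex saliency score.  All inputs are toy stand-ins.
import numpy as np

def local_contrast(desc, neighbors):
    """Mean descriptor distance of each vertex to its patch neighbors."""
    return np.array([
        np.mean(np.linalg.norm(desc[nbrs] - desc[i], axis=1))
        for i, nbrs in enumerate(neighbors)
    ])

def global_rarity(desc, n_clusters=4, seed=0):
    """Rarity computed on cluster centers, then copied back to vertices."""
    rng = np.random.default_rng(seed)
    centers = desc[rng.choice(len(desc), n_clusters, replace=False)]
    label = np.linalg.norm(desc[:, None] - centers[None], axis=2).argmin(axis=1)
    center_rarity = np.array([
        np.mean(np.linalg.norm(centers - c, axis=1)) for c in centers
    ])
    return center_rarity[label]

def saliency(desc, neighbors, alpha=0.5):
    lc, gr = local_contrast(desc, neighbors), global_rarity(desc)
    norm = lambda x: (x - x.min()) / (np.ptp(x) + 1e-12)
    return alpha * norm(lc) + (1 - alpha) * norm(gr)

desc = np.random.rand(6, 8)                     # toy per-vertex descriptors
nbrs = [[(i + 1) % 6, (i - 1) % 6] for i in range(6)]
print(saliency(desc, nbrs))
```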
Mesh saliency with global rarity
S1524070313000192
We propose a robust thickness estimation approach for 3D objects based on the Shape Diameter Function (SDF). Our method first applies a modified strategy to estimate the local diameter with increased accuracy. We then compute a scale-dependent robust thickness estimate from a point cloud, constructed using this local diameter estimation and a variant of a robust distance function. The robustness of our method is benchmarked against several operations such as remeshing, geometric noise and artifacts common in triangle soups. The experimental results show a more stable local thickness estimation than the original SDF, and consistent segmentation results on defect-laden inputs.
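The following Python sketch illustrates an SDF-style robust thickness estimate: rays are cast in a cone around the inward normal and a median-based, outlier-trimmed statistic replaces the plain mean. The ray/mesh intersection routine is a placeholder that the reader must supply; this is not the authors' implementation.

```python
# Sketch of SDF-style thickness: cast rays in a cone around the inward
# normal and take a robust (median/MAD-trimmed) estimate of hit distances.
import numpy as np

def cone_directions(inward_normal, n_rays=30, half_angle_deg=60.0, seed=0):
    """Random unit directions within a cone around the inward normal."""
    rng = np.random.default_rng(seed)
    n = inward_normal / np.linalg.norm(inward_normal)
    dirs = []
    while len(dirs) < n_rays:
        d = rng.normal(size=3)
        d /= np.linalg.norm(d)
        if np.dot(d, n) > np.cos(np.radians(half_angle_deg)):
            dirs.append(d)
    return np.array(dirs)

def robust_thickness(origin, inward_normal, intersect, trim=1.5):
    """Median-based diameter: drop hits far from the median, then average."""
    dists = []
    for d in cone_directions(inward_normal):
        hit = intersect(origin, d)      # placeholder: distance or None
        if hit is not None:
            dists.append(hit)
    dists = np.array(dists)
    med = np.median(dists)
    mad = np.median(np.abs(dists - med)) + 1e-12
    keep = np.abs(dists - med) < trim * mad
    return float(np.mean(dists[keep]))

# Toy "mesh": a slab of thickness 2 with a few spurious far hits.
fake_hits = iter([2.0, 2.1, 1.9, 2.0, 7.5, 2.05] * 10)
print(robust_thickness(np.zeros(3), np.array([0.0, 0.0, -1.0]),
                       lambda o, d: next(fake_hits)))
```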
Robust diameter-based thickness estimation of 3D objects
S1524070313000222
Sunken relief is an art form made by cutting the relief sculpture itself into a flat surface with a shallow overall depth. This paper focuses on the problem of direct generation of line-based sunken relief from a 3D mesh. We show how to extract, post-process and organize the messy feature lines in regular forms, applicable for lines engraving on the sculpture surfaces. We further describe how to construct a smooth height field from the input object, and derive a continuous pitting corrosion method to generate the cutting paths. The whole framework is conducted in object-space, making it flexible for stroke stylization and depth control of the engraving lines. We demonstrate the results with several impressive renderings and photographs used to illustrate the paper itself.
Line-based sunken relief generation from a 3D mesh
S1524070313000234
We present a video-based approach to learn the specific driving characteristics of drivers in the video for advanced traffic control. Each vehicle’s specific driving characteristics are calculated with an offline learning process. Given each vehicle’s initial status and the personalized parameters as input, our approach can vividly reproduce the traffic flow in the sample video with a high accuracy. The learned characteristics can also be applied to any agent-based traffic simulation systems. We then introduce a new traffic animation method that attempts to animate each vehicle with its real driving habits and show its adaptation to the surrounding traffic situation. Our results are compared to existing traffic animation methods to demonstrate the effectiveness of our presented approach.
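The offline learning step can be pictured with the sketch below, which fits per-driver parameters of a car-following model to an observed velocity trajectory. The Intelligent Driver Model (IDM) and the synthetic data are stand-ins chosen for illustration; the abstract does not name the authors' model.

```python
# Offline learning sketch: fit per-driver parameters of a car-following model
# (IDM used here only as a stand-in) to a trajectory extracted from video.
import numpy as np
from scipy.optimize import minimize

def idm_accel(v, gap, dv, params):
    v0, T, a, b, s0 = np.maximum(params, 1e-3)   # keep parameters positive
    s_star = s0 + v * T + v * dv / (2.0 * np.sqrt(a * b))
    return a * (1.0 - (v / v0) ** 4 - (s_star / max(gap, 0.1)) ** 2)

def simulate(params, v_init, gap_init, lead_v, dt=0.1):
    v, gap, traj = v_init, gap_init, []
    for vl in lead_v:
        acc = idm_accel(v, gap, v - vl, params)
        v = max(v + acc * dt, 0.0)
        gap += (vl - v) * dt
        traj.append(v)
    return np.array(traj)

def fit_driver(observed_v, v_init, gap_init, lead_v):
    loss = lambda p: np.mean((simulate(p, v_init, gap_init, lead_v)
                              - observed_v) ** 2)
    x0 = np.array([30.0, 1.5, 1.0, 2.0, 2.0])     # v0, T, a, b, s0
    return minimize(loss, x0, method="Nelder-Mead").x

lead_v = np.full(200, 25.0)                       # synthetic leader at 25 m/s
observed = simulate([28.0, 1.2, 0.8, 1.5, 2.5], 20.0, 30.0, lead_v)
print(fit_driver(observed, 20.0, 30.0, lead_v))   # recovered characteristics
```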
Video-based personalized traffic learning
S1524070313000258
We introduce a method for surface reconstruction from point sets that is able to cope with noise and outliers. First, a splat-based representation is computed from the point set. A robust local 3D RANSAC-based procedure is used to filter the point set for outliers, then a local jet surface – a low-degree surface approximation – is fitted to the inliers. Second, we extract the reconstructed surface in the form of a surface triangle mesh through Delaunay refinement. The Delaunay refinement meshing approach requires computing intersections between line segment queries and the surface to be meshed. In the present case, intersection queries are solved from the set of splats through a 1D RANSAC procedure.
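The local outlier filtering can be pictured with the following Python sketch of a RANSAC plane fit over a neighborhood: random triples propose planes and the largest consensus set is kept, after which a jet surface would be fitted to the inliers. Thresholds and iteration counts are illustrative assumptions.

```python
# Sketch of local RANSAC-based outlier filtering for a point neighborhood.
import numpy as np

def ransac_plane(points, n_iter=200, tol=0.05, seed=0):
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iter):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-12:                      # degenerate triple
            continue
        normal /= norm
        dist = np.abs((points - p0) @ normal)
        inliers = dist < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers

# Toy neighborhood: a noisy planar patch plus two gross outliers.
rng = np.random.default_rng(1)
patch = np.c_[rng.uniform(-1, 1, (50, 2)), rng.normal(0, 0.01, 50)]
cloud = np.vstack([patch, [[0, 0, 3.0], [0.5, 0.5, -2.0]]])
print(ransac_plane(cloud).sum(), "inliers out of", len(cloud))
```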
Splat-based surface reconstruction from defect-laden point sets
S1524070313000271
Simulating realistic crowd behaviors is a challenging problem in computer graphics. Yet, several satisfying simulation models exhibiting natural pedestrian or emergent group behaviors exist. Choosing among these models generally depends on the considered crowd density or the topology of the environment. Conversely, achieving a user-desired kinematic or dynamic pattern at a given instant of the simulation proves to be much more tedious. In this paper, a novel generic control methodology is proposed to solve this crowd editing issue. Our method relies on an adjoint formulation of the underlying optimization procedure. It is, to a certain extent, independent of the choice of the simulation model, and is designed to handle several forms of constraints. A variety of examples attesting to the benefits of our approach are presented, along with quantitative performance measures.
Optimal crowd editing
S1524070313000593
With the rapid growth of available 3D models, fast retrieval of suitable 3D models has become a crucial task for industrial applications. This paper proposes a novel sketch-based 3D model retrieval approach which utilizes both global feature-based and local feature-based techniques. Unlike current approaches, which use either global or local features and do not take into account semantic relations between local features, we extract these two kinds of feature information from the representative 2D views of 3D models, which facilitates semantic description and retrieval of 3D models. Global features represent the gross exterior boundary shape information, and local features describe the interior details by compact visual words. Specifically, an improved bag-of-features method is provided to extract local features and their latent semantic relations. In addition, an efficient two-stage matching strategy is used to measure the distance between the query sketch and 3D models for selection and refinement. Experimental results demonstrate that our approach, which combines these two kinds of complementary features, significantly outperforms several state-of-the-art approaches.
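The two-stage matching strategy can be pictured with the Python sketch below: a cheap global-feature distance first selects a candidate subset, which a local bag-of-features histogram distance then re-ranks. The feature vectors and weights are random stand-ins, not the paper's descriptors.

```python
# Sketch of two-stage matching: global features for selection, local
# visual-word histograms for refinement.  All data are synthetic.
import numpy as np

def two_stage_retrieve(query_g, query_h, models_g, models_h,
                       k_candidates=10, k_results=3, alpha=0.4):
    # Stage 1: selection by global (exterior boundary) feature distance.
    d_global = np.linalg.norm(models_g - query_g, axis=1)
    candidates = np.argsort(d_global)[:k_candidates]
    # Stage 2: refinement with local bag-of-features histograms.
    d_local = np.linalg.norm(models_h[candidates] - query_h, axis=1)
    combined = alpha * d_global[candidates] + (1 - alpha) * d_local
    return candidates[np.argsort(combined)[:k_results]]

rng = np.random.default_rng(0)
models_g = rng.random((100, 32))      # global descriptors of 100 3D models
models_h = rng.random((100, 200))     # local visual-word histograms
print(two_stage_retrieve(models_g[7], models_h[7], models_g, models_h))
```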
A new sketch-based 3D model retrieval approach by using global and local features
S152407031300060X
Intrinsic shape matching has become the standard approach for pose invariant correspondence estimation among deformable shapes. Most existing approaches assume global consistency. While global isometric matching is well understood, only a few heuristic solutions are known for partial matching. Partial matching is particularly important for robustness to topological noise, which is a common problem in real-world scanner data. We introduce a new approach to partial isometric matching based on the observation that isometries are fully determined by local information: a map of a single point and its tangent space fixes an isometry. We develop a new representation for partial isometric maps based on equivalence classes of correspondences between pairs of points and their tangent-spaces. We apply our approach to register partial point clouds and compare it to the state-of-the-art methods, where we obtain significant improvements over global methods for real-world data and stronger guarantees than previous partial matching algorithms.
A low-dimensional representation for robust partial isometric correspondences computation
S1524070313000623
This paper presents a novel approach based on the shape space concept to classify deformations of 3D models. A new quasi-conformal metric is introduced which measures the curvature changes at each vertex of each pose during the deformation. Shapes with similar deformation patterns follow similar deformation curves in shape space. The energy functional of the deformation curve is minimized to calculate the geodesic curve connecting two shapes on the shape space manifold. The geodesic distance illustrates the similarity between two shapes, which is used to compute the similarity between the deformations. We applied our method to classify the left ventricle deformations of myopathic and control subjects; the sensitivity and specificity of our method were 88.8% and 85.7%, respectively, which are higher than those of other methods based on the left ventricle cavity. This shows that our method can quantify well the similarity and disparity of left ventricle motion.
Deformation similarity measurement in quasi-conformal shape space
S1524070314000046
This paper presents a new approach to simplify 3D binary images and general orthogonal pseudo-polyhedra (OPP). The method is incremental and produces a level-of-detail sequence of OPP, where any object of this sequence bounds the previous objects and, therefore, is a bounding orthogonal approximation of them. The sequence finishes with the axis-aligned bounding box. OPP are encoded using the Extreme Vertices Model, a complete model that stores a subset of their vertices and performs fast Boolean operations. Simplification is achieved by using a new strategy, which relies on the application of 2D Boolean operations. We also present a technique, based on model continuity, for better shape preservation. Finally, we present a data structure to encode in a progressive and lossless way the generated sequence. Tests with several datasets show that the proposed method produces smaller storage sizes and good quality approximations compared with other methods that also produce bounding objects.
A new lossless orthogonal simplification method for 3D objects based on bounding structures
S1524070314000228
The 2.1D sketch is a layered image representation, which assigns a partial depth ordering of over-segmented regions in a monocular image. This paper presents a global optimization framework for inferring the 2.1D sketch from a monocular image. Our method only uses over-segmented image regions (i.e., superpixels) as input, without any information of objects in the image, since (1) segmenting objects in images is a difficult problem on its own and (2) the objective of our proposed method is to be generic as an initial module useful for downstream high-level vision tasks. This paper formulates the inference of the 2.1D sketch using a global energy optimization framework. The proposed energy function consists of two components: (1) one is defined based on the local partial ordering relations (i.e., figure-ground) between two adjacent over-segmented regions, which captures the marginal information of the global partial depth ordering and (2) the other is defined based on the same depth layer relations among all the over-segmented regions, which groups regions of the same object to account for the over-segmentation issues. A hybrid evolution algorithm is utilized to minimize the global energy function efficiently. In experiments, we evaluated our method on a test data set containing 100 diverse real images from Berkeley segmentation data set (BSDS500) with the annotated ground truth. Experimental results show that our method can infer the 2.1D sketch with high accuracy.
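A toy Python version of the global energy is sketched below: a layer assignment is scored by how many pairwise figure-ground cues it violates and by how often regions believed to belong to the same object land on different layers, and a simple greedy hill climber stands in for the hybrid evolution algorithm. All cues and weights here are invented.

```python
# Toy global energy for layer assignment over superpixel regions.
import random

def energy(layers, order_cues, same_cues, w_order=1.0, w_same=0.5):
    # order_cues: (fg, bg) pairs where region fg should be closer than bg.
    e = sum(w_order for (fg, bg) in order_cues if layers[fg] <= layers[bg])
    # same_cues: region pairs that should share a depth layer.
    e += sum(w_same for (a, b) in same_cues if layers[a] != layers[b])
    return e

def optimize(n_regions, n_layers, order_cues, same_cues, iters=2000, seed=0):
    rng = random.Random(seed)
    layers = [rng.randrange(n_layers) for _ in range(n_regions)]
    best = energy(layers, order_cues, same_cues)
    for _ in range(iters):
        r = rng.randrange(n_regions)
        old, layers[r] = layers[r], rng.randrange(n_layers)
        e = energy(layers, order_cues, same_cues)
        if e <= best:
            best = e
        else:
            layers[r] = old                      # reject the move
    return layers, best

# 4 regions: 0 occludes 1, 1 occludes 2; regions 2 and 3 look like one object.
print(optimize(4, 3, order_cues=[(0, 1), (1, 2)], same_cues=[(2, 3)]))
```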
A global energy optimization framework for 2.1D sketch extraction from monocular images
S152407031400023X
In this paper, we present an efficient approach for parameterizing a genus-zero triangular mesh onto the sphere with an optimal radius in an as-rigid-as-possible (ARAP) manner, which extends the planar ARAP parametrization approach to the spherical domain. We analyze the smooth and discrete ARAP energy and formulate our spherical parametrization energy from the discrete ARAP energy. The solution is non-trivial, as the energy involves a large system of non-linear equations with additional spherical constraints. To this end, we propose a two-step iterative algorithm. In the first step, we adopt a local/global iterative scheme to calculate the parametrization coordinates. In the second step, we optimize the best approximating sphere on which the parametrization triangles can be embedded in a rigidity-preserving manner. Our algorithm is simple, robust, and efficient. Experimental results show that our approach provides almost isometric spherical parametrizations with the lowest rigidity distortion among state-of-the-art approaches.
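The local/global flavour of the iteration can be pictured with the heavily reduced Python sketch below: the local step finds per-cell rotations by SVD, and the global step is collapsed here to a naive averaging followed by reprojection onto the sphere. The real method instead solves a global sparse linear system under spherical constraints and also optimizes the radius; the mesh data below are synthetic.

```python
# Very reduced local/global ARAP-style iteration with spherical reprojection.
import numpy as np

def best_rotation(rest_edges, cur_edges):
    """Local step: optimal rotation by the Kabsch/SVD procedure."""
    u, _, vt = np.linalg.svd(cur_edges.T @ rest_edges)
    r = u @ vt
    if np.linalg.det(r) < 0:                 # avoid reflections
        u[:, -1] *= -1
        r = u @ vt
    return r

def arap_sphere_iteration(verts, rest, cells, radius=1.0):
    acc = np.zeros_like(verts)
    cnt = np.zeros(len(verts))
    for cell in cells:                       # e.g. triangles as index triples
        c_rest, c_cur = rest[list(cell)], verts[list(cell)]
        rot = best_rotation(c_rest - c_rest.mean(0), c_cur - c_cur.mean(0))
        target = c_cur.mean(0) + (c_rest - c_rest.mean(0)) @ rot.T
        acc[list(cell)] += target
        cnt[list(cell)] += 1
    verts = acc / cnt[:, None]               # "global" step (toy averaging)
    return radius * verts / np.linalg.norm(verts, axis=1, keepdims=True)

rest = np.random.default_rng(0).normal(size=(8, 3))     # synthetic rest pose
verts = rest / np.linalg.norm(rest, axis=1, keepdims=True)
cells = [(0, 1, 2), (2, 3, 4), (4, 5, 6), (6, 7, 0)]
for _ in range(5):
    verts = arap_sphere_iteration(verts, rest, cells)
print(verts)
```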
As-rigid-as-possible spherical parametrization