Multimodal Transfer: A Hierarchical Deep Convolutional Neural Network for Fast Artistic Style Transfer
Transferring artistic styles onto everyday photographs has become an extremely popular task in both academia and industry. Recently, offline training has replaced online iterative optimization, enabling nearly real-time stylization. When those stylization networks are applied directly to high-resolution images, however, the style of localized regions often appears less similar to the desired artistic style. This is because the transfer process fails to capture small, intricate textures and maintain correct texture scales of the artworks. Here we propose a multimodal convolutional neural network that takes into consideration faithful representations of both color and luminance channels, and performs stylization hierarchically with multiple losses of increasing scales. Compared to state-of-the-art networks, our network can also perform style transfer in nearly real-time by performing much more sophisticated training offline. By properly handling style and texture cues at multiple scales using several modalities, we can transfer not just large-scale, obvious style cues but also subtle, exquisite ones. That is, our scheme can generate results that are visually pleasing and more similar to multiple desired artistic styles with color and texture cues at multiple scales.
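Since the abstract stays at a high level, the following is only a minimal numpy sketch of a hierarchical, multi-scale Gram-matrix style loss of the kind it alludes to; the feature shapes, per-scale weights, and random stand-in activations are illustrative assumptions, not the paper's actual network or losses.

```python
import numpy as np

def gram_matrix(feats):
    """Channel-wise Gram matrix of a (C, H, W) feature map."""
    c, h, w = feats.shape
    f = feats.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def multiscale_style_loss(stylized_feats, style_feats, weights):
    """Sum of Gram-matrix losses over feature maps taken at several scales.

    stylized_feats, style_feats: lists of (C, H, W) arrays, one per scale.
    weights: per-scale loss weights (coarse structure vs. fine textures).
    """
    loss = 0.0
    for fs, ft, w in zip(stylized_feats, style_feats, weights):
        g_s, g_t = gram_matrix(fs), gram_matrix(ft)
        loss += w * np.sum((g_s - g_t) ** 2)
    return loss

# Toy example: three scales with random "features" standing in for CNN activations.
rng = np.random.default_rng(0)
scales = [(64, 64, 64), (128, 32, 32), (256, 16, 16)]
stylized = [rng.normal(size=s) for s in scales]
style = [rng.normal(size=s) for s in scales]
print(multiscale_style_loss(stylized, style, weights=[1.0, 0.5, 0.25]))
```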
Semantic Process Mining Tools: Core Building Blocks
Process mining aims at discovering new knowledge based on information hidden in event logs. Two important enablers for such analysis are powerful process mining techniques and the omnipresence of event logs in today's information systems. Most information systems supporting (structured) business processes (e.g. ERP, CRM, and workflow systems) record events in some form (e.g. transaction logs, audit trails, and database tables). Process mining techniques use event logs for all kinds of analysis, e.g., auditing, performance analysis, process discovery, etc. Although current process mining techniques/tools are quite mature, the analysis they support is somewhat limited because it is purely based on labels in logs. This means that these techniques cannot benefit from the actual semantics behind these labels which could cater for more accurate and robust analysis techniques. Existing analysis techniques are purely syntax oriented, i.e., much time is spent on filtering, translating, interpreting, and modifying event logs given a particular question. This paper presents the core building blocks necessary to enable semantic process mining techniques/tools. Although the approach is highly generic, we focus on a particular process mining technique and show how this technique can be extended and implemented in the ProM framework tool.
Kernel-Based Learning of Hierarchical Multilabel Classification Models
We present a kernel-based algorithm for hierarchical text classification where the documents are allowed to belong to more than one category at a time. The classification model is a variant of the Maximum Margin Markov Network framework, where the classification hierarchy is represented as a Markov tree equipped with an exponential family defined on the edges. We present an efficient optimization algorithm based on incremental conditional gradient ascent in single-example subspaces spanned by the marginal dual variables. The optimization is facilitated with a dynamic programming based algorithm that computes best update directions in the feasible set. Experiments show that the algorithm can feasibly optimize training sets of thousands of examples and classification hierarchies consisting of hundreds of nodes. Training of the full hierarchical model is as efficient as training independent SVM-light classifiers for each node. The algorithm’s predictive accuracy was found to be competitive with other recently introduced hierarchical multicategory or multilabel classification learning algorithms.
CSR and financial performance: The role of CSR awareness in the restaurant industry
Initiatives for corporate social responsibility (CSR) often have served business as a source of competitive advantage. However, despite firms’ attempts to capitalize on their CSR efforts, stakeholders’ low awareness of these initiatives makes it difficult to realize the full value of strategic CSR. In this study, we propose and test, in the context of the restaurant industry, whether CSR awareness, measured by CSR media coverage, moderates the relationship between social and financial performance. Our results support the notion that stakeholders’ CSR awareness affects the manner in which CSR initiatives can result in financial gain. Our research has implications both for firms’ investment policies in social initiatives and for highlighting the importance of communicating CSR initiatives to relevant stakeholders.
Effects of home-based exercise training for patients with chronic heart failure and sleep apnoea: a randomized comparison of two different programmes.
OBJECTIVE To evaluate the effects of home-based exercise for patients with chronic heart failure and sleep apnoea and to compare two different training programmes. DESIGN A randomized, prospective controlled trial. SETTING Department of Cardiology, University Hospital, Brazil. SUBJECTS Fifty chronic heart failure patients with sleep apnoea were randomized into three groups: Group 1 (aerobic training, n = 18), Group 2 (aerobic with strength training, n = 18), and Group 3 (untrained, n = 14). INTERVENTIONS The training programme for Groups 1 and 2 began with three supervised exercise sessions, after which the patients underwent three months of home-based exercise. Patients were followed by weekly telephone calls and were reviewed monthly. Group 3 had their physical activity status evaluated weekly by interview to make sure they remained untrained. MAIN OUTCOME MEASURES At baseline and after three months: cardiopulmonary exercise testing, isokinetic strength and endurance, the Minnesota Living with Heart Failure questionnaire, and polysomnography. Adherence was evaluated weekly. RESULTS Of the 50 patients enrolled in the study, 45 completed the programme. Clinical events: Group 1 (one death), Group 2 (one myocardial infarction), Group 3 (one death and two strokes). None were training related. The training groups showed improvement in all outcomes evaluated, and adherence was an important factor (Group 1 = 98.5% and Group 2 = 100.2%, P = 0.743). Untrained Group 3 demonstrated significant decreases or no change in measurements after three months without training. CONCLUSION Home-based exercise training is an important therapeutic strategy in chronic heart failure patients with sleep apnoea, and strength training resulted in a higher increase in muscle strength and endurance.
Necessity for surgical revision of defibrillator leads implanted long-term: causes and management.
BACKGROUND Defibrillator lead malfunction is a potential long-term complication in patients with an implantable cardioverter-defibrillator (ICD). The aim of this study was to determine the incidence and causes of lead malfunction necessitating surgical revision and to evaluate 2 approaches to treat lead malfunction. METHODS AND RESULTS We included 1317 consecutive patients with an ICD implanted at 3 European centers between 1993 and 2004. The types and causes of lead malfunction were recorded. If the integrity of the high-voltage part of the lead could be ascertained, an additional pace/sense lead was implanted. Otherwise, the patients received a new ICD lead. Of the 1317 patients, 38 experienced lead malfunction requiring surgical revision and 315 died during a median follow-up of 6.4 years. At 5 years, the cumulative incidence was 2.5% (95% confidence interval, 1.5 to 3.6). Lead malfunction resulted in inappropriate ICD therapies in 76% of the cases. Implantation of a pace/sense lead was feasible in 63%. Both lead revision strategies were similar with regard to lead malfunction recurrence (P=0.8). However, the cumulative incidence of recurrence was high (20% at 5 years; 95% confidence interval, 1.7 to 37.7). CONCLUSIONS ICD lead malfunction necessitating surgical revision becomes a clinically relevant problem in 2.5% of ICD recipients within 5 years. In selected cases, simple implantation of an additional pace/sense lead is feasible. Regardless of the chosen approach, the incidence of recurrent ICD lead-related problems after lead revision is 8-fold higher in this population.
Prognostic influence of Barrett's oesophagus and Helicobacter pylori infection on healing of erosive gastro-oesophageal reflux disease (GORD) and symptom resolution in non-erosive GORD: report from the ProGORD study.
BACKGROUND Adequacy of acid suppression is a critical factor influencing healing in gastro-oesophageal reflux disease (GORD). The European prospective study ProGORD was set up to determine the endoscopic and symptomatic progression of GORD over five years under routine care, after initial acid suppression with esomeprazole. We report on factors influencing endoscopic healing and symptom resolution during the acute treatment phase. METHODS Patients with symptoms suggestive of GORD underwent endoscopy and biopsies were obtained from the oesophagus for diagnosis of abnormalities, including Barrett's oesophagus (BO). Data from 6215 patients were included in the "intention to treat" analysis, 3245 diagnosed as having erosive reflux disease (ERD) and 2970 non-erosive reflux disease (NERD). ERD patients were treated with esomeprazole 40 mg for 4-8 weeks for endoscopic healing while NERD patients received 20 mg for 2-4 weeks for resolution of heartburn symptoms. RESULTS Endoscopic healing occurred overall in 87.7% of ERD patients although healing was significantly lower in those with more severe oesophagitis (76.9%) and in those with BO (72.4%), particularly in Helicobacter pylori negative BO patients (70.1%). Age, sex, and body mass index appeared to have no significant impact on healing. Complete heartburn resolution was reported by 70.4% of ERD patients and by 64.8% of NERD patients at the last visit. Only H pylori infection had a significant influence on complete heartburn resolution in the NERD group (68.1% and 63.7% for H pylori positive and H pylori negative, respectively; p = 0.03). CONCLUSION The presence of Barrett's mucosa, as well as severe mucosal damage, exerts a negative impact on healing. H pylori infection had a positive influence on healing in ERD patients with coexistent BO but no influence on those without BO.
Rejuvenating the Face: An Analysis of 100 Absorbable Suture Suspension Patients.
Background Absorbable suture suspension (Silhouette InstaLift, Sinclair Pharma, Irvine, CA) is a novel, minimally invasive system that utilizes a specially manufactured synthetic suture to help address the issues of facial aging, while minimizing the risks associated with historic thread lifting modalities. Objectives The purpose of the study was to assess the safety, efficacy, and patient satisfaction of the absorbable suture suspension system with regard to facial rejuvenation and midface volume enhancement. Methods The first 100 treated patients who underwent absorbable suture suspension, by the senior author, were critically evaluated. Subjects completed anonymous surveys evaluating their experience with the new modality. Results Survey results indicate that absorbable suture suspension is a tolerable (96%) and manageable (89%) treatment that improves age-related changes (83%), which was found to be in concordance with our critical review. Conclusions Absorbable suture suspension generates high patient satisfaction by nonsurgically lifting mid and lower face and neck skin and has the potential to influence numerous facets of aesthetic medicine. The study provides a greater understanding concerning patient selection, suture trajectory, and possible adjuvant therapies. Level of Evidence 4
Integration of computer-aided diagnosis/detection (CAD) results in a PACS environment using CAD–PACS toolkit and DICOM SR
Picture Archiving and Communication System (PACS) is a mature technology in health care delivery for daily clinical imaging service and data management. Computer-aided detection and diagnosis (CAD) utilizes computer methods to obtain quantitative measurements from medical images and clinical information to assist clinicians to assess a patient’s clinical state more objectively. CAD needs image input and related information from PACS to improve its accuracy; and PACS benefits from CAD results online and available at the PACS workstation as a second reader to assist physicians in the decision making process. Currently, these two technologies remain as two separate independent systems with only minimal system integration. This paper describes a universal method to integrate CAD results with PACS in its daily clinical environment. The method is based on Health Level 7 (HL7) and Digital imaging and communications in medicine (DICOM) standards, and Integrating the Healthcare Enterprise (IHE) workflow profiles. In addition, the integration method is Health Insurance Portability and Accountability Act (HIPAA) compliant. The paper presents (1) the clinical value and advantages of integrating CAD results in a PACS environment, (2) DICOM Structured Reporting formats and some important IHE workflow profiles utilized in the system integration, (3) the methodology using the CAD–PACS integration toolkit, and (4) clinical examples with step-by-step workflows of this integration.
Survey Research Methodology in Management Information Systems: An Assessment
Survey research is believed to be well understood and applied by MIS scholars. It has been applied for several years, it is well defined, and it has precise procedures which, when followed closely, yield valid and easily interpretable data. Our assessment of the use of survey research in the MIS field between 1980 and 1990 indicates that this perception is at odds with reality. Our analysis indicates that survey methodology is often misapplied and is plagued by five important weaknesses: (1) single method designs where multiple methods are needed, (2) unsystematic and often inadequate sampling procedures, (3) low response rates, (4) weak linkages between units of analysis and respondents, and (5) over reliance on cross-sectional surveys where longitudinal surveys are really needed. Our assessment also shows that the quality of survey research varies considerably among studies of different purposes: explanatory studies are of good quality overall, exploratory and descriptive studies are of moderate to poor quality. This article presents a general framework for classifying and examining survey research and uses this framework to assess, review and critique the usage of survey research conducted in the past decade in the MIS field. In an effort to improve the quality of survey research, this article makes specific recommendations that directly address the major problems highlighted in the review.
Nearest Neighbour based Synthesis of Quantum Boolean Circuits
Quantum Boolean circuit synthesis issues are becoming a key area of research in the domain of quantum computing. For gate-level synthesis, minterm-based and Reed-Muller canonical decomposition techniques are adopted as common approaches. Physical implementations of quantum circuits have inherent constraints, and hence the nearest neighbour template of input lines is gaining importance. In this work, we present a brief analysis of the various Fixed Polarity Reed-Muller (FPRM) expressions for a given quantum Boolean circuit and also introduce rules for the nearest neighbour template-based synthesis of these forms. The corresponding circuit costs are evaluated.
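For readers unfamiliar with FPRM expressions, the sketch below shows the standard fast GF(2) transform that produces the positive-polarity Reed-Muller spectrum of a truth table, and how a fixed polarity is handled by relabelling complemented inputs. The cost model (counting product terms) and the example function are illustrative assumptions; the paper's nearest-neighbour template rules are not reproduced here.

```python
def pprm_coefficients(truth_table):
    """Positive-polarity Reed-Muller spectrum via the fast GF(2) butterfly transform."""
    coeffs = list(truth_table)
    n, step = len(coeffs), 1
    while step < n:
        for i in range(0, n, 2 * step):
            for j in range(i, i + step):
                coeffs[j + step] ^= coeffs[j]   # per-variable [[1,0],[1,1]] butterfly
        step *= 2
    return coeffs

def fprm_coefficients(truth_table, polarity):
    """FPRM spectrum for a polarity mask (bit j = 1 means variable x_j appears complemented)."""
    # Complementing x_j permutes minterm indices by XOR with the polarity mask.
    relabelled = [truth_table[i ^ polarity] for i in range(len(truth_table))]
    return pprm_coefficients(relabelled)

# Toy 3-variable function; minterm index i encodes the assignment (x2 x1 x0) as the bits of i.
f = [0, 1, 1, 1, 1, 0, 0, 1]
costs = {p: sum(fprm_coefficients(f, p)) for p in range(8)}   # product-term count per polarity
best = min(costs, key=costs.get)
print(best, costs[best], fprm_coefficients(f, best))
```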
A 6.4-Gb/s CMOS SerDes core with feed-forward and decision-feedback equalization
A 4.9-6.4-Gb/s two-level SerDes ASIC I/O core employing a four-tap feed-forward equalizer (FFE) in the transmitter and a five-tap decision-feedback equalizer (DFE) in the receiver has been designed in 0.13-μm CMOS. The transmitter features a total jitter (TJ) of 35 ps p-p at a 10⁻¹² bit error rate (BER) and can output up to 1200 mVppd into a 100-Ω differential load. Low jitter is achieved through the use of an LC-tank-based VCO/PLL system that achieves a typical random jitter of 0.6 ps over a phase noise integration range from 6 MHz to 3.2 GHz. The receiver features a variable-gain amplifier (VGA) with gain ranging from -6 to +10 dB in ~1 dB steps, an analog peaking amplifier, and a continuously adapted DFE-based data slicer that uses a hybrid speculative/dynamic feedback architecture optimized for high-speed operation. The receiver system is designed to operate with a signal level ranging from 50 to 1200 mVppd. Error-free operation of the system has been demonstrated on lossy transmission line channels with over 32-dB loss at the Nyquist (1/2 Bd rate) frequency. The Tx/Rx pair with amortized PLL power consumes 290 mW of power from a 1.2-V supply while driving 600 mVppd and uses a die area of 0.79 mm².
A Cautionary Note About Policy Conflict Resolution
Policy-based network management promises to deliver a high degree of automation for military network management. A policy-based network management system provides the capability to express networking requirements in the form of policies and have them automatically realized in the network, without requiring further manual updates. However, as with every technology, these benefits come at the expense of certain obvious risks. The biggest risk associated with policy-based management is that the policies themselves can interact in undesirable ways, by causing conflicting actions to be taken by the management system. Thus it is essential that policies be analyzed for conflicts, and that mechanisms be put in place for determining how to resolve these conflicts. A number of policy conflict resolution techniques have been described in the literature; however, they often concentrate on the abstract problem of formal policy analysis and have very little to do with practical policy conflict resolution in live management systems. This paper provides an overview of the state of the art in policy conflict detection and resolution, followed by a critical look at what is really needed to resolve practical policy conflicts in network management systems. The premise of this paper is that application-specific policy conflict detection and resolution can mostly be addressed by careful policy writing (or re-writing), rather than via cumbersome and unrealistically complex policy conflict resolution solutions.
First-in-human phase 1/2a trial of CRLX101, a cyclodextrin-containing polymer-camptothecin nanopharmaceutical in patients with advanced solid tumor malignancies
Patients with advanced solid malignancies were enrolled to an open-label, single-arm, dose-escalation study, in which CRLX101 was administered intravenously over 60 min on two dosing schedules, initially weekly at 6, 12, and 18 mg/m2 and later bi-weekly at 12, 15, and 18 mg/m2. The maximum tolerated dose (MTD) was determined to be 15 mg/m2 bi-weekly, and an expansion phase 2a study was completed. Patient samples were obtained for pharmacokinetic (PK) and pharmacodynamic (PD) assessments. Response was evaluated per RECIST criteria v1.0 every 8 weeks. Sixty-two patients (31 male; median age 63 years, range 39–79) received treatment. Bi-weekly dosing was generally well tolerated, with myelosuppression being the dose-limiting toxicity. Among all phase 1/2a patients receiving the MTD (n = 44), the most common grade 3/4 adverse events were neutropenia and fatigue. Evidence of systemic plasma exposure to both the polymer-conjugated and unconjugated CPT was observed in all treated patients. Mean elimination unconjugated CPT Tmax values ranged from 17.7 to 24.5 h, and maximum plasma concentrations and areas under the curve were generally proportional to dose for both polymer-conjugated and unconjugated CPT. Best overall response was stable disease in 28 patients (64%) treated at the MTD and in 16 (73%) of a subset of NSCLC patients. Median progression-free survival (PFS) for patients treated at the MTD was 3.7 months and for the subset of NSCLC patients was 4.4 months. These combined phase 1/2a data demonstrate encouraging safety, pharmacokinetic, and efficacy results. Multinational phase 2 clinical development of CRLX101 across multiple tumor types is ongoing.
Study of uptake of cell penetrating peptides and their cargoes in permeabilized wheat immature embryos.
The uptake of five fluorescein labeled cell-penetrating peptides (Tat, Tat(2), mutated-Tat, peptide vascular endothelial-cadherin and transportan) was studied in wheat immature embryos. Interestingly, permeabilization treatment of the embryos with toluene/ethanol (1 : 20, v/v with permeabilization buffer) resulted in a remarkably higher uptake of cell-penetrating peptides, whereas nonpermeabilized embryos failed to show significant cell-penetrating peptide uptake, as observed under fluorescence microscope and by fluorimetric analysis. Among the cell-penetrating peptides investigated, Tat monomer (Tat) showed highest fluorescence uptake (4.2-fold greater) in permeabilized embryos than the nonpermeabilized embryos. On the other hand, mutated-Tat serving as negative control did not show comparable fluorescence levels even in permeabilized embryos. A glucuronidase histochemical assay revealed that Tat peptides can efficiently deliver functionally active beta-glucuronidase (GUS) enzyme in permeabilized immature embryos. Tat(2)-mediated GUS enzyme delivery showed the highest number of embryos with GUS uptake (92.2%) upon permeabilization treatment with toluene/ethanol (1 : 40, v/v with permeabilization buffer) whereas only 51.8% of nonpermeabilized embryos showed Tat(2)-mediated GUS uptake. Low temperature, endocytosis and macropinocytosis inhibitors reduced delivery of the Tat(2)-GUS enzyme cargo complex. The results suggest that more than one mechanism of cell entry is involved simultaneously in cell-penetrating peptide-cargo uptake in wheat immature embryos. We also studied Tat(2)-plasmid DNA (carrying Act-1GUS) complex formation by gel retardation assay, DNaseI protection assay and confocal laser microscopy. Permeabilized embryos transfected with Tat(2)-plasmid DNA complex showed 3.3-fold higher transient GUS gene expression than the nonpermeabilized embryos. Furthermore, addition of cationic transfecting agent Lipofectamine 2000 to the Tat(2)-plasmid DNA complex resulted in 1.5-fold higher transient GUS gene expression in the embryos. This is the first report demonstrating translocation of various cell-penetrating peptides and their potential to deliver macromolecules in wheat immature embryos in the presence of a cell membrane permeabilizing agent.
KARL JASPERS ON THE FUTURE OF GERMANY
This paper presents a review of a book authored by a philosopher who was a resident of Germany during the Nazi regime. It briefly presents Jaspers's views on the government of post-war Germany.
A two-stage intelligent search algorithm for the two-dimensional strip packing problem
This paper presents a two-stage intelligent search algorithm for a two-dimensional strip packing problem without the guillotine constraint. In the first stage, a heuristic algorithm is proposed, based on a simple scoring rule that selects one rectangle from all rectangles to be packed for a given space. In the second stage, a local search and a simulated annealing algorithm are combined to improve solutions of the problem. In particular, a multi-start strategy is designed to enhance the search capability of the simulated annealing algorithm. Extensive computational experiments on a wide range of benchmark problems, from zero-waste to non-zero-waste instances, are reported. Computational results obtained in less than 60 seconds of computation time show that the proposed algorithm outperforms the best recently reported algorithms on average, and it performs particularly well for large instances.
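As a rough illustration of the two-stage idea (a constructive placement heuristic wrapped in simulated annealing over the packing order), here is a hedged Python sketch. It uses a simple shelf-placement rule as a stand-in for the paper's scoring rule and a single annealing run rather than the multi-start strategy, so it shows the structure of the approach rather than the paper's actual algorithm.

```python
import math
import random

def shelf_pack(rects, strip_width):
    """Greedy shelf packing: place rectangles left-to-right on shelves.

    rects: list of (w, h) in the order they are to be packed.
    Returns the total strip height used (the quantity to minimize).
    """
    shelf_y, shelf_h, x = 0, 0, 0
    for w, h in rects:
        if x + w > strip_width:          # current shelf is full: open a new one
            shelf_y += shelf_h
            shelf_h, x = 0, 0
        x += w
        shelf_h = max(shelf_h, h)
    return shelf_y + shelf_h

def simulated_annealing(rects, strip_width, iters=20000, t0=10.0, alpha=0.9995):
    """Anneal over packing sequences; one move swaps two rectangles in the order."""
    seq = rects[:]
    best = cur = shelf_pack(seq, strip_width)
    best_seq, t = seq[:], t0
    for _ in range(iters):
        i, j = random.sample(range(len(seq)), 2)
        seq[i], seq[j] = seq[j], seq[i]
        cand = shelf_pack(seq, strip_width)
        if cand <= cur or random.random() < math.exp((cur - cand) / t):
            cur = cand
            if cur < best:
                best, best_seq = cur, seq[:]
        else:                             # rejected: undo the swap
            seq[i], seq[j] = seq[j], seq[i]
        t *= alpha
    return best, best_seq

random.seed(1)
rects = [(random.randint(1, 10), random.randint(1, 10)) for _ in range(40)]
print(simulated_annealing(rects, strip_width=20)[0])
```

A multi-start variant would simply repeat `simulated_annealing` from several shuffled initial orders and keep the best packing found.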
Calibration Requirements and Procedures for Augmented Reality
Augmented reality entails the use of models and their associated renderings to supplement information in a real scene. In order for this information to be relevant or meaningful, the models must be positioned and displayed in such a way that they blend into the real world in terms of alignments, perspectives, illuminations, etc. For practical reasons the information necessary to obtain this realistic blending cannot be known a priori, and cannot be hard-wired into a system. Instead a number of calibration procedures are necessary so that the location and parameters of each of the system components are known. In this paper we identify the calibration steps necessary to build a complete computer model of the real world and then, using the augmented reality system developed at ECRC (Grasp) as an example, we describe each of the calibration processes.
Personalized PageRank Estimation and Search: A Bidirectional Approach
We present new algorithms for Personalized PageRank estimation and Personalized PageRank search. First, for the problem of estimating Personalized PageRank (PPR) from a source distribution to a target node, we present a new bidirectional estimator with simple yet strong guarantees on correctness and performance, and 3x to 8x speedup over existing estimators in experiments on a diverse set of networks. Moreover, it has a clean algebraic structure which enables it to be used as a primitive for the Personalized PageRank Search problem: Given a network like Facebook, a query like "people named John," and a searching user, return the top nodes in the network ranked by PPR from the perspective of the searching user. Previous solutions either score all nodes or score candidate nodes one at a time, which is prohibitively slow for large candidate sets. We develop a new algorithm based on our bidirectional PPR estimator which identifies the most relevant results by sampling candidates based on their PPR; this is the first solution to PPR search that can find the best results without iterating through the set of all candidate results. Finally, by combining PPR sampling with sequential PPR estimation and Monte Carlo, we develop practical algorithms for PPR search, and we show via experiments that our algorithms are efficient on networks with billions of edges.
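The bidirectional estimator described above combines a reverse (backward-push) computation from the target with forward random walks from the source. The sketch below illustrates that combination on a toy directed graph; the parameter values (alpha, the residual threshold, the walk count) and the simplified push loop are assumptions for illustration, not the paper's exact algorithm or guarantees.

```python
import random
from collections import defaultdict

def backward_push(in_nbrs, out_deg, target, alpha=0.2, r_max=1e-4):
    """Reverse push from the target: returns PPR estimates p and residuals r."""
    p, r = defaultdict(float), defaultdict(float)
    r[target] = 1.0
    queue = [target]
    while queue:
        v = queue.pop()
        if r[v] <= r_max:
            continue
        rv, r[v] = r[v], 0.0
        p[v] += alpha * rv
        for u in in_nbrs.get(v, ()):
            r[u] += (1 - alpha) * rv / out_deg[u]
            if r[u] > r_max:
                queue.append(u)
    return p, r

def random_walk_endpoint(out_nbrs, source, alpha=0.2):
    """Endpoint of one alpha-terminated walk; its law is the PPR distribution of source."""
    v = source
    while random.random() > alpha and out_nbrs.get(v):
        v = random.choice(out_nbrs[v])
    return v

def bidirectional_ppr(out_nbrs, in_nbrs, out_deg, source, target,
                      alpha=0.2, r_max=1e-4, num_walks=10000):
    """Estimate PPR(source, target) as p[source] plus the average residual at walk endpoints."""
    p, r = backward_push(in_nbrs, out_deg, target, alpha, r_max)
    walk_term = sum(r[random_walk_endpoint(out_nbrs, source, alpha)]
                    for _ in range(num_walks)) / num_walks
    return p[source] + walk_term

# Toy directed graph.
edges = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 0), (1, 3)]
out_nbrs, in_nbrs, out_deg = defaultdict(list), defaultdict(list), defaultdict(int)
for u, v in edges:
    out_nbrs[u].append(v)
    in_nbrs[v].append(u)
    out_deg[u] += 1
random.seed(0)
print(bidirectional_ppr(out_nbrs, in_nbrs, out_deg, source=0, target=3))
```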
Verified by Visa and MasterCard SecureCode: Or, How Not to Design Authentication
Banks worldwide are starting to authenticate online card transactions using the ‘3-D Secure’ protocol, which is branded as Verified by Visa and MasterCard SecureCode. This has been partly driven by the sharp increase in online fraud that followed the deployment of EMV smart cards for cardholder-present payments in Europe and elsewhere. 3-D Secure has so far escaped academic scrutiny; yet it might be a textbook example of how not to design an authentication protocol. It ignores good design principles and has significant vulnerabilities, some of which are already being exploited. Also, it provides a fascinating lesson in security economics. While other single sign-on schemes such as OpenID, InfoCard and Liberty came up with decent technology they got the economics wrong, and their schemes have not been adopted. 3-D Secure has lousy technology, but got the economics right (at least for banks and merchants); it now boasts hundreds of millions of accounts. We suggest a path towards more robust authentication that is technologically sound and where the economics would work for banks, merchants and customers – given a gentle regulatory nudge.
Community Evaluation and Exchange of Word Vectors at wordvectors.org
Vector space word representations are useful for many natural language processing applications. The diversity of techniques for computing vector representations and the large number of evaluation benchmarks makes reliable comparison a tedious task both for researchers developing new vector space models and for those wishing to use them. We present a website and suite of offline tools that facilitate evaluation of word vectors on standard lexical semantics benchmarks and permit exchange and archival by users who wish to find good vectors for their applications. The system is accessible at: www.wordvectors.org.
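The evaluation protocol behind such lexical-semantics benchmarks is typically cosine similarity between word vectors compared against human ratings via Spearman correlation. The sketch below shows that protocol with toy vectors and a made-up three-pair benchmark; the site's real benchmarks and hosted vectors are not reproduced here.

```python
import numpy as np
from scipy.stats import spearmanr

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def evaluate_word_vectors(vectors, benchmark):
    """Spearman correlation between model similarities and human ratings.

    vectors: dict word -> np.ndarray
    benchmark: list of (word1, word2, human_score); pairs with out-of-vocabulary words are skipped.
    """
    model_scores, human_scores = [], []
    for w1, w2, score in benchmark:
        if w1 in vectors and w2 in vectors:
            model_scores.append(cosine(vectors[w1], vectors[w2]))
            human_scores.append(score)
    rho, _ = spearmanr(model_scores, human_scores)
    return rho, len(model_scores)

# Toy example with random vectors and an invented three-pair benchmark.
rng = np.random.default_rng(42)
vocab = ["cat", "dog", "car", "truck"]
vectors = {w: rng.normal(size=50) for w in vocab}
benchmark = [("cat", "dog", 8.5), ("car", "truck", 8.0), ("cat", "car", 2.0)]
print(evaluate_word_vectors(vectors, benchmark))
```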
External validation of QDSCORE(®) for predicting the 10-year risk of developing Type 2 diabetes.
BACKGROUND A small number of risk scores for the risk of developing diabetes have been produced but none has yet been widely used in clinical practice in the UK. The aim of this study is to independently evaluate the performance of QDSCORE(®) for predicting the 10-year risk of developing diagnosed Type 2 diabetes in a large independent UK cohort of patients from general practice. METHODS A prospective cohort study of 2.4 million patients (13.6 million person years) aged between 25 and 79 years from 364 practices from the UK contributing to The Health Improvement Network (THIN) database between 1 January 1993 and 20 June 2008. RESULTS QDSCORE(®) showed good performance when evaluated on a large external data set. The score is well calibrated, with reasonable agreement between observed and predicted outcomes. There is a slight underestimation of risk in both men and women aged 60 years and above, although the magnitude of underestimation is small. The ability of the score to differentiate between those who develop diabetes and those who do not is good, with values for the area under the receiver operating characteristic curve exceeding 0.8 for both men and women. Performance data in this external validation are consistent with those reported in the development and internal validation of the risk score. CONCLUSIONS QDSCORE(®) has been shown to be a useful tool to predict the 10-year risk of developing Type 2 diabetes in the UK.
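As a hedged illustration of the two properties checked in such an external validation (discrimination via the area under the ROC curve, and calibration via observed-versus-predicted agreement by risk decile), here is a small Python sketch on synthetic data; it does not use THIN data or the QDSCORE model itself.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def discrimination_and_calibration(y_true, y_pred_risk, n_groups=10):
    """AUC plus observed vs. predicted event rate by decile of predicted risk."""
    auc = roc_auc_score(y_true, y_pred_risk)
    order = np.argsort(y_pred_risk)
    groups = np.array_split(order, n_groups)
    # One (mean predicted risk, observed event rate) pair per decile.
    table = [(float(np.mean(y_pred_risk[g])), float(np.mean(y_true[g]))) for g in groups]
    return auc, table

# Toy cohort: predicted risks loosely tied to simulated 10-year onset.
rng = np.random.default_rng(0)
risk = rng.uniform(0, 0.4, size=5000)
events = (rng.random(5000) < risk).astype(int)   # outcome probability equals the "true" risk
auc, table = discrimination_and_calibration(events, risk)
print(round(auc, 3), table[:2])
```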
DifNet: Semantic Segmentation by Diffusion Networks
Deep Neural Networks (DNNs) have recently shown state-of-the-art performance on semantic segmentation tasks; however, they still suffer from poor boundary localization and spatially fragmented predictions. The difficulty lies in the requirement of making dense predictions from a long-path model all at once, since details are hard to keep when data goes through deeper layers. Instead, in this work, we decompose this difficult task into two relatively simple sub-tasks: seed detection, which produces initial predictions without the need for wholeness and preciseness, and similarity estimation, which measures the possibility that any two nodes belong to the same class without the need to know which class they are. We use one branch network for each sub-task, and apply a cascade of random walks based on hierarchical semantics to approximate a complex diffusion process which propagates seed information to the whole image according to the estimated similarities. The proposed DifNet consistently produces improvements over the baseline models with the same depth and an equivalent number of parameters, and also achieves promising performance on the Pascal VOC and Pascal Context datasets. Our DifNet is trained end-to-end without complex loss functions.
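A minimal numpy sketch of the seed-propagation idea follows (not DifNet's learned, hierarchical similarities): per-class seed scores are diffused over a row-normalized similarity graph by repeated random-walk steps with a restart to the seeds. The similarity matrix, restart weight, and toy cluster structure are illustrative assumptions.

```python
import numpy as np

def diffuse_seeds(similarity, seed_scores, alpha=0.9, steps=50):
    """Propagate per-class seed scores over a row-normalized similarity graph.

    similarity: (N, N) non-negative affinities between nodes (e.g. pixels or regions).
    seed_scores: (N, C) initial per-class predictions from a seed branch.
    Returns diffused (N, C) scores via y <- alpha * W y + (1 - alpha) * seeds.
    """
    w = similarity / similarity.sum(axis=1, keepdims=True)   # row-stochastic transition matrix
    y = seed_scores.copy()
    for _ in range(steps):
        y = alpha * (w @ y) + (1 - alpha) * seed_scores
    return y

# Toy example: 6 nodes, 2 classes, two tight clusters; one seeded node per cluster.
sim = np.array([[1, 1, 1, 0, 0, 0],
                [1, 1, 1, 0, 0, 0],
                [1, 1, 1, 0, 0, 0],
                [0, 0, 0, 1, 1, 1],
                [0, 0, 0, 1, 1, 1],
                [0, 0, 0, 1, 1, 1]], dtype=float)
seeds = np.zeros((6, 2))
seeds[0, 0] = 1.0   # seed for class 0 in cluster A
seeds[3, 1] = 1.0   # seed for class 1 in cluster B
print(diffuse_seeds(sim, seeds).argmax(axis=1))   # -> [0 0 0 1 1 1]
```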
Scalable and Efficient Bayes-Adaptive Reinforcement Learning Based on Monte-Carlo Tree Search
Bayesian planning is a formally elegant approach to learning optimal behaviour under model uncertainty, trading off exploration and exploitation in an ideal way. Unfortunately, planning optimally in the face of uncertainty is notoriously taxing, since the search space is enormous. In this paper we introduce a tractable, sample-based method for approximate Bayes-optimal planning which exploits Monte-Carlo tree search. Our approach avoids expensive applications of Bayes rule within the search tree by sampling models from current beliefs, and furthermore performs this sampling in a lazy manner. This enables it to outperform previous Bayesian model-based reinforcement learning algorithms by a significant margin on several well-known benchmark problems. As we show, our approach can even work in problems with an infinite state space that lie qualitatively out of reach of almost all previous work in Bayesian exploration.
Magic moments in situated mediascapes
In this paper, we describe the situation and factors that lead to "Magic Moments" in mediascape experiences and discuss the implications for how to design these magic moments without them appearing contrived. We introduce a framework for Experience Design and describe a set of design heuristics which should extend the field of HCI to encompass aspects of user experience, mobility, the outside environment and facets of the new medium. The distinctive feature of mediascapes is their link to the physical environment. The findings are primarily based on analysis of public reaction to Riot! 1831, a mediascape in the form of an interactive drama which is based on the actual riots that took place in a public square in Bristol, England in 1831.
Network coordination following discharge from psychiatric inpatient treatment: a study protocol
BACKGROUND Inadequate discharge planning following inpatient stays is a major issue in the provision of a high standard of care for patients who receive psychiatric treatment. Studies have shown that half of patients who had no pre-discharge contact with outpatient services do not keep their first outpatient appointment. Additionally, discharged patients who are not well linked to their outpatient care networks are at twice the risk of re-hospitalization. The aim of this study is to investigate whether the Post-Discharge Network Coordination Program at ipw has a demonstrably significant impact on the frequency and duration of patient re-hospitalization. Subjects are randomly assigned to either the treatment group or the control group. The treatment group participates in the Post-Discharge Network Coordination Program. The control group receives treatment as usual with no additional social support. Further outcome variables include: social support, change in psychiatric symptoms, quality of life, and independence in daily functioning. METHODS/DESIGN The study is conducted as a randomized controlled trial. Subjects are randomly assigned to either the control group or the treatment group. Computer-generated block randomization is used to ensure both groups have the same number of subjects. Stratified block randomization is used for the ICD-10 F1 psychiatric diagnosis. Approximately 160 patients are recruited in two care units at Psychiatrie-Zentrum Hard Embrach and two care units at Klinik Schlosstal Winterthur. DISCUSSION The proposed post-discharge network coordination program intervenes during the critical post-discharge period. It focuses primarily on promoting the integration of the patients into their social networks, and additionally on coordinating outpatient care and addressing concerns of daily life. TRIAL REGISTRATION ISRCTN ISRCTN58280620.
Representation Learning over Dynamic Graphs
How can we effectively encode evolving information over dynamic graphs into low-dimensional representations? In this paper, we propose DyRep – an inductive deep representation learning framework that learns a set of functions to efficiently produce low-dimensional node embeddings that evolve over time. The learned embeddings drive the dynamics of two key processes, namely communication and association between nodes in dynamic graphs. These processes exhibit complex nonlinear dynamics that evolve at different time scales and subsequently contribute to the update of node embeddings. We employ a time-scale dependent multivariate point process model to capture these dynamics. We devise an efficient unsupervised learning procedure and demonstrate that our approach significantly outperforms representative baselines on two real-world datasets for the problems of dynamic link prediction and event time prediction.
ExFuse: Enhancing Feature Fusion for Semantic Segmentation
Modern semantic segmentation frameworks usually combine low-level and high-level features from pre-trained backbone convolutional models to boost performance. In this paper, we first point out that a simple fusion of low-level and high-level features could be less effective because of the gap in semantic levels and spatial resolution. We find that introducing semantic information into low-level features and high-resolution details into high-level features is more effective for the later fusion. Based on this observation, we propose a new framework, named ExFuse, to bridge the gap between low-level and high-level features, thus significantly improving the segmentation quality by 4.0% in total. Furthermore, we evaluate our approach on the challenging PASCAL VOC 2012 segmentation benchmark and achieve 87.9% mean IoU, which outperforms the previous state-of-the-art results.
Proficiency in Information Communication Technology and its Use: A Survey among Clinical Students in a Ghanaian Medical School
Insufficient prior knowledge about the array of skills possessed by medical students in information communication technology accounts for failed efforts at incorporating ICT into their academic work. The aim of this study is to assess information and communication technology skills and their use among clinical students undergoing medical training in northern Ghana. A longitudinal questionnaire was administered to 175 clinical year (1st, 2nd, and 3rd year) medical students aged between 22 and 29 years (mean ± standard deviation: 25.0 ± 1.26 years). Of the total 175 questionnaires administered, 140 (82.0%) students returned their questionnaires. Questionnaires from 5 students were incomplete, leaving 135 complete and analyzable questionnaires and resulting in a 77.0% response rate. Of the remaining 135 students, 55.6% of the respondents were proficient in the use of ICT-related tools, 37.8% were using ICT resources for their academic work, and 85.2% were using such resources for social purposes, while use of ICT for academic work by gender was 88.2% for males and 11.8% for females. By gender, 49.0% of males and 52.2% of females were using ICT for social purposes. The study revealed high and low levels of proficiency in ICT depending upon the ICT task to be performed, and concluded that a good curriculum designed to encourage ICT use by students and develop in them a multiplicity of skills, coupled with a teaching methodology that is student-centred and encourages student engagement in active cognitive activities involving the use of ICTs, may help stem this skewedness in proficiency.
Lipase in aqueous-polar organic solvents: activity, structure, and stability.
Studying alterations in the biophysical and biochemical behavior of enzymes in the presence of organic solvents, and the underlying cause(s), has important implications in biotechnology. We investigated the effects of aqueous solutions of polar organic solvents on the ester hydrolytic activity, structure, and stability of a lipase. Relative activity of the lipase monotonically decreased with increasing concentration of acetone, acetonitrile, and DMF, but increased at lower concentrations (up to ~20% v/v) of dimethylsulfoxide, isopropanol, and methanol. None of the organic solvents caused any appreciable structural change, as evident from circular dichroism and NMR studies, and thus the data do not support any significant role of enzyme denaturation in the activity change. Changes in 2D [15N, 1H]-HSQC chemical shifts suggested that all the organic solvents preferentially localize to a hydrophobic patch in the active-site vicinity, and no chemical shift perturbation was observed for residues present in the protein's core. This suggests that the activity alteration might be directly linked to a change in the active-site environment only. All organic solvents decreased the apparent binding of substrate to the enzyme (increased Km); however, they significantly enhanced the kcat. The melting temperature (Tm) of the lipase, measured by circular dichroism and differential scanning calorimetry, was altered in all solvents, albeit to a variable extent. Interestingly, although the effect of all organic solvents on the various properties of the lipase is qualitatively similar, our study suggests that the magnitudes of the effects do not appear to follow bulk solvent properties such as polarity, and that the solvent effects are apparently dictated by specific and local interactions of solvent molecule(s) with the protein.
Being Immersed in Social Networking Environment: Facebook Groups, Uses and Gratifications, and Social Outcomes
A Web survey of 1,715 college students was conducted to examine Facebook Groups users' gratifications and the relationship between users' gratifications and their political and civic participation offline. A factor analysis revealed four primary needs for participating in groups within Facebook: socializing, entertainment, self-status seeking, and information. These gratifications vary depending on user demographics such as gender, hometown, and year in school. The analysis of the relationship between users' needs and civic and political participation indicated that, as predicted, informational uses were more correlated to civic and political action than to recreational uses.
PostBL: Post-mesh Boundary Layer Mesh Generation Tool
A boundary layer mesh is a mesh with a dense element distribution in the normal direction along specific boundaries. PostBL is a utility to generate boundary layer elements on an already existing mesh model. PostBL supports the creation of hexahedral, prism, quad, and tri boundary layer elements. It is a part of MeshKit, an open-source library for mesh generation functionality. Generally, boundary layer mesh generation is a pre-meshing process; in this effort, we start from a model that has already been meshed. Boundary layer elements can be generated along the entire skin or along selected exterior or internal surface boundaries. MeshKit uses a graph-based approach for representing meshing operations; PostBL is one such meshing operation. It can be coupled to other meshing operations like Jaal, NetGen, TetGen, CAMAL, and custom meshing tools like RGG. Simple examples demonstrating the generation of boundary layers on different mesh types, as well as the OECD Vattenfall T-Junction benchmark hexahedral mesh, are presented.
Protein timing and its effects on muscular hypertrophy and strength in individuals engaged in weight-training
The purpose of this review was to determine whether past research provides conclusive evidence about the effects of the type and timing of ingestion of specific sources of protein by those engaged in resistance weight training. Two essential, nutrition-related tenets need to be followed by weightlifters to maximize muscle hypertrophy: the consumption of 1.2-2.0 g protein·kg⁻¹ of body weight, and ≥44-50 kcal·kg⁻¹ of body weight. Researchers have tested the effects of the timing of protein supplement ingestion on various physical changes in weightlifters. In general, protein supplementation pre- and post-workout increases physical performance, training session recovery, lean body mass, muscle hypertrophy, and strength. Specific gains differ, however, based on protein type and amount. Studies on the timing of consumption of milk have indicated that fat-free milk post-workout was effective in promoting increases in lean body mass, strength, and muscle hypertrophy, and decreases in body fat. The leucine content of a protein source has an impact on protein synthesis and affects muscle hypertrophy. Consumption of 3-4 g of leucine is needed to promote maximum protein synthesis. An ideal supplement following resistance exercise should contain whey protein that provides at least 3 g of leucine per serving. A combination of a fast-acting carbohydrate source such as maltodextrin or glucose should be consumed with the protein source, as leucine cannot modulate protein synthesis as effectively without the presence of insulin. Such a supplement post-workout would be most effective in increasing muscle protein synthesis, resulting in greater muscle hypertrophy and strength. In contrast, the consumption of essential amino acids and dextrose appears to be most effective at evoking protein synthesis prior to rather than following resistance exercise. To further enhance muscle hypertrophy and strength, a resistance weight-training program of at least 10-12 weeks with compound movements for both upper and lower body exercises should be followed.
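A tiny worked example of the intake guidelines quoted above, assuming an illustrative 80 kg lifter (the body mass is an assumption; the per-kilogram ranges are taken from the review):

```python
def daily_targets(body_mass_kg, protein_per_kg=(1.2, 2.0), kcal_per_kg=(44, 50)):
    """Daily protein (g) and energy (kcal) ranges from the per-kg guidelines in the review."""
    protein = tuple(round(body_mass_kg * p, 1) for p in protein_per_kg)
    energy = tuple(round(body_mass_kg * k) for k in kcal_per_kg)
    return protein, energy

# Example: an 80 kg lifter.
protein_range, energy_range = daily_targets(80)
print(protein_range)   # (96.0, 160.0) g protein per day
print(energy_range)    # (3520, 4000) kcal per day
```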
Achieving Human Parity in Conversational Speech Recognition
Conversational speech recognition has served as a flagship speech recognition task since the release of the Switchboard corpus in the 1990s. In this paper, we measure the human error rate on the widely used NIST 2000 test set, and find that our latest automated system has reached human parity. The error rate of professional transcribers is 5.9% for the Switchboard portion of the data, in which newly acquainted pairs of people discuss an assigned topic, and 11.3% for the CallHome portion where friends and family members have open-ended conversations. In both cases, our automated system establishes a new state of the art, and edges past the human benchmark, achieving error rates of 5.8% and 11.0%, respectively. The key to our system’s performance is the use of various convolutional and LSTM acoustic model architectures, combined with a novel spatial smoothing method and lattice-free MMI acoustic training, multiple recurrent neural network language modeling approaches, and a systematic use of system combination.
Influence of adding nano-graphite powders on the microstructure and gas hydrogen storage properties of ball-milled Mg90Al10 alloys
A small quantity of nano-graphite powder was introduced into the Mg90Al10 alloy to obtain ball-milled Mg90Al10 + x wt% graphite (x = 0, 1, 3, 5 and 8) composites. The influence of adding nano-graphite powder on the microstructure and hydrogen storage performance was studied. The microstructure analysis showed that adding nano-graphite powder not only promotes the formation of the Mg17Al12 phase, but also increases the number of Al atoms that enter the crystal lattice of Mg. The performance tests indicated that adding nano-graphite powder can improve the activation performance and enhance the hydrogenation/dehydrogenation kinetics and capacity. It is also beneficial in lowering the thermodynamic stability of the hydride. The dehydrogenation enthalpy (ΔHde) and activation energy (Ede(a)) decrease to different degrees after adding nano-graphite powder. The dehydrogenation enthalpy of the composites is 75.43, 74.63, 73.20, 72.68 and 71.52 kJ mol⁻¹ H₂ when the value of x is 0, 1, 3, 5 and 8, respectively. The dehydrogenation activation energy of the composites is 162.06, 161.26, 159.07, 156.15 and 158.26 kJ mol⁻¹ H₂ when the value of x is 0, 1, 3, 5 and 8, respectively.
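As a hedged illustration of how an apparent activation energy of this magnitude can be extracted, here is a generic Arrhenius fit on synthesized rate constants; the paper's own kinetic analysis and measured data are not reproduced, and the temperatures and pre-exponential factor below are invented for the example.

```python
import numpy as np

R = 8.314  # gas constant, J mol^-1 K^-1

def arrhenius_activation_energy(temps_k, rate_constants):
    """Apparent activation energy (kJ/mol) from an Arrhenius fit ln k = ln A - Ea/(R T)."""
    slope, _ = np.polyfit(1.0 / np.asarray(temps_k), np.log(rate_constants), 1)
    return -slope * R / 1000.0

# Illustrative data: rates synthesized with Ea = 160 kJ/mol (close to the reported values).
temps = np.array([573.0, 598.0, 623.0, 648.0])
ea_true = 160e3
k = 1e9 * np.exp(-ea_true / (R * temps))
print(round(arrhenius_activation_energy(temps, k), 1))   # ~160.0 kJ/mol
```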
Gandhi and His Critics
Written for all those whose curiosity about Gandhi was sparked by Richard Attenborough's film, or for anyone who would like to know more about this strangely enigmatic leader, this is a fascinating in-depth study of Gandhi's personality and thought. The book explores the evolution of Gandhi's ideas, his attitudes toward religion, the racial problem, and the caste system, his conflict with the British, his approach to Muslim separatism and the division of India, his attitude toward social and economic change, his doctrine of nonviolence, and other key issues. Throughout, the author sheds new light on the mighty figure who initiated revolutions against racism, colonialism, and violence--three major revolutions of our time.
Impact of nitazoxanide on sustained virologic response in Egyptian patients with chronic hepatitis C genotype 4: a double-blind placebo-controlled trial.
BACKGROUND Nitazoxanide, approved for the treatment of Cryptosporidium parvum and Giardia lamblia, was found to inhibit hepatitis C virus replication. AIM The aim of this study was to assess the impact of nitazoxanide as an add-on therapy to pegylated interferon α-2a and ribavirin on sustained virologic response (SVR) in patients with chronic hepatitis C. PATIENTS AND METHODS A total of 200 patients with chronic hepatitis C were enrolled in the study, assigned randomly in a 1:1 ratio to two groups: group A (placebo group) and group B (nitazoxanide group). Five patients withdrew from the study after they signed the consent form. A total of 195 patients were evaluated: 97 patients in group A versus 98 patients in group B at a dose of 500 mg twice daily. Placebo and nitazoxanide were administered as an add-on therapy to pegylated interferon α-2a plus ribavirin following a 12-week lead-in phase. SVR was evaluated. Statistical analysis was carried out using the SPSS software. RESULTS The mean age of the patients in group A was 46.5 versus 45.7 years in group B. In group A, 85 out of 97 (87.6%) patients were men and in group B, 84 out of 98 (85.7%) patients were men. In group A, 59 out of 97 (60.82%) patients achieved an SVR versus 57 out of 98 (58.16%) patients in group B (P=0.70); this difference was not significant. CONCLUSION Our data did not show any significant impact of nitazoxanide on SVR.
Prevention of selenite-induced cataractogenesis by rutin in Wistar rats
PURPOSE To investigate whether rutin retards selenite-induced cataractogenesis in Wistar rat pups. METHODS On postpartum day ten, Group I rat pups received an intraperitoneal injection of saline. Group II and III rat pups received a subcutaneous injection of sodium selenite. Group III also received an intraperitoneal injection of rutin once daily on postpartum days 9-14. Both eyes of each pup were examined from day 16 up to postpartum day 30. After sacrifice, extricated pup lenses were analyzed for mean activities of catalase, superoxide dismutase, glutathione peroxidase, glutathione S-transferase, and glutathione reductase. In addition, the mean concentrations of reduced glutathione (GSH) and of malondialdehyde were analyzed in samples of lenses and hemolysate. RESULTS There was dense lenticular opacification in all of Group II, minimal opacification in 33.3% of Group III, no opacification in 66.7% of Group III, and no opacification in Group I. Significantly lower mean activities of lenticular antioxidant enzymes were noted in Group II, compared to Group I and III. Significantly lower mean concentrations of GSH and higher mean concentrations of malondialdehyde were noted in samples of hemolysate and lens from Group II, compared to the values in Group I and III. CONCLUSION Rutin prevents experimental selenite-induced cataractogenesis in rat pups, possibly by preventing depletion of antioxidant enzymes and of GSH, and by inhibiting lipid peroxidation.
Concurrent partnerships and the spread of HIV.
OBJECTIVE To examine how concurrent partnerships amplify the rate of HIV spread, using methods that can be supported by feasible data collection. METHODS A fully stochastic simulation is used to represent a population of individuals, the sexual partnerships that they form and dissolve over time, and the spread of an infectious disease. Sequential monogamy is compared with various levels of concurrency, holding all other features of the infection process constant. Effective summary measures of concurrency are developed that can be estimated on the basis of simple local network data. RESULTS Concurrent partnerships exponentially increase the number of infected individuals and the growth rate of the epidemic during its initial phase. For example, when one-half of the partnerships in a population are concurrent, the size of the epidemic after 5 years is 10 times as large as under sequential monogamy. The primary cause of this amplification is the growth in the number of people connected in the network at any point in time: the size of the largest "component'. Concurrency increases the size of this component, and the result is that the infectious agent is no longer trapped in a monogamous partnership after transmission occurs, but can spread immediately beyond this partnership to infect others. The summary measure of concurrency developed here does a good job in predicting the size of the amplification effect, and may therefore be a useful and practical tool for evaluation and intervention at the beginning of an epidemic. CONCLUSION Concurrent partnerships may be as important as multiple partners or cofactor infections in amplifying the spread of HIV. The public health implications are that data must be collected properly to measure the levels of concurrency in a population, and that messages promoting one partner at a time are as important as messages promoting fewer partners.
Active learning from noisy and abstention feedback
An active learner is given an instance space, a label space and a hypothesis class, where one of the hypotheses in the class assigns ground truth labels to instances. Additionally, the learner has access to a labeling oracle, which it can interactively query for the label of any example in the instance space. The goal of the learner is to find a good estimate of the hypothesis in the hypothesis class that generates the ground truth labels while making as few interactive queries to the oracle as possible. This work considers a more general setting where the labeling oracle can abstain from providing a label in addition to returning noisy labels. We provide a model for this setting where the abstention rate and the noise rate increase as we get closer to the decision boundary of the ground truth hypothesis. We provide an algorithm and an analysis of the number of queries it makes to the labeling oracle; finally we provide matching lower bounds to demonstrate that our algorithm has near-optimal estimation accuracy.
3D hand posture recognition from small unlabeled point sets
This paper is concerned with the evaluation and comparison of several methods for the classification and recognition of static hand postures from small unlabeled point sets corresponding to physical landmarks, e.g. reflective marker positions in a motion capture environment. We compare various classification algorithms based upon multiple interpretations and feature transformations of the point sets, including those based upon aggregate features (e.g. mean) and a pseudo-rasterization of the space. We find aggregate feature classifiers to be balanced across multiple users but relatively limited in maximum achievable accuracy. Certain classifiers based upon the pseudo-rasterization performed best among tested classification algorithms. The inherent difficulty in classifying certain users leads us to conclude that online learning may be necessary for the recognition of natural gestures.
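A minimal sketch of the aggregate-feature interpretation mentioned above (centroid, spread, and distance statistics of an unordered marker set fed to a simple classifier) follows, using synthetic marker positions rather than motion-capture data; the pseudo-rasterization variant is not shown, and the posture classes and classifier choice are illustrative assumptions.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def aggregate_features(points):
    """Order-invariant summary of an unlabeled (N, 3) point set."""
    centroid = points.mean(axis=0)
    spread = points.std(axis=0)
    d = np.linalg.norm(points - centroid, axis=1)      # distances of markers to the centroid
    return np.concatenate([centroid, spread,
                           [d.mean(), d.std(), d.min(), d.max()]])

def make_posture(rng, offset):
    """Toy 'posture': a handful of marker positions scattered around a class-specific offset."""
    return rng.normal(loc=offset, scale=0.05, size=(8, 3))

rng = np.random.default_rng(0)
X, y = [], []
for label, offset in enumerate([0.0, 0.5, 1.0]):       # three toy posture classes
    for _ in range(30):
        X.append(aggregate_features(make_posture(rng, offset)))
        y.append(label)
clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)
test = aggregate_features(make_posture(rng, 0.5))
print(clf.predict([test]))   # -> [1]
```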
An Evaluation of Different Types of Blended Learning Activities in Higher Education
Blended learning has provided benefits to higher education, and universities therefore tend to introduce blended learning activities to improve their teaching-learning practices. There are different types of blended learning activities, such as forum-based discussions, online games, and assignments. In order to investigate students' satisfaction with different types of blended learning activities and their usefulness in helping students score well in examinations, an action research study was conducted over a period of three years, from 2015 to 2017. The results of the study indicated that students preferred to engage with activities that started in the classroom and continued online. Also, students found online discussions less useful than the other types of blended learning activities. Based on these findings, the paper discusses how blended learning activities should be designed to make them more useful and effective.
Cross-Lingual Sentiment Classification with Bilingual Document Representation Learning
Cross-lingual sentiment classification aims to adapt the sentiment resource in a resource-rich language to a resource-poor language. In this study, we propose a representation learning approach which simultaneously learns vector representations for the texts in both the source and the target languages. Different from previous research, which only obtains bilingual word embeddings, our Bilingual Document Representation Learning model, BiDRL, directly learns document representations. Both semantic and sentiment correlations are utilized to map the bilingual texts into the same embedding space. The experiments are based on the multilingual multi-domain Amazon review dataset. We use English as the source language and Japanese, German and French as the target languages. The experimental results show that BiDRL outperforms the state-of-the-art methods for all the target languages.
Depth-disparity calibration for augmented reality on binocular optical see-through displays
We present a study of depth-disparity calibration for augmented reality applications using binocular optical see-through displays. Two techniques were proposed and compared. The "paired-eyes" technique leverages Panum's fusional area to help the viewer find alignment between the virtual and physical objects. The "separate-eyes" technique eliminates the need for binocular fusion and involves using both eyes sequentially to check the virtual-physical object alignment on retinal images. We conducted a user study to measure the calibration results and assess the subjective experience of users with the proposed techniques.
HARF: Hierarchy-Associated Rich Features for Salient Object Detection
The state-of-the-art salient object detection models are able to perform well for relatively simple scenes, yet for more complex ones, they still have difficulties in highlighting salient objects completely from background, largely due to the lack of sufficiently robust features for saliency prediction. To address such an issue, this paper proposes a novel hierarchy-associated feature construction framework for salient object detection, which is based on integrating elementary features from multi-level regions in a hierarchy. Furthermore, multi-layered deep learning features are introduced and incorporated as elementary features into this framework through a compact integration scheme. This leads to a rich feature representation, which is able to represent the context of the whole object/background and is much more discriminative as well as robust for salient object detection. Extensive experiments on the most widely used and challenging benchmark datasets demonstrate that the proposed approach substantially outperforms the state-of-the-art on salient object detection.
An Optimized Union-Find Algorithm for Connected Components Labeling Using GPUs
In this paper, we report on an optimized union-find (UF) algorithm that can label the connected components of a 2D image efficiently by employing a GPU architecture. The proposed method comprises three phases: UF-based local merge, boundary analysis, and link. The coarse labeling in the local-merge phase makes computation efficient because the length of the label-equivalence list is sharply suppressed. Boundary analysis only manages the cells on the boundary of each thread block, so that fewer CUDA threads are launched. We compared our method with the label-equivalence algorithm [1], a conventional parallel UF algorithm [2], and a line-based UF algorithm [3]. Evaluation results show that the proposed algorithm speeds up the average running time by around 5x, 3x, and 1.3x, respectively.
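To make the union-find machinery concrete, the following is a minimal CPU-only Python sketch of UF-based connected-components labeling with 4-connectivity. It illustrates only the merge/find logic that the paper accelerates; the GPU-specific phases (block-local merge, boundary analysis, link) and any CUDA details are not reproduced, and the small image below is a made-up example.

```python
# Minimal CPU sketch of union-find connected-components labeling (4-connectivity).
import numpy as np

def find(parent, x):
    # Find the root label of x with path compression.
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def union(parent, a, b):
    ra, rb = find(parent, a), find(parent, b)
    if ra != rb:
        parent[max(ra, rb)] = min(ra, rb)

def label_components(img):
    h, w = img.shape
    parent = np.arange(h * w)
    for y in range(h):
        for x in range(w):
            if not img[y, x]:
                continue
            idx = y * w + x
            if x > 0 and img[y, x - 1]:
                union(parent, idx, idx - 1)   # merge with left neighbor
            if y > 0 and img[y - 1, x]:
                union(parent, idx, idx - w)   # merge with upper neighbor
    labels = np.zeros((h, w), dtype=np.int64)
    for y in range(h):
        for x in range(w):
            if img[y, x]:
                # Labels are root indices + 1 (not necessarily consecutive); 0 = background.
                labels[y, x] = find(parent, y * w + x) + 1
    return labels

if __name__ == "__main__":
    img = np.array([[1, 1, 0, 0],
                    [0, 1, 0, 1],
                    [0, 0, 0, 1]], dtype=bool)
    print(label_components(img))
```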
Deep Learning the City : Quantifying Urban Perception At A Global Scale
We propose Siamese-like networks, Streetscore-CNN (SS-CNN) and Ranking SS-CNN (RSS-CNN), to predict pairwise comparisons of urban perception. (Figure 1 of the original shows the user interface for the crowdsourced online game used to collect comparisons.) Performance analysis: for SS-CNN, we calculate the percentage of pairwise comparisons in the test set predicted correctly by (1) softmax over the output neurons in the final layer, (2) comparing TrueSkill scores [2] obtained from synthetic pairwise comparisons generated by the CNN, and (3) extracting features from the penultimate layer of the CNN and feeding pairwise feature representations to a RankSVM [3]. For RSS-CNN, we compare the ranking function outputs for both images in a test pair to decide which image wins, and calculate the binary prediction accuracy.
An Autofocus Method for Backprojection Imagery in Synthetic Aperture Radar
In this letter, we present an autofocus routine for backprojection imagery from spotlight-mode synthetic aperture radar data. The approach is based on maximizing image sharpness and supports the flexible collection and imaging geometries of BP, including wide-angle apertures and the ability to image directly onto a digital elevation map. While image-quality-based autofocus approaches can be computationally intensive, in the backprojection setting, we demonstrate a natural geometric interpretation that allows for optimal single-pulse phase corrections to be derived in closed form as the solution of a quartic polynomial. The approach is applicable to focusing standard backprojection imagery, as well as providing incremental focusing in sequential imaging applications based on autoregressive backprojection. An example demonstrates the efficacy of the approach applied to real data for a wide-aperture backprojection image.
Taint-Enhanced Policy Enforcement: A Practical Approach to Defeat a Wide Range of Attacks
Policy-based confinement, employed in SELinux and specification-based intrusion detection systems, is a popular approach for defending against exploitation of vulnerabilities in benign software. Conventional access control policies employed in these approaches are effective in detecting privilege escalation attacks. However, they are unable to detect attacks that “hijack” legitimate access privileges granted to a program, e.g., an attack that subverts an FTP server to download the password file. (Note that an FTP server would normally need to access the password file for performing user authentication.) Some of the common attack types reported today, such as SQL injection and cross-site scripting, involve such subversion of legitimate access privileges. In this paper, we present a new approach to strengthen policy enforcement by augmenting security policies with information about the trustworthiness of data used in security-sensitive operations. We evaluated this technique using 9 available exploits involving several popular software packages containing the above types of vulnerabilities. Our technique successfully defeated these exploits.
The polyphase cascaded-cell DC/DC converter
This paper describes a new type of unisolated dc/dc converter employing cascaded converter cells, and its operation is analyzed. The topology and the operation of the converter have similarities with the modular multilevel converter for polyphase ac/dc conversion, but the ac voltages and currents are in this case only employed for redistributing power within the converter. It is shown how the nominal modulation indices for ac and dc voltage influence the semiconductor expenditure for the converter. A methodology for optimizing the design of the converter with regard to the required silicon area is devised. It is found that at voltage conversion ratios close to unity it is preferable to use a combination of full-bridge cells and half-bridge cells. Under such circumstances the proposed converter can offer lower semiconductor expenditure than a conventional buck converter.
Attributed Graph Grammar for floor plan analysis
In this paper, we propose the use of an Attributed Graph Grammar as a unique framework to model and recognize the structure of floor plans. This grammar represents a building as a hierarchical composition of structurally and semantically related elements, where common representations are learned stochastically from annotated data. Given an input image, parsing consists of constructing the graph representation that best agrees with the probabilistic model defined by the grammar. The proposed method provides several advantages with respect to traditional floor plan analysis techniques. It uses an unsupervised statistical approach for detecting walls that adapts to different graphical notations and relaxes strong structural assumptions such as straightness and orthogonality. Moreover, the independence between the knowledge model and the parsing implementation allows the method to learn different building configurations automatically and thus to cope with the existing variability. These advantages are clearly demonstrated by comparing it with the most recent floor plan interpretation techniques on 4 datasets of real floor plans with different notations.
A Study of Ant Colony Optimization for Image Compression
Images are an important form of data and are used in almost every application, but they occupy a large amount of memory space. Image compression is therefore an essential requirement for efficient utilization of storage space and transmission bandwidth. Image compression techniques reduce the size of an image without degrading its quality. A restriction on these methods is the high computational cost of image compression. Ant Colony Optimization (ACO), a paradigm presented as an analogy with the behavior of real ants, is applied here to image compression. ACO is a probabilistic technique for finding optimal paths in a graph, based on the behavior of ants seeking a path between their colony and a source of food. The main features of ACO are, among others, the fast search of good solutions, parallel work, and the use of heuristic information. This paper provides an insight into optimization techniques used for image compression, in particular the Ant Colony Optimization (ACO) algorithm.
Similarity Search in High Dimensions via Hashing
The nearest- or near-neighbor query problems arise in a large variety of database applications, usually in the context of similarity searching. Of late, there has been increasing interest in building search/index structures for performing similarity search over high-dimensional data, e.g., image databases, document collections, time-series databases, and genome databases. Unfortunately, all known techniques for solving this problem fall prey to the curse of dimensionality. That is, the data structures scale poorly with data dimensionality; in fact, if the number of dimensions exceeds a modest threshold, searching in k-d trees and related structures involves the inspection of a large fraction of the database, thereby doing no better than brute-force linear search. It has been suggested that since the selection of features and the choice of a distance metric in typical applications is rather heuristic, determining an approximate nearest neighbor should suffice for most practical purposes. In this paper, we examine a novel scheme for approximate similarity search based on hashing. The basic idea is to hash the points from the database so as to ensure that the probability of collision is much higher for objects that are close to each other than for those that are far apart. We provide experimental evidence that our method gives significant improvement in running time over other methods for searching in high-dimensional spaces based on hierarchical tree decomposition. Experimental results also indicate that our scheme scales well even for a relatively large number of dimensions.
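As a rough illustration of the hashing idea described above, the following Python sketch builds a random-hyperplane LSH index for cosine similarity. This is a generic LSH variant chosen for brevity, not the paper's exact construction (which hashes points embedded in Hamming space); the table count, bit width, and synthetic data are arbitrary choices.

```python
# Illustrative locality-sensitive hashing: near points collide in a hash
# bucket far more often than distant ones, so only candidates are checked.
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(0)

def build_tables(data, n_tables=8, n_bits=12):
    dim = data.shape[1]
    tables = []
    for _ in range(n_tables):
        planes = rng.normal(size=(n_bits, dim))   # random hyperplanes
        keys = (data @ planes.T > 0)              # sign pattern per point
        buckets = defaultdict(list)
        for i, k in enumerate(keys):
            buckets[k.tobytes()].append(i)
        tables.append((planes, buckets))
    return tables

def query(tables, data, q, k=5):
    candidates = set()
    for planes, buckets in tables:
        key = (q @ planes.T > 0).tobytes()
        candidates.update(buckets.get(key, []))
    if not candidates:
        return []
    cand = np.array(sorted(candidates))
    dists = np.linalg.norm(data[cand] - q, axis=1)   # exact check on candidates only
    return cand[np.argsort(dists)[:k]].tolist()

if __name__ == "__main__":
    data = rng.normal(size=(10_000, 64))
    q = data[42] + 0.01 * rng.normal(size=64)
    tables = build_tables(data)
    print(query(tables, data, q))   # index 42 should rank first
```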
HED Meteorites and Their Relationship to the Geology of Vesta and the Dawn Mission
Howardite-eucrite-diogenite (HED) meteorites, thought to be derived from 4 Vesta, provide the best sampling available for any differentiated asteroid. However, deviations in oxygen isotopic composition from a common mass-fractionation line suggest that a few eucrite-like meteorites are from other bodies, or that Vesta was not completely homogenized during differentiation. The petrology and geochemistry of HEDs provide insights into igneous processes that produced a crust composed of basalts, gabbros, and ultramafic cumulate rocks. Although most HED magmas were fractionated, it is unresolved whether some eucrites may have been primary melts. The geochemistry of HEDs indicates that bulk Vesta is depleted in volatile elements and is relatively reduced, but has chondritic refractory element abundances. The compositions of HEDs may favor a magma ocean model, but inconsistencies remain. Geochronology indicates that Vesta accreted and differentiated within the first several million years of solar system history, that magmatism continued over a span of ~10 Myr, and that its thermal history extended for perhaps 100 Myr. The protracted cooling history is probably responsible for thermal metamorphism of most HEDs. Impact chronology indicates that Vesta experienced many significant collisions, including during the late heavy bombardment. The age of the huge south pole crater is controversial, but it probably ejected Vestoids and many HEDs. Continued impacts produced a regolith composed of eucrite and diogenite fragments containing only minor exotic materials. HED meteorites serve as ground truth for orbital spectroscopic and chemical analyses by the Dawn spacecraft, and their properties are critical for instrument calibration and interpretation of Vesta’s geologic history.
Power Quality Improvements in Isolated Twelve-Pulse AC-DC Converters Using Delta/Double-Polygon Transformer
This paper presents twelve-pulse AC-DC converters for constant-current applications. The delta/delta-star transformer based 12-pulse AC-DC converter does not meet power quality requirements. To overcome this drawback, a delta/double-polygon transformer based AC-DC converter is designed, simulated and developed, and its performance in terms of power quality parameters is compared with that of the delta/delta-star transformer based AC-DC converter. To meet power quality requirements over a wide range of load variation, a pulse-doubling technique is used to convert it into a 24-pulse AC-DC converter.
Web Browser Fingerprinting Using Only Cascading Style Sheets
Many commercial Websites employ Web browser fingerprinting to track visitors using Hypertext Transfer Protocol (HTTP) headers, JavaScript, and other methods. Although a user can disable JavaScript or utilize a prevention tool to avoid being tracked, countermeasures against Web browser fingerprinting using Cascading Style Sheets (CSS) have not been established. Therefore, in this paper, we propose a method of fingerprinting that employs only CSS and discuss the effectiveness of our method.
Comparison of TSCS regression and neural network models for panel data forecasting: debt policy
Empirical studies of variations in debt ratios across firms have analyzed important determinants of capital structure using statistical models. Researchers, however, rarely employ nonlinear models to examine the determinants and make little effort to identify a superior prediction model among competing ones. This paper reviews the time-series cross-sectional (TSCS) regression and the predictive abilities of neural network (NN) utilizing panel data concerning debt ratio of high-tech industries in Taiwan. We built models with these two methods using the same set of measurements as determinants of debt ratio and compared the forecasting performance of five models, namely, three TSCS regression models and two NN models. Models built with neural network obtained the lowest mean square error and mean absolute error. These results reveal that the relationships between debt ratio and determinants are nonlinear and that NNs are more competent in modeling and forecasting the test panel data. We conclude that NN models can be used to solve panel data analysis and forecasting problems.
Embodiment of abstract concepts: good and bad in right- and left-handers.
Do people with different kinds of bodies think differently? According to the body-specificity hypothesis, people who interact with their physical environments in systematically different ways should form correspondingly different mental representations. In a test of this hypothesis, 5 experiments investigated links between handedness and the mental representation of abstract concepts with positive or negative valence (e.g., honesty, sadness, intelligence). Mappings from spatial location to emotional valence differed between right- and left-handed participants. Right-handers tended to associate rightward space with positive ideas and leftward space with negative ideas, but left-handers showed the opposite pattern, associating rightward space with negative ideas and leftward with positive ideas. These contrasting mental metaphors for valence cannot be attributed to linguistic experience, because idioms in English associate good with right but not with left. Rather, right- and left-handers implicitly associated positive valence more strongly with the side of space on which they could act more fluently with their dominant hands. These results support the body-specificity hypothesis and provide evidence for the perceptuomotor basis of even the most abstract ideas.
Information Technology Continuance Research: Current State and Future Directions
Contemporary research on information technology (IT) continuance is plagued by inadequate understanding of the continuance concept and inappropriate use of theories for studying this phenomenon. Following a review of the IT continuance literature, this paper identifies some of the extant misconceptions about continuance research and suggests theoretical avenues for advancing this research in a meaningful manner. Based on these insights, an extended expectation-confirmation theoretic model of IT continuance is proposed.
How do we perceive the pain of others? A window into the neural processes involved in empathy
To what extent do we share feelings with others? Neuroimaging investigations of the neural mechanisms involved in the perception of pain in others may cast light on one basic component of human empathy, the interpersonal sharing of affect. In this fMRI study, participants were shown a series of still photographs of hands and feet in situations that are likely to cause pain, and a matched set of control photographs without any painful events. They were asked to assess on-line the level of pain experienced by the person in the photographs. The results demonstrated that perceiving and assessing painful situations in others was associated with significant bilateral changes in activity in several regions notably, the anterior cingulate, the anterior insula, the cerebellum, and to a lesser extent the thalamus. These regions are known to play a significant role in pain processing. Finally, the activity in the anterior cingulate was strongly correlated with the participants' ratings of the others' pain, suggesting that the activity of this brain region is modulated according to subjects' reactivity to the pain of others. Our findings suggest that there is a partial cerebral commonality between perceiving pain in another individual and experiencing it oneself. This study adds to our understanding of the neurological mechanisms implicated in intersubjectivity and human empathy.
Brain mechanisms of proactive interference in working memory
It has long been known that storage of information in working memory suffers as a function of proactive interference. Here we review the results of experiments using approaches from cognitive neuroscience to reveal a pattern of brain activity that is a signature of proactive interference. Many of these results derive from a single paradigm that requires one to resolve interference from a previous experimental trial. The importance of activation in left inferior frontal cortex is shown repeatedly using this task and other tasks. We review a number of models that might account for the behavioral and imaging findings about proactive interference, raising questions about the adequacy of these models.
Concurrent chemotherapy and radiotherapy for organ preservation in advanced laryngeal cancer.
BACKGROUND Induction chemotherapy with cisplatin plus fluorouracil followed by radiotherapy is the standard alternative to total laryngectomy for patients with locally advanced laryngeal cancer. The value of adding chemotherapy to radiotherapy and the optimal timing of chemotherapy are unknown. METHODS We randomly assigned patients with locally advanced cancer of the larynx to one of three treatments: induction cisplatin plus fluorouracil followed by radiotherapy, radiotherapy with concurrent administration of cisplatin, or radiotherapy alone. The primary end point was preservation of the larynx. RESULTS A total of 547 patients were randomly assigned to one of the three study groups. The median follow-up period was 3.8 years. At two years, the proportion of patients who had an intact larynx after radiotherapy with concurrent cisplatin (88 percent) differed significantly from the proportions in the groups given induction chemotherapy followed by radiotherapy (75 percent, P=0.005) or radiotherapy alone (70 percent, P<0.001). The rate of locoregional control was also significantly better with radiotherapy and concurrent cisplatin (78 percent, vs. 61 percent with induction cisplatin plus fluorouracil followed by radiotherapy and 56 percent with radiotherapy alone). Both of the chemotherapy-based regimens suppressed distant metastases and resulted in better disease-free survival than radiotherapy alone. However, overall survival rates were similar in all three groups. The rate of high-grade toxic effects was greater with the chemotherapy-based regimens (81 percent with induction cisplatin plus fluorouracil followed by radiotherapy and 82 percent with radiotherapy with concurrent cisplatin, vs. 61 percent with radiotherapy alone). The mucosal toxicity of concurrent radiotherapy and cisplatin was nearly twice as frequent as the mucosal toxicity of the other two treatments during radiotherapy. CONCLUSIONS In patients with laryngeal cancer, radiotherapy with concurrent administration of cisplatin is superior to induction chemotherapy followed by radiotherapy or radiotherapy alone for laryngeal preservation and locoregional control.
Secure Multiparty Computation and Secret Sharing
In a data-driven society, individuals and companies encounter numerous situations where private information is an important resource. How can parties handle confidential data if they do not trust everyone involved? This text is the first to present a comprehensive treatment of unconditionally secure techniques for multiparty computation (MPC) and secret sharing. In a secure MPC, each party possesses some private data, whereas secret sharing provides a way for one party to spread information on a secret such that all parties together hold full information, yet no single party has all the information. The authors present basic feasibility results from the last thirty years, generalizations to arbitrary access structures using linear secret sharing, some recent techniques for efficiency improvements, and a general treatment of the theory of secret sharing, focusing on asymptotic results with interesting applications related to MPC.
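As a concrete taste of the secret-sharing primitives such a treatment builds on, here is a minimal Python sketch of Shamir's (t, n) threshold scheme over a prime field, one standard instance of linear secret sharing. The field modulus and toy secret are illustrative choices, and no claim is made that this matches any specific protocol in the text.

```python
# Shamir's (t, n) threshold secret sharing: any t shares reconstruct the
# secret; fewer than t reveal nothing about it.
import random

P = 2**127 - 1  # a Mersenne prime used as the field modulus (illustrative choice)

def share(secret, t, n):
    # Random polynomial of degree t-1 with constant term = secret.
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation of the polynomial at x = 0.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

if __name__ == "__main__":
    shares = share(secret=123456789, t=3, n=5)
    print(reconstruct(shares[:3]))   # any 3 of the 5 shares -> 123456789
```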
A Review of Platelet-Rich Plasma: History, Biology, Mechanism of Action, and Classification.
Platelet-rich plasma (PRP) is currently used in different medical fields. The interest in the application of PRP in dermatology has recently increased. It is being used in several different applications as in tissue regeneration, wound healing, scar revision, skin rejuvenating effects, and alopecia. PRP is a biological product defined as a portion of the plasma fraction of autologous blood with a platelet concentration above the baseline. It is obtained from the blood of patients collected before centrifugation. The knowledge of the biology, mechanism of action, and classification of the PRP should help clinicians better understand this new therapy and to easily sort and interpret the data available in the literature regarding PRP. In this review, we try to provide useful information for a better understanding of what should and should not be treated with PRP.
Malicious Software Classification using VGG16 Deep Neural Network's Bottleneck Features
Malicious software (malware) has been extensively employed for illegal purposes and thousands of new samples are discovered every day. The ability to classify samples with similar characteristics into families makes it possible to create mitigation strategies that work for a whole class of programs. In this paper, we present a malware family classification approach using VGG16 deep neural network’s bottleneck features. Malware samples are represented as byteplot grayscale images, and the convolutional layers of a VGG16 deep neural network pre-trained on the ImageNet dataset are used for bottleneck feature extraction. These features are used to train an SVM classifier for the malware family classification task. The experimental results on a dataset comprising 10,136 samples from 20 different families showed that our approach can effectively be used to classify malware families with an accuracy of 92.97%, outperforming similar approaches proposed in the literature which require feature engineering and considerable domain expertise.
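A minimal sketch of the described pipeline, assuming a Keras VGG16 pre-trained on ImageNet as the fixed feature extractor and a scikit-learn SVM on top; the image size, random placeholder data, and SVM hyperparameters are illustrative stand-ins, not the paper's settings.

```python
# Bottleneck features from VGG16 convolutional layers, classified with an SVM.
import numpy as np
from tensorflow.keras.applications import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def bottleneck_features(images):
    # images: (N, 224, 224, 3) float array; grayscale byteplots would be
    # stacked into three identical channels before being fed to the network.
    base = VGG16(weights="imagenet", include_top=False, pooling="avg",
                 input_shape=(224, 224, 3))
    return base.predict(preprocess_input(images), verbose=0)  # (N, 512)

if __name__ == "__main__":
    # Placeholder data standing in for byteplot images and family labels.
    X = np.random.rand(64, 224, 224, 3) * 255.0
    y = np.random.randint(0, 4, size=64)
    feats = bottleneck_features(X)
    Xtr, Xte, ytr, yte = train_test_split(feats, y, test_size=0.25, random_state=0)
    clf = SVC(kernel="rbf", C=10.0).fit(Xtr, ytr)
    print("accuracy:", clf.score(Xte, yte))
```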
A Study Investigating Typical Concepts and Guidelines for Ontology Building
In semantic technologies, the shared common understanding of the structure of information among artifacts (people or software agents) can be realized by building an ontology. To do this, it is imperative for an ontology builder to answer several questions: a) What are the main components of an ontology? b) What does an ontology look like and how does it work? c) Is it necessary to consider reusing existing ontologies? d) What is the complexity of the ontology to be developed? e) What are the principles of ontology design and development? f) How should an ontology be evaluated? This paper answers all of these key questions. The aim of this paper is to present a set of guiding principles to help ontology developers, and also inexperienced users, to answer such questions.
Generation of nodding, head tilting and eye gazing for human-robot dialogue interaction
Head motion occurs naturally and in synchrony with speech during human dialogue communication, and may carry paralinguistic information, such as intentions, attitudes and emotions. Therefore, natural-looking head motion by a robot is important for smooth human-robot interaction. Based on rules inferred from analyses of the relationship between head motion and dialogue acts, this paper proposes a model for generating head tilting and nodding, and evaluates the model using three types of humanoid robot (a very human-like android, "Geminoid F", a typical humanoid robot with less facial degrees of freedom, "Robovie R2", and a robot with a 3-axis rotatable neck and movable lips, "Telenoid R2"). Analysis of subjective scores shows that the proposed model including head tilting and nodding can generate head motion with increased naturalness compared to nodding only or directly mapping people's original motions without gaze information. We also find that an upwards motion of a robot's face can be used by robots which do not have a mouth in order to provide the appearance that utterance is taking place. Finally, we conduct an experiment in which participants act as visitors to an information desk attended by robots. As a consequence, we verify that our generation model performs equally to directly mapping people's original motions with gaze information in terms of perceived naturalness.
Future E-Enabled Aircraft Communications and Security: The Next 20 Years and Beyond
Aircraft data communications and networking are key enablers for civilian air transportation systems to meet projected aviation demands of the next 20 years and beyond. In this paper, we show how the envisioned e-enabled aircraft plays a central role in streamlining system modernization efforts. We show why performance targets such as safety, security, capacity, efficiency, environmental benefit, travel comfort, and convenience will heavily depend on communications, networking and cyber-physical security capabilities of the e-enabled aircraft. The paper provides a comprehensive overview of the state-of-the-art research and standardization efforts. We highlight unique challenges, recent advances, and open problems in enhancing operations as well as certification of the future e-enabled aircraft.
OmniArt: Multi-task Deep Learning for Artistic Data Analysis
Vast amounts of artistic data are scattered on-line from both museums and art applications. Collecting, processing, and studying these data with respect to all accompanying attributes is an expensive process. With a motivation to speed up and improve the quality of categorical analysis in the artistic domain, in this paper we propose an efficient and accurate method for multi-task learning with a shared representation applied in the artistic domain. We go on to show how different multi-task configurations of our method behave on artistic data and outperform handcrafted feature approaches as well as convolutional neural networks. In addition to the method and analysis, we release a new aggregated data set with almost half a million samples and structured meta-data, posed as a challenge to encourage further research and societal engagement.
[The multidimensional relationship questionnaire: a study of reliability and validity].
PURPOSE The Multidimensional Relationship Questionnaire (MRQ) was developed by Snell, Schicke, and Arbeiter (2002) to measure psychological tendencies associated with intimate relationships. The purpose of the present study was to investigate the reliability and validity of the Turkish version of the Multidimensional Relationship Questionnaire. METHOD 480 university students from various faculties, with a history of involvement in an intimate relationship at present or in the past (308 female, 172 male), participated in the study. The Relationship Assessment Scale (RAS) was used for the criterion validity. RESULTS In order to determine the construct validity of the MRQ, factor analysis was conducted using principal components analysis with varimax rotation. The factor analysis resulted in eight factors: extreme focus on the relationship, relational satisfaction, fear of relationship/relational anxiety, relational monitoring, relational esteem, external relational control, relational assertiveness, and internal relational control. The correlation coefficients of the MRQ with the RAS were between -.41 and .69. The Cronbach's alpha for the MRQ was .81. The computed test-retest reliability coefficient was .80. MRQ subscales were found to show a significant difference with respect to sex of the participant only in the "external relational control" subscale. CONCLUSION The analysis demonstrated that the MRQ had a satisfactory level of reliability and validity in Turkish university students.
Worst-case throughput analysis of real-time dynamic streaming applications
Wireless embedded applications have stringent temporal constraints. The frame arrival rate imposes a throughput requirement that must be satisfied. These applications are often dynamic and streaming in nature. The FSM-based Scenario-Aware Dataflow (FSM-SADF) model of computation (MoC) has been proposed to model such dynamic streaming applications. FSM-SADF splits a dynamic system into a set of static modes of operation, called scenarios. Each scenario is modeled by a Synchronous Dataflow (SDF) graph. The possible scenario transitions are specified by a finite-state machine (FSM). FSM-SADF allows a more accurate design-time analysis of dynamic streaming applications, capitalizing on the analysability of SDF. However, existing FSM-SADF analysis techniques assume 1) scenarios are self-timed bounded, for which strong-connectedness is a sufficient condition, and 2) inter-scenario synchronizations are only captured by initial tokens that are common between scenarios. These conditions are too restrictive for many real-life applications. In this paper, we lift these restrictive assumptions and introduce a generalized FSM-SADF analysis approach based on the max-plus linear systems theory. We present both exact and conservative worst-case throughput analysis techniques that have varying levels of accuracy and scalability. The analysis techniques are implemented in a publicly available dataflow analysis tool and experimentally evaluated with different wireless applications.
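For readers unfamiliar with the max-plus formulation, the short Python sketch below iterates a max-plus state equation x(k+1) = A ⊗ x(k) and estimates the max-plus eigenvalue (the maximum cycle mean) from the asymptotic growth rate, whose reciprocal is the worst-case throughput. The 2x2 matrix is a made-up example, and estimation by long iteration is a simplification of the exact and conservative analyses developed in the paper.

```python
# Max-plus linear system: (A (x) x)_i = max_j (A[i, j] + x[j]).
# For an irreducible A, entries of x(k) grow at rate lambda (max cycle mean);
# the worst-case throughput is 1 / lambda.
import numpy as np

NEG_INF = -np.inf

def maxplus_matvec(A, x):
    return np.max(A + x[np.newaxis, :], axis=1)

def estimate_growth_rate(A, iters=2000):
    x = np.zeros(A.shape[0])
    for _ in range(iters):
        x = maxplus_matvec(A, x)
    return np.max(x) / iters   # -> lambda as iters grows (x started at 0)

if __name__ == "__main__":
    # Made-up 2x2 example: cycles with means 2 and 4, so lambda = 4.
    A = np.array([[2.0, 5.0],
                  [3.0, NEG_INF]])
    lam = estimate_growth_rate(A)
    print("max-plus eigenvalue ~", lam, "worst-case throughput ~", 1.0 / lam)
```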
An Overview of Process and Product Requirements for Next Generation Thin Film Electronics, Advanced Touch Panel Devices, and Ultra High Barriers
Roll-to-Roll (R2R) production of thin film based display components (e.g., active matrix TFT backplanes and touch screens) combines the advantages of the use of inexpensive, lightweight, and flexible substrates with high throughput production. Significant cost reduction opportunities can also be found in terms of processing tool capital cost, utilized substrate area, and process gas flow when compared with batch processing systems. Applied Materials has developed a variety of different web handling and coating technologies/platforms to enable high volume R2R manufacture of thin film silicon solar cells, TFT active matrix backplanes, touch screen devices, and ultra-high barriers for organic electronics. The work presented in this chapter therefore describes the latest advances in R2R PVD processing and the principal challenges inherent in moving from lab and pilot scale manufacturing to high volume manufacturing of flexible display devices using CVD for the deposition of active semiconductor layers, gate insulators, and high performance barrier/passivation layers. This chapter also includes a brief description of the process and cost advantages of the use of rotatable PVD source technologies (primarily for use in flexible touch panel manufacture) and a summary of the current performance levels obtained for R2R processed amorphous silicon and IGZO TFT backplanes. Results will also be presented for barrier film for final device/frontplane encapsulation for display applications.
Using Virtualization to Create and Deploy Computer Security Lab Exercises
Providing computer security laboratory exercises enables students to experience and understand the underlying concepts associated with computer security, but there are many impediments to the creation of realistic exercises of this type. Virtualization provides a mechanism for creating and deploying authentic computer security laboratory experiences for students while minimizing the associated configuration time and reducing the associated hardware requirements. This paper provides a justification for using virtualization to create and deploy computer security lab exercises by presenting and discussing examples of applied lab exercises that have been successfully used at two leading computer security programs. The application of virtualization mitigates many of the challenges encountered in using traditional computer laboratory environments for information assurance educational scenarios.
Many Hands Make Light the Work: The Causes and Consequences of Social Loafing
Two experiments found that when asked to perform the physically exerting tasks of clapping and shouting, people exhibit a sizable decrease in individual effort when performing in groups as compared to when they perform alone. This decrease, which we call social loafing, is in addition to losses due to faulty coordination of group efforts. Social loafing is discussed in terms of its experimental generality and theoretical importance. The widespread occurrence, the negative consequences for society, and some conditions that can minimize social loafing are also explored.
Exploiting The Laws of Order in Smart Contracts
We investigate a family of bugs in blockchain-based smart contracts, which we call event-ordering (or EO) bugs. These bugs are intimately related to the dynamic ordering of contract events, i.e., calls of its functions on the blockchain, and enable potential exploits of millions of USD worth of Ether. Known examples of such bugs and prior techniques to detect them have been restricted to a small number of event orderings, typically 1 or 2. Our work provides a new formulation of this general class of EO bugs as finding concurrency properties arising in long permutations of such events. The technical challenge in detecting our formulation of EO bugs is the inherent combinatorial blowup in path and state space analysis, even for simple contracts. We propose the first use of partial-order reduction techniques, using happen-before relations extracted automatically for contracts, along with several other optimizations built on a dynamic symbolic execution technique. We build an automatic tool called ETHRACER that requires no hints from users and runs directly on Ethereum bytecode. It flags 7-11% of over ten thousand contracts analyzed in roughly 18.5 minutes per contract, providing compact event traces that human analysts can run as witnesses. These witnesses are so compact that confirmations require only a few minutes of human effort. Half of the flagged contracts have subtle EO bugs, including in ERC-20 contracts that carry hundreds of millions of dollars worth of Ether. Thus, ETHRACER is effective at detecting a subtle yet dangerous class of bugs which existing tools miss.
From a Competition for Self-Driving Miniature Cars to a Standardized Experimental Platform: Concept, Models, Architecture, and Evaluation
Context: Competitions for self-driving cars facilitated the development and research in the domain of autonomous vehicles towards potential solutions for the future mobility. Objective: Miniature vehicles can bridge the gap between simulation-based evaluations of algorithms relying on simplified models, and those time-consuming vehicle tests on real-scale proving grounds. Method: This article combines findings from a systematic literature review, an in-depth analysis of results and technical concepts from contestants in a competition for self-driving miniature cars, and experiences of participating in the 2013 competition for self-driving cars. Results: A simulation-based development platform for real-scale vehicles has been adapted to support the development of a self-driving miniature car. Furthermore, a standardized platform was designed and realized to enable research and experiments in the context of future mobility solutions. Conclusion: A clear separation between algorithm conceptualization and validation in a model-based simulation environment enabled efficient and riskless experiments and validation. The design of a reusable, low-cost, and energy-efficient hardware architecture utilizing a standardized software/hardware interface enables experiments which would otherwise require resources like a large real-scale proving ground.
EA-Analyzer: automating conflict detection in a large set of textual aspect-oriented requirements
One of the aims of Aspect-Oriented Requirements Engineering is to address the composability and subsequent analysis of crosscutting and non-crosscutting concerns during requirements engineering. A composition definition explicitly represents interdependencies and interactions between concerns. Subsequent analysis of such compositions helps to reveal conflicting dependencies that need to be resolved in requirements. However, detecting conflicts in a large set of textual aspect-oriented requirements is a difficult task as a large number of explicitly defined interdependencies need to be analyzed. This paper presents EA-Analyzer, the first automated tool for identifying conflicts in aspect-oriented requirements specified in natural-language text. The tool is based on a novel application of a Bayesian learning method. We present an empirical evaluation of the tool with three industrial-strength requirements documents from different domains and a fourth academic case study used as a de facto benchmark in several areas of the aspect-oriented community. This evaluation shows that the tool achieves up to 93.90 % accuracy regardless of the documents chosen as the training and validation sets.
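The following Python sketch illustrates, under heavy simplification, the kind of Bayesian text classification the tool relies on: a naive Bayes model over word n-grams that scores pairs of requirements as conflicting or not. The tiny training pairs, the labels, and the concatenated-text pair representation are invented for illustration and do not reflect EA-Analyzer's actual features or corpus.

```python
# Naive Bayes over bag-of-n-grams as a stand-in for Bayesian conflict detection.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_pairs = [
    "encrypt all transmitted data || respond to queries within 1 second",
    "log every transaction || minimize storage footprint",
    "display results in the user's locale || support English and French",
    "cache query results for speed || cached data may be stale for up to 1 hour",
]
labels = [1, 1, 0, 0]   # 1 = conflicting dependency, 0 = no conflict (made-up)

model = make_pipeline(CountVectorizer(ngram_range=(1, 2)), MultinomialNB())
model.fit(train_pairs, labels)

new_pair = "compress all log records || audit logs must be human readable"
print("conflict probability:", model.predict_proba([new_pair])[0][1])
```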
In silico approaches for designing highly effective cell penetrating peptides
Cell penetrating peptides have gained much recognition as a versatile transport vehicle for the intracellular delivery of a wide range of cargoes (i.e. oligonucleotides, small molecules, proteins, etc.) that otherwise lack bioavailability, thus offering great potential as future therapeutics. Keeping in mind the therapeutic importance of these peptides, we have developed in silico methods for the prediction of cell penetrating peptides, which can be used for rapid screening of such peptides prior to their synthesis. In the present study, support vector machine (SVM)-based models have been developed for predicting and designing highly effective cell penetrating peptides. Various features like amino acid composition, dipeptide composition, binary profile of patterns, and physicochemical properties have been used as input features. The main dataset used in this study consists of 708 peptides. In addition, we have identified various motifs in cell penetrating peptides, and used these motifs for developing a hybrid prediction model. Performance of our method was evaluated on an independent dataset and also compared with that of the existing methods. In cell penetrating peptides, certain residues (e.g. Arg, Lys, Pro, Trp, Leu, and Ala) are preferred at specific locations. Thus, it was possible to discriminate cell-penetrating peptides from non-cell penetrating peptides based on amino acid composition. All models were evaluated using a five-fold cross-validation technique. We have achieved a maximum accuracy of 97.40% using the hybrid model that combines motif information and binary profile of the peptides. On the independent dataset, we achieved a maximum accuracy of 81.31% with an MCC of 0.63. The present study demonstrates that features like amino acid composition, binary profile of patterns, and motifs can be used to train an SVM classifier that can predict cell penetrating peptides with higher accuracy. The hybrid model described in this study achieved more accuracy than the previous methods and thus may complement the existing methods. Based on the above study, a user-friendly web server, CellPPD, has been developed to help biologists predict and design CPPs with ease. CellPPD is freely accessible at http://crdd.osdd.net/raghava/cellppd/.
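To illustrate the composition-based part of this approach, the Python sketch below encodes each peptide as its 20-dimensional amino acid composition and trains an SVM, assuming scikit-learn is available; the toy sequences and labels are placeholders, and the motif and binary-profile features of the hybrid model are not reproduced.

```python
# Amino acid composition features + SVM, a minimal composition-only sketch.
import numpy as np
from sklearn.svm import SVC

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def aa_composition(seq):
    # Fraction of each of the 20 standard amino acids in the peptide.
    seq = seq.upper()
    return np.array([seq.count(a) / len(seq) for a in AMINO_ACIDS])

# Toy data: arginine/lysine-rich peptides labeled as cell penetrating (1).
peptides = ["RRRRRRRRR", "GRKKRRQRRRPQ", "KLALKLALKALKAALKLA",
            "ASDFGHKLQW", "MNDTEEQCVS", "GGSGGSGGSG"]
labels   = [1, 1, 1, 0, 0, 0]

X = np.vstack([aa_composition(p) for p in peptides])
clf = SVC(kernel="rbf", probability=True).fit(X, labels)

query = "RQIKIWFQNRRMKWKK"   # penetratin-like test sequence
print("P(cell penetrating) =", clf.predict_proba([aa_composition(query)])[0][1])
```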
Transformational Leadership and Its Predictive Effects on Leadership Effectiveness
Academic departments play an important role in the success of institutions of higher education, and the success of departments directly depends on the effectiveness of their heads. This study is an attempt to determine the leadership styles of heads of academic departments and their relationship with leadership effectiveness at Malaysian Research Universities (RUs). Using the Multifactor Leadership Questionnaire 5x (MLQ), the study sampled 298 lecturers of three Malaysian RUs. Results indicated that lecturers perceived the heads of departments to exhibit a combination of transformational, transactional, and laissez-faire leadership styles. The result of the regression analysis demonstrated that contingent reward, idealized influence (attribute), inspirational motivation, individualized consideration, laissez-faire, intellectual stimulation, and management-by-exception active are significant predictors of leadership effectiveness. These factors accounted for 82% of the variance in leadership effectiveness. In addition, the results suggest that contingent reward has important effects on leadership effectiveness. The implications of the research findings are discussed.
Rich allelic variations of Viviparous-1A and their associations with seed dormancy/pre-harvest sprouting of common wheat
The allelic variations of Vp-1B have been confirmed to have close association with seed dormancy (SD) and pre-harvest sprouting (PHS) of Chinese wheat in previous research, but little was known regarding whether the alleles of two other orthologs of Vp1 on 3AL (Vp-1A) and 3DL (Vp-1D) are also present and related to these traits. In view of this, 11 primer pairs flanking the whole sequences of these two orthologs were designed to investigate their allelic variations. The results identified six alleles of Vp-1A using the primer pair A17-19 among 81 wheat cultivars and advanced lines, which were designated as Vp-1Aa, Vp-1Ab, Vp-1Ac, Vp-1Ad, Vp-1Ae, and Vp-1Af. Except for Vp-1Ac, the other five alleles were proven novel, but no allelic variation was found in Vp-1D. On sequence analysis of alleles of Vp-1A, five deletions were observed, all occurring in the same region holding many TTC repeats. Of the six alleles detected in this study, four (Vp-1Aa, Vp-1Ac, Vp-1Ae, and Vp-1Af) were generally distributed in varieties exhibiting higher average germination index (GI, range 0.46–0.56) and spike sprouting (SS, range 39.6–49.4%); however, the alleles Vp-1Ab and Vp-1Ad were distributed in genotypes carrying higher SD (GI 0.19–0.26) and stronger PHS resistance (SS 12.3–17.2%). On Spearman correlation analysis, the allele Vp-1Ab had significantly negative correlation with GI (−0.479) and SS (−0.542) at the 0.01 level, and the three alleles Vp-1Aa, Vp-1Ac, and Vp-1Ae had significantly positive correlation with GI [0.311 (0.05 level), 0.401 (0.01 level), and 0.294 (0.05 level)] and SS [0.283 (0.05 level), 0.309 (0.05 level), and 0.266 (0.05 level)]. The other alleles, including Vp-1Ad and Vp-1Af, also exhibited correlation, albeit not significant, with these two traits. This negative correlation showed that Vp-1Ab helped to improve SD and PHS tolerance, but Vp-1Aa, Vp-1Ac, and Vp-1Ae appeared to exert the opposite effect. To further confirm the association between alleles of Vp-1A and the two traits, a recombinant inbred line (RIL) population with 157 lines was genotyped using the primer pair A17-19, developed from the cross between Wanxianbaimaizi (Vp-1Ab) and Jing411 (Vp-1Ac). General linear model analysis indicated that variation in Vp-1A had a significant (P < 0.001) association with the two traits, explaining 23.4% of the variation in GI and 16.7% of the variation in SS in the population across three crop seasons.
"Now His Time Really Seems to Have Come": Ideas about Mahler's Music in Late Imperial and First Republic Vienna
In Vienna from about 1918 until the 1930s, contemporaries perceived a high point in the music-historical significance of Mahler's works, with regard to both the history of compositional style and the social history of music. The ideas and meanings that became attached to Mahler's works in this milieu are tied inextricably to the city's political and cultural life. Although the performances of Mahler's works under the auspices of Vienna's Social Democrats are sometimes construed today as mere acts of political appropriation, David Josef Bach's writings suggest that the innovative and controversial aspects of Mahler's works held social value in line with the ideal of Arbeiterbildung. Richard Specht, Arnold Schoenberg, and Theodor Adorno embraced oft-criticized features in Mahler's music, regarding the composer as a prophetic artist whose compositional style was the epitome of faithful adherence to one's inner artistic vision, regardless of its popularity. While all three critics addressed the relationship between detail and whole in Mahler's music, Adorno construed it as an act of subversion. Mahler's popularity also affected Viennese composers during this time in obvious and subtle ways. The formal structure and thematic construction of Berg's Chamber Concerto suggest a compositional approach close to what his student Adorno described a few years later regarding Mahler's music.
Permanent-Magnet Shape Optimization Effects on Synchronous Motor Performance
The magnet shape in permanent-magnet (PM) synchronous motors substantially affects the back-electromotive-force (EMF) waveform and the stator iron losses, which are of particular importance in traction applications, where the energy available in the battery box is limited. This paper presents a methodology based on geometry optimization, providing sinusoidal back-EMF waveform. The method has been applied in a surface PM motor case for electric vehicle, and its validity has been checked by measurements on two prototypes, the first one with constant magnet width and the second one with optimized magnet shape.
Anemia as a risk factor for cardiovascular disease and all-cause mortality in diabetes: the impact of chronic kidney disease.
Anemia is a potential nontraditional risk factor for cardiovascular disease (CVD). This study evaluated whether anemia is a risk factor for adverse outcomes in people with diabetes and whether the risk is modified by the presence of chronic kidney disease (CKD). Persons with diabetes from four community-based studies were pooled: Atherosclerosis Risk in Communities, Cardiovascular Health Study, Framingham Heart Study, and Framingham Offspring Study. Anemia was defined as a hematocrit <36% in women and <39% in men. CKD was defined as an estimated GFR of 15 to 60 ml/min per 1.73 m(2). Study outcomes included a composite of myocardial infarction (MI)/fatal coronary heart disease (CHD)/stroke/death and each outcome separately. Cox regression analysis was used to study the effect of anemia on the risk for outcomes after adjustment for potential confounders. The study population included 3015 individuals: 30.4% were black, 51.6% were women, 8.1% had anemia, and 13.8% had CKD. Median follow-up was 8.6 yr. There were 1215 composite events, 600 MI/fatal CHD outcomes, 300 strokes, and 857 deaths. In a model with a CKD-anemia interaction term, anemia was associated with the following hazard ratios (95% confidence intervals) in patients with CKD: 1.70 (1.24 to 2.34) for the composite outcome, 1.64 (1.03 to 2.61) for MI/fatal CHD, 1.81 (0.99 to 3.29) for stroke, and 1.88 (1.33 to 2.66) for all-cause mortality. Anemia was not a risk factor for any outcome in those without CKD (P > 0.2 for all outcomes). In persons with diabetes, anemia is primarily a risk factor for adverse outcomes in those who also have CKD.
Data summaries for on-demand queries over linked data
Typical approaches for querying structured Web Data collect (crawl) and pre-process (index) large amounts of data in a central data repository before allowing for query answering. However, this time-consuming pre-processing phase leverages the benefits of Linked Data -- where structured data is accessible live and up-to-date at distributed Web resources that may change constantly -- only to a limited degree, as query results can never be current. An ideal query answering system for Linked Data should return current answers in a reasonable amount of time, even on corpora as large as the Web. Query processors evaluating queries directly on the live sources require knowledge of the contents of data sources. In this paper, we develop and evaluate an approximate index structure summarising graph-structured content of sources adhering to Linked Data principles, provide an algorithm for answering conjunctive queries over Linked Data on the Web exploiting the source summary, and evaluate the system using synthetically generated queries. The experimental results show that our lightweight index structure enables complete and up-to-date query results over Linked Data, while keeping the overhead for querying low and providing a satisfying source ranking at no additional cost.
Ontology-Based Interface Specifications for a NLP Pipeline Architecture
The high level of heterogeneity between linguistic annotations usually complicates the interoperability of processing modules within an NLP pipeline. In this paper, a framework for the interoperation of NLP components, based on a data-driven architecture, is presented. Here, ontologies of linguistic annotation are employed to provide a conceptual basis for the tag-set neutral processing of linguistic annotations. The framework proposed here is based on a set of structured OWL ontologies: a reference ontology, a set of annotation models which formalize different annotation schemes, and a declarative linking between these, specified separately. This modular architecture is particularly scalable and flexible as it allows for the integration of different reference ontologies of linguistic annotations in order to overcome the absence of a consensus for an ontology of linguistic terminology. Our proposal originates from three lines of research from different fields: research on annotation type systems in UIMA; the ontological architecture OLiA, originally developed for sustainable documentation and annotation-independent corpus browsing; and the ontologies of the OntoTag model, targeted towards the processing of linguistic annotations in Semantic Web applications. We describe how UIMA annotations can be backed up by ontological specifications of annotation schemes as in the OLiA model, and how these are linked to the OntoTag ontologies, which allow for further ontological processing.
An Overview of Genome Organization and How We Got There: from FISH to Hi-C.
In humans, nearly two meters of genomic material must be folded to fit inside each micrometer-scale cell nucleus while remaining accessible for gene transcription, DNA replication, and DNA repair. This fact highlights the need for mechanisms governing genome organization during any activity and to maintain the physical organization of chromosomes at all times. Insight into the functions and three-dimensional structures of genomes comes mostly from the application of visual techniques such as fluorescence in situ hybridization (FISH) and molecular approaches including chromosome conformation capture (3C) technologies. Recent developments in both types of approaches now offer the possibility of exploring the folded state of an entire genome and maybe even the identification of how complex molecular machines govern its shape. In this review, we present key methodologies used to study genome organization and discuss what they reveal about chromosome conformation as it relates to transcription regulation across genomic scales in mammals.
Fractional Order PID controller design for speed control of chopper fed DC Motor Drive using Artificial Bee Colony algorithm
This article deals with an interesting application of a Fractional Order (FO) Proportional Integral Derivative (PID) controller for speed regulation in a DC motor drive. The design of the five interdependent Fractional Order controller parameters has been formulated as an optimization problem based on minimization of the set-point error and the controller output. The optimization was carried out using the Artificial Bee Colony (ABC) algorithm. A comparative study has also been made to highlight the advantage of using a Fractional Order PID controller over a conventional PID control scheme for speed regulation of the application considered. Extensive simulation results are provided to validate the effectiveness of the proposed approach.
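As a hedged illustration of the optimization target only (not the paper's exact formulation), the sketch below evaluates a set-point-error-plus-control-effort cost for a candidate parameter vector (Kp, Ki, Kd, lambda, mu) on a crude first-order motor model, approximating the fractional terms with truncated Grünwald-Letnikov weights; an ABC search would minimize such a cost over the five parameters.

```python
# Hypothetical sketch of the cost an ABC search could minimize over the five
# FOPID parameters. The first-order plant, weights, and cost terms are
# illustrative assumptions, not the paper's actual model.
import numpy as np

def gl_weights(alpha, n):
    """Truncated Grunwald-Letnikov binomial weights for a fractional operator of order alpha."""
    w = np.ones(n)
    for k in range(1, n):
        w[k] = w[k - 1] * (1 - (alpha + 1) / k)
    return w

def fopid_cost(params, T=2.0, dt=1e-3, setpoint=1.0):
    Kp, Ki, Kd, lam, mu = params
    n = int(T / dt)
    wi, wd = gl_weights(-lam, n), gl_weights(mu, n)   # fractional integral / derivative weights
    y, e_hist, cost = 0.0, [], 0.0
    for _ in range(n):
        e = setpoint - y
        e_hist.append(e)
        eh = np.array(e_hist[::-1])                   # most recent error first
        u = (Kp * e
             + Ki * dt**lam * np.dot(wi[:len(eh)], eh)
             + Kd * dt**-mu * np.dot(wd[:len(eh)], eh))
        y += dt * (-y + u)                            # crude first-order motor model
        cost += dt * (abs(e) + 0.01 * u * u)          # set-point error + control effort
    return cost

print(fopid_cost([2.0, 1.0, 0.1, 0.9, 0.8]))
```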
From cognitive to neural models of working memory.
Working memory refers to the temporary retention of information that was just experienced or just retrieved from long-term memory but no longer exists in the external environment. These internal representations are short-lived, but can be stored for longer periods of time through active maintenance or rehearsal strategies, and can be subjected to various operations that manipulate the information in such a way that makes it useful for goal-directed behaviour. Empirical studies of working memory using neuroscientific techniques, such as neuronal recordings in monkeys or functional neuroimaging in humans, have advanced our knowledge of the underlying neural mechanisms of working memory. This rich dataset can be reconciled with behavioural findings derived from investigating the cognitive mechanisms underlying working memory. In this paper, I review the progress that has been made towards this effort by illustrating how investigations of the neural mechanisms underlying working memory can be influenced by cognitive models and, in turn, how cognitive models can be shaped and modified by neuroscientific data. One conclusion that arises from this research is that working memory can be viewed as neither a unitary nor a dedicated system. A network of brain regions, including the prefrontal cortex (PFC), is critical for the active maintenance of internal representations that are necessary for goal-directed behaviour. Thus, working memory is not localized to a single brain region but probably is an emergent property of the functional interactions between the PFC and the rest of the brain.
C-Band Single-Chip Radar Front-End in AlGaN/GaN Technology
This paper presents the design and measurement results of a single-chip front-end monolithic microwave integrated circuit (MMIC), incorporating a high-power amplifier, transmit–receive switch, low-noise amplifier, and calibration coupler, realized in 0.25 $\mu \text{m}$ AlGaN/GaN-on-SiC MMIC technology of UMS (GH25-10). The MMIC is operating in C-band (5.2–5.6 GHz) and is targeting the next generation spaceborne synthetic aperture radar. The use of GaN technology has resulted in a design that is robust against antenna load variation in transmit as well as against high received power levels, without the need for an additional limiter. By including a transmit-receive switch on the MMIC there is no need for an external circulator, resulting in a significant size and weight reduction of the transmit–receive module. The measured output power in transmit is higher than 40 W with 36% PAE. The receive gain is higher than 31 dB with better than 2.4 dB noise figure. To the best of the author’s knowledge this is the first time such performance has been demonstrated for a single-chip implementation of a C-band transmit–receive front-end.
Benefit of Direct Charge Measurement (DCM) on Interconnect Capacitance Measurement
This paper discusses the application of direct charge measurement (DCM) to characterizing on-chip interconnect capacitance. Measurement equipment and techniques are leveraged from Flat Panel Display testing. An on-chip active device is not essential for a DCM test structure, and parallel measurements are easy to implement. Femto-farad measurement sensitivity is achieved without any on-chip active device. Measurement results for silicon and glass substrates, including parallel measurements, are presented.
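A back-of-the-envelope illustration of the direct-charge principle (with hypothetical numbers): the interconnect capacitance follows from the charge transferred for a known voltage step, C = dQ/dV.

```python
# Hypothetical example values; DCM reports the transferred charge directly,
# and the capacitance is extracted from the known voltage step.
dq = 2.4e-15      # measured charge per cycle, in coulombs (2.4 fC)
dv = 1.2          # applied voltage step, in volts
capacitance = dq / dv
print(f"extracted capacitance: {capacitance * 1e15:.2f} fF")   # -> 2.00 fF
```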
African American Men and Intimate Partner Violence
Drawing on interviews with African American males in violent intimate relationships, this paper focuses on individual causes (exposure to violence), cultural causes (constructions of masculinity) and structural causes (unemployment and incarceration) of intimate partner violence (IPV) among African American men. IPV is “triggered” by two threats to masculinity, though I focus exclusively on the first trigger (breadwinning). The analyses are framed by Merton’s strain theory (1968) and his theory of unintended consequences (1976). I argue that at least for African American men, this framework when added to feminist theory and masculinity theory extends our understanding of battering, from the perspective of the batterer, beyond what other models have been able to accomplish. In short, from the point of view of many batterers, battering provides an accessible mechanism for African American men—who live in a social world plagued by a system of racial domination— to reassert their masculinity and thus maintain their male privilege and dominance in their heterosexual relationships. Yet battering has the unanticipated consequence of alienating them further from these same intimate partners, thus perpetuating the cycle of violence.
Image-Quality-Based Adaptive Face Recognition
The accuracy of automated face recognition systems is greatly affected by intraclass variations between the enrollment and identification stages. In particular, changes in lighting conditions are a major contributor to these variations. Common approaches to address the effects of varying lighting conditions include preprocessing face images to normalize intraclass variations and the use of illumination-invariant face descriptors. Histogram equalization is a widely used technique in face recognition to normalize variations in illumination. However, normalizing well-lit face images could lead to a decrease in recognition accuracy. The multiresolution property of wavelet transforms is used in face recognition to extract facial feature descriptors at different scales and frequencies. The high-frequency wavelet subbands have been shown to provide illumination-invariant face descriptors, whereas the approximation wavelet subbands have been shown to be a better feature representation for well-lit face images. Fusion of match scores from low- and high-frequency-based face representations has been shown to improve recognition accuracy under varying lighting conditions; however, the selection of fusion parameters for different lighting conditions remains unsolved. Motivated by these observations, this paper presents adaptive approaches to face recognition to overcome the adverse effects of varying lighting conditions. Image quality, measured in terms of luminance distortion in comparison to a known reference image, is used as the basis for adapting the application of global and region illumination normalization procedures. Image quality is also used to adaptively select fusion parameters for wavelet-based multistream face recognition.
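A minimal sketch of the adaptive fusion idea, assuming a luminance-distortion quality measure of the universal-image-quality-index kind and hypothetical thresholds and weights (not the paper's values):

```python
# Illustrative sketch of quality-driven score fusion; the threshold and fusion
# weights below are hypothetical placeholders.
import numpy as np

def luminance_quality(image, reference):
    """Luminance distortion in [0, 1]: 1 means the probe matches the reference's brightness."""
    mu_x, mu_y = float(np.mean(image)), float(np.mean(reference))
    return (2 * mu_x * mu_y) / (mu_x**2 + mu_y**2 + 1e-12)

def fuse_scores(score_approx, score_detail, quality, q_threshold=0.8):
    """Weight the approximation-subband score more for well-lit probes (high quality),
    and the illumination-robust detail-subband score more otherwise."""
    w = 0.7 if quality >= q_threshold else 0.3   # hypothetical weights
    return w * score_approx + (1 - w) * score_detail

ref = np.full((64, 64), 128.0)                                # stand-in well-lit reference
probe = np.random.default_rng(0).uniform(0, 80, (64, 64))     # under-lit probe image
q = luminance_quality(probe, ref)
print(q, fuse_scores(score_approx=0.62, score_detail=0.71, quality=q))
```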
Music, memory and emotion
Because emotions enhance memory processes and music evokes strong emotions, music could be involved in forming memories, either about pieces of music or about episodes and information associated with particular music. A recent study in BMC Neuroscience has given new insights into the role of emotion in musical memory.
Resuscitating Businessman Risk: A Rationale for Familiarity-Based Portfolios
This paper studies two frequently observed portfolio behaviors that are seemingly inconsistent with rational portfolio choice. The first is the tendency of workers and entrepreneurs to hold their company's stock. The second is the propensity of workers to limit their equity holdings through time. The explanation offered here for both of these behaviors lies in the option to switch jobs when one's company does poorly. This is equivalent to holding put options on one's own company stock and call options on the other company's stock, where both options must be exercised at the same time. Given these initial undiversified implicit financial holdings, workers need to allocate a relatively large share of their regular financial assets to their own company's stock and a relatively small share to the stock of their alternative employment simply to restore overall portfolio balance. I find that, under certain conditions, workers optimally hold almost 40% of their financial wealth in their company's stock.
Inclusion of e-commerce workflow with NoSQL DBMS: MongoDB document store
In today's open market, e-commerce has developed its strategy to process various business applications. These applications are developed with platforms like Magento, Zen Cart, Prestashop, Spree, etc., to run a successful online store. They are also used for customer e-commerce apps and require a scalable database for storing the data. NoSQL databases provide an efficient storage, access, and processing environment that is horizontally scalable and replicable, in contrast to an RDBMS. Because of their schema-less structure, NoSQL databases allow users to change the data model of their applications dynamically. Due to these characteristics, NoSQL databases are being used in real-time analysis. In order to build a successful, scalable online store, in this study we choose a document-oriented NoSQL database for the inclusion of the e-commerce workflow. As a new finding, we ascertain the design and working strategy of e-commerce based on MongoDB, a widely used document-oriented NoSQL database, for processing large-scale business applications.
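A minimal sketch of the document-per-order modelling this enables, assuming a local MongoDB instance and the pymongo driver; the database, collection, and field names are illustrative rather than taken from the study:

```python
# Hypothetical e-commerce order stored as a single MongoDB document; line items
# are embedded, so reading an order needs no join.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017/")
orders = client["shop"]["orders"]

orders.insert_one({
    "order_id": "A-1001",
    "customer": {"name": "Jane Doe", "email": "jane@example.com"},
    "items": [
        {"sku": "SKU-1", "qty": 2, "price": 9.99},
        {"sku": "SKU-7", "qty": 1, "price": 24.50},
    ],
    "status": "paid",
})

# Schema-less flexibility: a later order can carry extra fields without a migration.
orders.insert_one({
    "order_id": "A-1002",
    "customer": {"name": "Sam Roe", "email": "sam@example.com"},
    "items": [{"sku": "SKU-3", "qty": 1, "price": 5.00}],
    "status": "paid",
    "coupon": "WELCOME10",      # new field, present only on this document
})
print(orders.count_documents({"status": "paid"}))
```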
The Search for Corporate Control
This paper analyzes the market for corporate control and acquisitions by explicitly modeling a typical firm's choice of whether to become a potential acquirer or target. I add synergistic motives to a multitask principal-agent framework with moral hazard between managers and shareholders. I argue that the terms of an M&A deal are determined not in isolation but in a market equilibrium context; the merger transaction is therefore embedded in a dynamic general-equilibrium search model. This framework links explicit and implicit incentives in a novel way. By modeling the choice explicitly, I reconcile the evidence that in mergers target shareholders gain whereas acquirer shareholders seem to lose or gain nothing, yet most of the time they do not block the acquisition. Apart from that, it is shown that Golden Parachutes are an optimal form of compensation with regard to merger-related incentives. The model also explains financial intermediation in the M&A market. I establish efficiency results and explain how merger waves might arise, in addition to other (testable) implications.
Camera Processing With Chromatic Aberration
Since the refractive index of the materials commonly used for lenses depends on the wavelength of light, practical camera optics fail to converge light to a single point on the image plane. Known as chromatic aberration, this phenomenon distorts image details by introducing magnification error, defocus blur, and color fringes. Though achromatic and apochromatic lens designs reduce chromatic aberration to a degree, they are complex and expensive, and they do not offer a perfect correction. In this paper, we propose a new post-capture processing scheme designed to overcome these problems computationally. Specifically, the proposed solution comprises a chromatic-aberration-tolerant demosaicking algorithm and a post-demosaicking chromatic aberration correction. Experiments with simulated and real sensor data verify that the chromatic aberration is effectively corrected.
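A simplified sketch of correcting only the magnification (lateral) component by rescaling the red and blue channels about the image centre, with hypothetical scale factors; the paper's joint demosaicking and correction scheme is not reproduced here:

```python
# Illustrative lateral-CA correction: register R and B to G by rescaling each
# channel about the image centre. Scale factors are hypothetical.
import numpy as np
from scipy.ndimage import affine_transform

def rescale_channel(channel, scale):
    """Magnify (scale > 1) or shrink (scale < 1) a channel about its centre."""
    h, w = channel.shape
    centre = np.array([(h - 1) / 2.0, (w - 1) / 2.0])
    matrix = np.eye(2) / scale                      # output -> input coordinate mapping
    offset = centre - matrix @ centre
    return affine_transform(channel, matrix, offset=offset, order=1, mode="nearest")

def correct_lateral_ca(rgb, scale_r=1.002, scale_b=0.998):
    """Assume R was captured magnified by scale_r and B by scale_b relative to G."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return np.stack([rescale_channel(r, 1.0 / scale_r),
                     g,
                     rescale_channel(b, 1.0 / scale_b)], axis=-1)

img = np.random.default_rng(1).random((128, 128, 3))   # stand-in demosaicked image
print(correct_lateral_ca(img).shape)
```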
How the relationship between the crisis life cycle and mass media content can better inform crisis communication.
Crises are unpredictable events that can impact on an organisation’s viability, credibility, and reputation, and few topics have generated greater interest in communication over the past 15 years. This paper builds on early theory such as Fink (1986), and extends the crisis life-cycle theoretical model to enable a better understanding and prediction of the changes and trends of mass media coverage during crises. This expanded model provides a framework to identify and understand the dynamic and multi-dimensional set of relationships that occurs during the crisis life cycle in a rapidly changing and challenging operational environment. Using the 2001 Ansett Airlines’ Easter groundings as a case study, this paper monitors mass media coverage during this organisational crisis. The analysis reinforces the view that, by using proactive strategies, public relations practitioners can better manage mass media crisis coverage. Further, the understanding gained by extending the crisis life cycle to track when and how mass media content changes may help public relations practitioners craft messages and supply information at the outset of each stage of the crisis, thereby maintaining control of the message.
A survey of security techniques for the border gateway protocol (BGP)
Web surfing is a popular example of an Internet application where users desire services provided by servers that exist somewhere in the Internet. To provide the service, data must be routed between the user's system and the server. Local network routing (relative to the user) cannot provide a complete route for the data. In the core Internet, a portion of the network controlled by a single administrative authority, called an autonomous system (AS), provides local network support and also exchanges routing information with other ASes using the border gateway protocol (BGP). Through the BGP route exchange, a complete route for the data is created. Security at this level of the Internet is challenging due to the lack of a single administration point and because numerous ASes interact with one another using complex peering policies. This work reviews recent techniques to secure BGP. These security techniques are categorized as follows: 1) cryptographic/attestation, 2) database, 3) overlay/group protocols, 4) penalty, and 5) data-plane testing. The techniques are reviewed at a high level in a tutorial format, and their shortcomings are summarized as well. The depth of coverage for particular published works is intentionally kept minimal, so that the reader can quickly grasp the techniques. This survey provides a basis for evaluating the techniques, to understand the coverage of published works as well as to determine the best avenues for future research.
Data Mining Approach For Subscription-Fraud Detection in Telecommunication Sector
This paper implements a probability-based method for fraud detection in the telecommunication sector. We use Naïve Bayesian classification to calculate the probability and an adapted version of the KL-divergence to identify fraudulent customers on the basis of their subscriptions. Each user's data corresponds to one record in the database. Since the data involve continuous numerical values, Naïve Bayesian classification for continuous values is used. This methodology overcomes a problem of the existing, threshold-based system, which can classify the best customers as fraudulent.
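An illustrative sketch of the two ingredients named above, a Gaussian (continuous-valued) Naïve Bayes score and a KL-divergence between usage distributions, with hypothetical features, statistics, and priors:

```python
# Hypothetical sketch: per-feature Gaussian likelihoods for continuous values,
# plus a KL-divergence between two usage distributions. All numbers are invented.
import numpy as np

def gaussian_pdf(x, mean, var):
    return np.exp(-(x - mean) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def naive_bayes_score(record, class_stats, prior):
    """P(class) * prod_i P(feature_i | class) under per-feature Gaussians."""
    score = prior
    for feature, value in record.items():
        mean, var = class_stats[feature]
        score *= gaussian_pdf(value, mean, var)
    return score

def kl_divergence(p, q, eps=1e-9):
    p, q = np.asarray(p, float) + eps, np.asarray(q, float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

# Hypothetical per-class (mean, variance) statistics from labelled subscription records.
fraud_stats = {"daily_minutes": (420.0, 90.0**2), "intl_ratio": (0.45, 0.15**2)}
legit_stats = {"daily_minutes": (110.0, 40.0**2), "intl_ratio": (0.05, 0.04**2)}
record = {"daily_minutes": 380.0, "intl_ratio": 0.40}

s_fraud = naive_bayes_score(record, fraud_stats, prior=0.02)
s_legit = naive_bayes_score(record, legit_stats, prior=0.98)
print("flag as fraud:", s_fraud > s_legit)
print("usage drift:", kl_divergence([0.7, 0.2, 0.1], [0.2, 0.3, 0.5]))
```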