title | abstract |
---|---|
Recognizing textual entailment: Rationale, evaluation and approaches | The goal of identifying textual entailment – whether one piece of text can be plausibly inferred from another – has emerged in recent years as a generic core problem in natural language understanding. Work in this area has been largely driven by the PASCAL Recognizing Textual Entailment (RTE) challenges, a series of annual competitive meetings. The current work exhibits strong ties to some earlier lines of research, particularly automatic acquisition of paraphrases and lexical semantic relationships and unsupervised inference in applications such as question answering, information extraction and summarization. It has also opened the way to newer lines of research on more involved inference methods, on knowledge representations needed to support this natural language understanding challenge and on the use of learning methods in this context. RTE has fostered an active and growing community of researchers focused on the problem of applied entailment. This special issue of the JNLE provides an opportunity to showcase some of the most important work in this emerging area. |
A Fuzzy Similarity Based Concept Mining Model for Text Classification | Text classification is a challenging and active research area with great importance in text categorization applications. Although much work has been done in this field, there remains a need to categorize a collection of text documents into mutually exclusive categories by extracting concepts or features using a supervised learning paradigm and different classification algorithms. In this paper, a new Fuzzy Similarity Based Concept Mining Model (FSCMM) is proposed to classify a set of text documents into predefined Category Groups (CG) by training at the sentence, document and integrated corpora levels, with feature reduction and ambiguity removal at each level to achieve high system performance. A Fuzzy Feature Category Similarity Analyzer (FFCSA) is used to analyze each extracted feature of the Integrated Corpora Feature Vector (ICFV) against the corresponding categories or classes. The model uses a Support Vector Machine Classifier (SVMC) to classify the training data patterns into two groups, i.e., +1 and −1. Experimental results indicate that the proposed model performs efficiently and with high accuracy. Keywords: Text Classification; Natural Language Processing; Feature Extraction; Concept Mining; Fuzzy Similarity Analyzer; Dimensionality Reduction; Sentence Level; Document Level; Integrated Corpora Level Processing. |
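As a minimal, hedged illustration of only the final SVM classification step described above (not the authors' full FSCMM pipeline with fuzzy similarity analysis and multi-level feature reduction), the sketch below trains a linear SVM on TF-IDF features and predicts +1/−1 labels; the toy documents and labels are invented.

```python
# Minimal sketch: linear SVM over TF-IDF features, labels in {+1, -1}.
# Illustrates only the SVM classification step, not the fuzzy similarity
# analysis or multi-level feature reduction of FSCMM.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

docs = ["cheap flights and hotel deals", "football match report and scores",
        "discount travel packages", "league standings after the match"]
labels = [+1, -1, +1, -1]  # hypothetical category labels

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(docs)

clf = LinearSVC()
clf.fit(X, labels)

# Likely predicts +1 given the lexical overlap with the travel documents.
print(clf.predict(vectorizer.transform(["late deals on flights"])))
```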
Redefining Context Windows for Word Embedding Models: An Experimental Study | Distributional semantic models learn vector representations of words through the contexts they occur in. Although the choice of context (which often takes the form of a sliding window) has a direct influence on the resulting embeddings, the exact role of this model component is still not fully understood. This paper presents a systematic analysis of context windows based on a set of four distinct hyperparameters. We train continuous SkipGram models on two English-language corpora for various combinations of these hyperparameters, and evaluate them on both lexical similarity and analogy tasks. Notable experimental results are the positive impact of cross-sentential contexts and the surprisingly good performance of right-context windows. |
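For readers unfamiliar with the context-window hyperparameter being analyzed, the hedged sketch below trains two SkipGram models with different window sizes using gensim; the toy corpus and the specific window values are placeholders, not the paper's experimental setup.

```python
# Sketch: effect of the context-window hyperparameter on SkipGram embeddings.
# The toy corpus and window sizes are illustrative only.
from gensim.models import Word2Vec

sentences = [["the", "cat", "sat", "on", "the", "mat"],
             ["the", "dog", "chased", "the", "cat"]] * 100  # tiny toy corpus

narrow = Word2Vec(sentences, sg=1, window=2, vector_size=50, min_count=1, epochs=5)
wide = Word2Vec(sentences, sg=1, window=10, vector_size=50, min_count=1, epochs=5)

# Narrow windows tend to emphasize syntactic neighbours,
# wide windows more topical associations.
print(narrow.wv.most_similar("cat", topn=3))
print(wide.wv.most_similar("cat", topn=3))
```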
Physical appearance anxiety impedes the therapeutic effects of video feedback in high socially anxious individuals. | BACKGROUND
Video feedback (VF) interventions effectively reduce social anxiety symptoms and negative self-perception, particularly when they are preceded by cognitive preparation (CP) and followed by cognitive review.
AIMS
In the current study, we re-examined data from a study on the efficacy of a novel VF intervention for individuals high in social anxiety to test the hypothesis that physical appearance anxiety would moderate the effects of VF.
METHOD
Data were analyzed from 68 socially anxious participants who performed an initial public speech, and were randomly assigned to an Elaborated VF condition (VF plus cognitive preparation and cognitive review), a Standard VF condition (VF plus cognitive preparation) or a No VF condition (exposure alone), and then performed a second speech.
RESULTS
As hypothesized, when appearance concerns were low, participants who received either Elaborated or Standard VF were significantly less anxious during speech 2 than those in the No VF condition. However, when levels of appearance concern were high, neither Elaborated nor Standard VF reduced anxiety levels during speech 2 beyond the No VF condition.
CONCLUSIONS
Results from our analog sample suggest the importance of tailoring treatment protocols to accommodate the idiosyncratic concerns of socially anxious patients. |
Does local lavage influence functional recovery during lumbar discectomy for disc herniation? | Lumbar disc herniation (LDH) is a common disease, and lumbar discectomy is the most common surgical procedure carried out for patients with low back pain and leg symptoms. Although most researchers focus on the surgical techniques used during the operation, the aim of this study is to evaluate the effect of local intervertebral lavage during microdiscectomy. In this retrospective study, 410 patients were operated on by microdiscectomy for LDH during 2011 to 2014. Retrospectively, 213 of them (group A) received local intervertebral irrigation with saline before wound closure and 197 patients (group B) only had their operative field irrigated with saline. Systematic records of visual analog scores (VAS), Oswestry Disability Index (ODI) questionnaire scores, use of analgesia, and hospital length of stay were made after hospitalization. The majority (80.49%) of the cases were diagnosed with lumbar herniation at the levels of L4/5 and L5/S1. Fifty-one patients had herniations at 2 levels. There were significant decreases of VAS scores and ODI in both groups between preoperation and postoperation at different time points. VAS scores decreased more in group A than in group B at the early stage of postoperative follow-up. However, there were no statistically significant differences between the 2 groups in use of analgesia, VAS and ODI up to 1 month of follow-up. Microdiscectomy for LDH offers a marked improvement in back and radicular pain. Local irrigation of the herniated lumbar disc area could relieve disc herniation-derived low back pain and leg radicular pain at the early postoperative stage. However, the pain relief from this intervention was not noticeable over a longer period. |
Ontology mapping: the state of the art | Ontology mapping is seen as a solution provider in today's landscape of ontology research. As the number of ontologies that are made publicly available and accessible on the Web increases steadily, so does the need for applications to use them. A single ontology is no longer enough to support the tasks envisaged by a distributed environment like the Semantic Web. Multiple ontologies need to be accessed from several applications. Mapping could provide a common layer from which several ontologies could be accessed and hence could exchange information in a semantically sound manner. Developing such mappings has been the focus of a variety of works originating from diverse communities over a number of years. In this article we comprehensively review and present these works. We also provide insights on the pragmatics of ontology mapping and elaborate on a theoretical approach for defining ontology mapping. |
Does the IEEE 802.11 MAC protocol work well in multihop wireless ad hoc networks | The IEEE 802.11 MAC protocol is the standard for wireless LANs; it is widely used in testbeds and simulations for wireless multihop ad hoc networks. However, this protocol was not designed for multihop networks. Although it can support some ad hoc network architecture, it is not intended to support the wireless mobile ad hoc network, in which multihop connectivity is one of the most prominent features. In this article we focus on the following question: Can the IEEE 802.11 MAC protocol function well in multihop networks? By presenting several serious problems encountered in an IEEE 802.11-based multihop network and revealing the in-depth cause of these problems, we conclude that the current version of this wireless LAN protocol does not function well in multihop ad hoc networks. We thus doubt whether the WaveLAN-based system is workable as a mobile ad hoc testbed. |
Gamification for Engaging Computer Science Students in Learning Activities: A Case Study | Gamification is the use of game design elements in non-game settings to engage participants and encourage desired behaviors. It has been identified as a promising technique to improve students' engagement which could have a positive impact on learning. This study evaluated the learning effectiveness and engagement appeal of a gamified learning activity targeted at the learning of C-programming language. Furthermore, the study inquired into which gamified learning activities were more appealing to students. The study was conducted using the mixed-method sequential explanatory protocol. The data collected and analysed included logs, questionnaires, and pre- and post-tests. The results of the evaluation show positive effects on the engagement of students toward the gamified learning activities and a moderate improvement in learning outcomes. Students reported different motivations for continuing and stopping activities once they completed the mandatory assignment. The preferences for different gamified activities were also conditioned by academic milestones. |
Design and verification of smart and scalable DC microgrids for emerging regions | Roughly 1.3 billion people in developing countries still live without access to reliable electricity. As expanding access using current technologies will accelerate global climate change, there is a strong need for novel solutions that displace fossil fuels and are financially viable for developing regions. A novel DC microgrid solution that is geared at maximizing efficiency and reducing system installation cost is described in this paper. Relevant simulation and experimental results, as well as a proposal for undertaking field-testing of the technical and economic viability of the microgrid system are presented. |
Business Models and the Internet of Things | This article provides theoretically and practically grounded assistance to companies that are today engaged primarily in non‐digital industries in the development and implementation of business models that use the Internet of Things. To that end, we investigate the role of the Internet in business models in general in the first section. We conclude that the significance of the Internet in business model innovation has increased steadily since the 1990s, that each new Internet wave has given rise to new digital business model patterns, and that the biggest breakthroughs to date have been made in digital industries. In the second section, we show that digital business model patterns have now become relevant in physical industries as well. The separation between physical and digital industries is now consigned to the past. The key to this transformation is the Internet of Things which makes possible hybrid solutions that merge physical products and digital services. From this, we derive very general business model logic for the Internet of Things and some specific components and patterns for business models. Finally we sketch out the central challenges faced in implementing such hybrid business models and point to possible solutions. |
Functional imaging of human crossmodal identification and object recognition | The perception of objects is a cognitive function of prime importance. In everyday life, object perception benefits from the coordinated interplay of vision, audition, and touch. The different sensory modalities provide both complementary and redundant information about objects, which may improve recognition speed and accuracy in many circumstances. We review crossmodal studies of object recognition in humans that mainly employed functional magnetic resonance imaging (fMRI). These studies show that visual, tactile, and auditory information about objects can activate cortical association areas that were once believed to be modality-specific. Processing converges either in multisensory zones or via direct crossmodal interaction of modality-specific cortices without relay through multisensory regions. We integrate these findings with existing theories about semantic processing and propose a general mechanism for crossmodal object recognition: The recruitment and location of multisensory convergence zones varies depending on the information content and the dominant modality. |
Improving measurement in health education and health behavior research using item response modeling: introducing item response modeling. | This paper is the first of several papers designed to demonstrate how the application of item response models in the behavioral sciences can be used to enhance the conceptual and technical toolkit of researchers and developers and to understand better the psychometric properties of psychosocial measures. The papers all use baseline data from the Behavior Change Consortium data archive. This paper begins with an introduction to item response models, including both dichotomous and polytomous versions. The concepts of respondent and item location, model interpretation, standard errors and testing model fit are introduced and described. A sample analysis based on data from the self-efficacy scale is used to illustrate the concepts and techniques. |
Timing of first cannulation and vascular access failure in haemodialysis: an analysis of practice patterns at dialysis facilities in the DOPPS. | BACKGROUND
Optimal waiting time before first use of vascular access is not known.
METHODS
Two practices-first cannulation time for fistulae and grafts, and blood flow rate-were examined as potential predictors of vascular access failure in the Dialysis Outcomes and Practice Patterns Study (DOPPS). Access failure (defined as time to first failure or first salvage intervention) was modelled using Cox regression.
RESULTS
Among 309 haemodialysis facilities, 2730 grafts and 2154 fistulae were studied. For grafts, first cannulation typically occurred within 2-4 weeks at 62% of US, 61% of European and 42% of Japanese facilities. For fistulae, first cannulation occurred <2 months after placement in 36% of US, 79% of European and 98% of Japanese facilities. Overall, the relative risk (RR) of graft failure in Europe was lower compared with the USA (RR = 0.69, P = 0.04). The RR of graft failure (reference group = first cannulation at 2-3 weeks) was 0.84 with first cannulation at <2 weeks (P = 0.11), 0.94 with first cannulation at 3-4 weeks (P = 0.48) and 0.93 with first cannulation at >4 weeks (P = 0.48). The RR of fistula failure was 0.72 with first cannulation at <4 weeks (P = 0.08), 0.91 at 2-3 months (P = 0.43) and 0.87 at >3 months (P = 0.31) (reference group = first cannulation at 1-2 months). Facility median blood flow rate was not a significant predictor of access failure.
CONCLUSIONS
Earlier cannulation of a newly placed vascular access at the haemodialysis facility level was not associated with increased risk of vascular access failure. Potential for confounding due to selection bias cannot be excluded, implying the importance of clinical judgement in determining time to first use of vascular access. |
Automatic fusion and classification using random forests and features extracted with deep learning | Fusion of different sensor modalities has proven very effective in numerous remote sensing applications. However, in order to benefit from fusion, advanced feature extraction mechanisms that rely on domain expertise are typically required. In this paper we present an automated feature extraction scheme based on deep learning. The feature extraction is unsupervised and hierarchical. Furthermore, computational efficiency (often a challenge for deep learning methods) is a primary goal in order to make certain that the method can be applied to large remote sensing datasets. Promising classification results show the applicability of the approach both for reducing the gap between naive feature extraction and methods relying on domain expertise, and for further improving the performance of the latter on two challenging datasets. |
Effects of patient-centered communication on anxiety, negative affect, and trust in the physician in delivering a cancer diagnosis: A randomized, experimental study. | BACKGROUND
When bad news about a cancer diagnosis is being delivered, patient-centered communication (PCC) has been considered important for patients' adjustment and well-being. However, few studies have explored how interpersonal skills might help cancer patients cope with anxiety and distress during bad-news encounters.
METHODS
A prospective, experimental design was used to investigate the impact of the physician communication style during a bad-news encounter. Ninety-eight cancer patients and 92 unaffected subjects of both sexes were randomly assigned to view a video of a clinician delivering a first cancer diagnosis with either an enhanced patient-centered communication (E-PCC) style or a low patient-centered communication (L-PCC) style. Participants rated state anxiety and negative affect before and immediately after the video exposure, whereas trust in the physician was rated after the video exposure only. Main and interaction effects were analyzed with generalized linear models.
RESULTS
Viewing the disclosure of a cancer diagnosis resulted in a substantial increase in state anxiety and negative affect among all participants. This emotional response was moderated by the physician's communication style: Participants viewing an oncologist displaying an E-PCC style were significantly less anxious than those watching an oncologist displaying an L-PCC style. They also reported significantly higher trust in the physician.
CONCLUSIONS
Under a threatening, anxiety-provoking disclosure of bad news, a short sequence of empathic PCC influences subjects' psychological state, insofar that they report feeling less anxious and more trustful of the oncologist. Video exposure appears to be a valuable method for investigating the impact of a physician's communication style during critical encounters. Cancer 2017;123:3167-75. © 2017 American Cancer Society. |
Malware Detection in Adversarial Settings: Exploiting Feature Evolutions and Confusions in Android Apps | Existing techniques on adversarial malware generation employ feature mutations based on feature vectors extracted from malware. However, most (if not all) of these techniques suffer from a common limitation: feasibility of these attacks is unknown. The synthesized mutations may break the inherent constraints posed by code structures of the malware, causing either crashes or malfunctioning of malicious payloads. To address the limitation, we present Malware Recomposition Variation (MRV), an approach that conducts semantic analysis of existing malware to systematically construct new malware variants for malware detectors to test and strengthen their detection signatures/models. In particular, we use two variation strategies (i.e., malware evolution attack and malware confusion attack) following structures of existing malware to enhance feasibility of the attacks. Upon the given malware, we conduct semantic-feature mutation analysis and phylogenetic analysis to synthesize mutation strategies. Based on these strategies, we perform program transplantation to automatically mutate malware bytecode to generate new malware variants. We evaluate our MRV approach on actual malware variants, and our empirical evaluation on 1,935 Android benign apps and 1,917 malware shows that MRV produces malware variants that can have high likelihood to evade detection while still retaining their malicious behaviors. We also propose and evaluate three defense mechanisms to counter MRV. |
Dynamic Load Balancing in Geographically Distributed Heterogeneous Web Servers | With ever-increasing Web traffic, a distributed multi-server Web site can provide scalability and flexibility to cope with growing client demands. Load balancing algorithms that spread requests across multiple Web servers are crucial to achieving this scalability. Various domain name server (DNS) based schedulers have been proposed in the literature, mainly for multiple homogeneous servers. The presence of heterogeneous Web servers not only increases the complexity of the DNS scheduling problem, but also makes previously proposed algorithms for homogeneous distributed systems not directly applicable. This leads us to propose new policies, called adaptive TTL algorithms, that take into account both the uneven distribution of client request rates and the heterogeneity of Web servers to adaptively set the time-to-live (TTL) value for each address mapping request. Extensive simulation results show that these strategies are robust and effective in balancing load among geographically distributed heterogeneous Web servers. |
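The exact policies are defined in the paper; as a rough, hedged sketch of the adaptive-TTL idea, the snippet below sets a per-request TTL proportional to the selected server's capacity and inversely proportional to the requesting client domain's observed request rate. The capacity figures, request rates, and the BASE_TTL constant are invented for illustration.

```python
# Hedged sketch of an adaptive-TTL idea: hand short TTLs to busy client
# domains and weak servers so stale address mappings expire quickly.
# Capacities, request rates and BASE_TTL are illustrative only.
from itertools import cycle

servers = {"s1": 1.0, "s2": 2.5, "s3": 4.0}                  # relative capacities
domain_request_rate = {"isp-a.net": 50.0, "isp-b.net": 5.0}  # observed requests/sec

BASE_TTL = 60  # seconds, arbitrary scaling constant

rr = cycle(servers)  # simple round-robin server selection for the sketch

def resolve(client_domain: str) -> tuple[str, int]:
    """Return (server, ttl) for a DNS address-mapping request."""
    server = next(rr)
    rate = domain_request_rate.get(client_domain, 1.0)
    ttl = max(1, int(BASE_TTL * servers[server] / rate))
    return server, ttl

print(resolve("isp-a.net"))  # busy domain -> short TTL
print(resolve("isp-b.net"))  # light domain -> longer TTL
```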
Product Adoption Rate Prediction in a Competitive Market | As the worlds of commerce and the Internet technology become more inextricably linked, a large number of user consumption series become available for online market intelligence analysis. A critical demand along this line is to predict the future product adoption state of each user, which enables a wide range of applications such as targeted marketing. Nevertheless, previous works only aimed at predicting if a user would adopt a particular product or not with a binary buy-or-not representation. The problem of tracking and predicting users’ adoption rates, i.e., the frequency and regularity of using each product over time, is still under-explored. To this end, we present a comprehensive study of product adoption rate prediction in a competitive market. This task is nontrivial as there are three major challenges in modeling users’ complex adoption states: the heterogeneous data sources around users, the unique user preference and the competitive product selection. To deal with these challenges, we first introduce a flexible factor-based decision function to capture the change of users’ product adoption rate over time, where various factors that may influence users’ decisions from heterogeneous data sources can be leveraged. Using this factor-based decision function, we then provide two corresponding models to learn the parameters of the decision function with both generalized and personalized assumptions of users’ preferences. We further study how to leverage the competition among different products and simultaneously learn product competition and users’ preferences with both generalized and personalized assumptions. Finally, extensive experiments on two real-world datasets show the superiority of our proposed models. |
Cost-effectiveness of cardiovascular risk management by practice nurses in primary care | BACKGROUND
Cardiovascular disease (CVD) is largely preventable and prevention expenditures are relatively low. The randomised controlled SPRING-trial (SPRING-RCT) shows that cardiovascular risk management by practice nurses in general practice with and without self-monitoring both decreases cardiovascular risk, with no additional effect of self-monitoring. For considering future approaches of cardiovascular risk reduction, cost effectiveness analyses of regular care and additional self-monitoring are performed from a societal perspective on data from the SPRING-RCT.
METHODS
Direct medical and productivity costs are analysed alongside the SPRING-RCT, studying 179 participants (men aged 50-75 years, women aged 55-75 years) with an elevated cardiovascular risk in 20 general practices in the Netherlands. Standard cardiovascular treatment according to Dutch guidelines is compared with additional counselling based on self-monitoring at home (pedometer, weighing scale and/or blood pressure device), both delivered by trained practice nurses. Cost-effectiveness is evaluated for both treatment groups and patient categories (age, sex, education).
RESULTS
Costs are €98 and €187 per percentage decrease in 10-year cardiovascular mortality estimation, for the control and intervention group respectively. In both groups lost productivity causes the majority of the costs. The incremental cost-effectiveness ratio is approximately €1100 (95% CI: -5157 to 6150). Self-monitoring may be cost effective for females and higher educated participants, however confidence intervals are wide.
CONCLUSIONS
In this study population, regular treatment is more cost effective than counselling based on self-monitoring, with the majority of costs caused by lost productivity.
TRIAL REGISTRATION
Trialregister.nl identifier: http://NTR2188. |
Game Theory and Distributed Control | Game theory has been employed traditionally as a modeling tool for describing and influencing behavior in societal systems. Recently, game theory has emerged as a valuable tool for controlling or prescribing behavior in distributed engineered systems. The rationale for this new perspective stems from the parallels between the underlying decision making architectures in both societal systems and distributed engineered systems. In particular, both settings involve an interconnection of decision making elements whose collective behavior depends on a compilation of local decisions that are based on partial information about each other and the state of the world. Accordingly, there is extensive work in game theory that is relevant to the engineering agenda. Similarities notwithstanding, there remain important differences between the constraints and objectives in societal and engineered systems that require looking at game theoretic methods from a new perspective. This chapter provides an overview of selected recent developments of game theoretic methods in this role as a framework for distributed control in engineered systems. |
Nurse practitioners, certified nurse midwives, and physician assistants in physician offices. | The expansion of health insurance coverage through health care reform, along with the aging of the population, are expected to strain the capacity for providing health care. Projections of the future physician workforce predict declines in the supply of physicians and decreasing physician work hours for primary care. An expansion of care delivered by nurse practitioners (NPs), certified nurse midwives (CNMs), and physician assistants (PAs) is often cited as a solution to the predicted surge in demand for health care services and calls for an examination of current reliance on these providers. Using a nationally based physician survey, we have described the employment of NPs, CNMs, and PAs among office-based physicians by selected physician and practice characteristics. |
Clustering the Mixed Numerical and Categorical Dataset using Similarity Weight and Filter Method | Clustering is a challenging task in data mining. The aim of clustering is to group similar data into a number of clusters, and various clustering algorithms have been developed for this purpose. However, these algorithms work effectively either on pure numeric data or on pure categorical data, and most of them perform poorly on mixed categorical and numerical data types; the k-means algorithm used in previous work is not accurate for large mixed datasets. In this paper we present a clustering algorithm based on a similarity weight and filter method paradigm that works well for data with mixed numeric and categorical features, clustering such datasets efficiently. We propose a modified description of the cluster center to overcome the numeric-data-only limitation and to provide a better characterization of clusters. The performance of this algorithm has been studied on benchmark data sets. |
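As a hedged illustration of handling mixed data (not the authors' similarity-weight and filter method), the sketch below uses a k-prototypes-style dissimilarity: squared Euclidean distance on the numeric attributes plus a weighted mismatch count on the categorical attributes. The weight gamma and the toy records are assumptions.

```python
# Hedged sketch: a k-prototypes-style dissimilarity for mixed records,
# not the similarity-weight/filter method proposed in the paper.
import numpy as np

def mixed_dissimilarity(a_num, a_cat, b_num, b_cat, gamma=0.5):
    """Squared Euclidean distance on numeric parts plus a weighted
    mismatch count on categorical parts; gamma balances the two."""
    numeric = float(np.sum((np.asarray(a_num) - np.asarray(b_num)) ** 2))
    categorical = sum(x != y for x, y in zip(a_cat, b_cat))
    return numeric + gamma * categorical

# Toy records: (numeric features, categorical features) -- illustrative only.
r1 = ([1.0, 2.0], ["red", "small"])
r2 = ([1.1, 2.2], ["red", "large"])
r3 = ([5.0, 9.0], ["blue", "large"])

print(mixed_dissimilarity(*r1, *r2))  # close numerically, one category mismatch
print(mixed_dissimilarity(*r1, *r3))  # far apart in both senses
```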
Multimodal MR-imaging reveals large-scale structural and functional connectivity changes in profound early blindness | In the setting of profound ocular blindness, numerous lines of evidence demonstrate the existence of dramatic anatomical and functional changes within the brain. However, previous studies based on a variety of distinct measures have often provided inconsistent findings. To help reconcile this issue, we used a multimodal magnetic resonance (MR)-based imaging approach to provide complementary structural and functional information regarding this neuroplastic reorganization. This included gray matter structural morphometry, high angular resolution diffusion imaging (HARDI) of white matter connectivity and integrity, and resting state functional connectivity MRI (rsfcMRI) analysis. When comparing the brains of early blind individuals to sighted controls, we found evidence of co-occurring decreases in cortical volume and cortical thickness within visual processing areas of the occipital and temporal cortices respectively. Increases in cortical volume in the early blind were evident within regions of parietal cortex. Investigating white matter connections using HARDI revealed patterns of increased and decreased connectivity when comparing both groups. In the blind, increased white matter connectivity (indexed by increased fiber number) was predominantly left-lateralized, including between frontal and temporal areas implicated with language processing. Decreases in structural connectivity were evident involving frontal and somatosensory regions as well as between occipital and cingulate cortices. Differences in white matter integrity (as indexed by quantitative anisotropy, or QA) were also in general agreement with observed pattern changes in the number of white matter fibers. Analysis of resting state sequences showed evidence of both increased and decreased functional connectivity in the blind compared to sighted controls. Specifically, increased connectivity was evident between temporal and inferior frontal areas. Decreases in functional connectivity were observed between occipital and frontal and somatosensory-motor areas and between temporal (mainly fusiform and parahippocampus) and parietal, frontal, and other temporal areas. Correlations in white matter connectivity and functional connectivity observed between early blind and sighted controls showed an overall high degree of association. However, comparing the relative changes in white matter and functional connectivity between early blind and sighted controls did not show a significant correlation. In summary, these findings provide complementary evidence, as well as highlight potential contradictions, regarding the nature of regional and large scale neuroplastic reorganization resulting from early onset blindness. |
Personal Authentication Using Hand Vein Triangulation and Knuckle Shape | This paper presents a new approach to authenticate individuals using triangulation of hand vein images and simultaneous extraction of knuckle shape information. The proposed method is fully automated and employs palm dorsal hand vein images acquired from low-cost, near infrared, contactless imaging. The knuckle tips are used as key points for the image normalization and extraction of the region of interest. The matching scores are generated in two parallel stages: (i) a hierarchical matching score from the four topologies of triangulation in the binarized vein structures and (ii) a score from the geometrical features consisting of knuckle point perimeter distances in the acquired images. The weighted score-level combination of these two matching scores is used to authenticate the individuals. The experimental results achieved by the proposed system using contactless palm dorsal hand vein images are promising (equal error rate of 1.14%) and suggest a more user-friendly alternative for user identification. |
DW-AES: A Domain-Wall Nanowire-Based AES for High Throughput and Energy-Efficient Data Encryption in Non-Volatile Memory | Big-data storage poses significant challenges to the anonymization of sensitive information against data sniffing. Not only will the encryption bandwidth be limited by the I/O traffic, but the transfer of data between the processor and the memory will also expose the input-output mapping of intermediate computations on I/O channels that are susceptible to semi-invasive and non-invasive attacks. Limited by the simplistic cell-level logic, existing logic-in-memory computing architectures are incapable of performing the complete encryption process within the memory at reasonable throughput and energy efficiency. In this paper, a block-level in-memory architecture for the advanced encryption standard (AES) is proposed. The proposed technique, called DW-AES, maps all AES operations directly to the domain-wall nanowires. The entire encryption process can be completed within a homogeneous, high-density, and standby-power-free non-volatile spintronic-based memory array without exposing the intermediate results to the external I/O interface. Domain-wall nanowire-based pipelining and multi-issue pipelining methods are also proposed to increase the throughput of the baseline DW-AES with an insignificant area overhead and a negligible difference in leakage power and energy consumption. The experimental results show that DW-AES can reduce the leakage power and area by orders of magnitude compared with existing CMOS ASIC accelerators. It has an energy efficiency of 22 pJ/b, which is 5× and 3× better than the CMOS ASIC and memristive CMOL-based implementations, respectively. Under the same area budget, the proposed DW-AES achieves 4.6× higher throughput than the latest CMOS ASIC AES with similar power consumption. The throughput improvement increases to 11× for pipelined DW-AES at the expense of doubling the power consumption. |
The Well-Played MOBA: How DotA 2 and League of Legends use Dramatic Dynamics | This paper will analyse the two most popular games within the MOBA genre, DotA 2 and League of Legends, as performance-designed spaces. By analysing MOBAs as performance and using Marc LeBlanc’s (2006) Tools for Creating Dramatic Game Dynamics as an aesthetic framework the aim is to posit a greater understanding of the ways in which e-Sports and MOBAs specifically can be designed in order to create dramatic tension within the increasing variety of available viewing platforms. In this way, this paper helps present new ways to think about how games can be designed/structured in order to be satisfyingly performed and consumed through increasingly diverse viewing methods. |
Towards a geometric unification of evolutionary algorithms | |
Study on Distinct Approaches for Sentiment Analysis | Nowadays many researchers work on mining content posted in natural language on forums, blogs and social networking sites. Sentiment analysis is a rapidly expanding topic with various applications. Previously, a person would collect opinions from relatives before purchasing an item; today the picture is different, as a person can obtain reviews from many people around the world. Blog and e-commerce site data contain many statements expressing user opinions about specific objects. Such data is pre-processed and then classified into classes such as positive, negative and irrelevant. Sentiment analysis allows us to determine the view of the public or of general users about any object. Two broad families of techniques are used: supervised machine-learning and unsupervised machine-learning methods. Unsupervised learning uses a lexicon with words scored for polarity values such as neutral, positive or negative, whereas supervised methods require a training set of texts with manually assigned polarity values. One direction suggested here is to make use of fuzzy logic for sentiment analysis, which may improve analysis results. |
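To make the unsupervised, lexicon-based approach concrete, here is a minimal hedged sketch that scores a sentence against a tiny hand-made polarity lexicon; the word scores are invented and far smaller than any real lexicon.

```python
# Hedged sketch of lexicon-based (unsupervised) sentiment scoring.
# The polarity lexicon below is a toy; real lexicons contain thousands of entries.
LEXICON = {"good": 1.0, "great": 2.0, "love": 1.5,
           "bad": -1.0, "terrible": -2.0, "hate": -1.5}

def sentiment(text: str) -> str:
    score = sum(LEXICON.get(word, 0.0) for word in text.lower().split())
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("i love this phone great battery"))   # positive
print(sentiment("terrible screen and bad support"))   # negative
print(sentiment("the package arrived on tuesday"))    # neutral
```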
The big and the small: challenges of imaging the brain's circuits. | The relation between the structure of the nervous system and its function is more poorly understood than the relation between structure and function in any other organ system. We explore why bridging the structure-function divide is uniquely difficult in the brain. These difficulties also explain the thrust behind the enormous amount of innovation centered on microscopy in neuroscience. We highlight some recent progress and the challenges that remain. |
Clinical spinal instability and low back pain. | Clinical instability is an important cause of low back pain. Although there is some controversy concerning its definition, it is most widely believed that the loss of the normal pattern of spinal motion causes pain and/or neurologic dysfunction. The stabilizing system of the spine may be divided into three subsystems: (1) the spinal column; (2) the spinal muscles; and (3) the neural control unit. A large number of biomechanical studies of the spinal column have provided insight into the role of the various components of the spinal column in providing spinal stability. The neutral zone was found to be a more sensitive parameter than the range of motion in documenting the effects of mechanical destabilization of the spine caused by injury and restabilization of the spine by osteophyte formation, fusion or muscle stabilization. Clinical studies indicate that the application of an external fixator to the painful segment of the spine can significantly reduce the pain. An in vitro simulation of this study found that it was most probably the decrease in the neutral zone that was responsible for the pain reduction. A hypothesis relating the neutral zone to pain has been presented. The spinal muscles provide significant stability to the spine, as shown by both in vitro experiments and mathematical models. Concerning the role of the neuromuscular control system, increased body sway has been found in patients with low back pain, indicating a less efficient muscle control system with decreased ability to provide the needed spinal stability. |
Online Learning for Latent Dirichlet Allocation | We develop an online variational Bayes (VB) algorithm for Latent Dirichlet Allocation (LDA). Online LDA is based on online stochastic optimization with a natural gradient step, which we show converges to a local optimum of the VB objective function. It can handily analyze massive document collections, including those arriving in a stream. We study the performance of online LDA in several ways, including by fitting a 100-topic topic model to 3.3M articles from Wikipedia in a single pass. We demonstrate that online LDA finds topic models as good or better than those found with batch VB, and in a fraction of the time. |
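scikit-learn's LatentDirichletAllocation implements the online variational Bayes algorithm described here; the minimal sketch below streams mini-batches of documents through partial_fit. The toy corpus, batch size, and number of topics are purely illustrative.

```python
# Sketch: online variational Bayes for LDA via scikit-learn's partial_fit,
# processing documents in mini-batches as they arrive. Toy corpus only.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

corpus = ["the cat sat on the mat", "dogs and cats are pets",
          "stock markets fell sharply", "investors sold shares today"] * 25

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(corpus)

lda = LatentDirichletAllocation(n_components=2, learning_method="online",
                                random_state=0)

batch_size = 20
for start in range(0, X.shape[0], batch_size):   # stream mini-batches
    lda.partial_fit(X[start:start + batch_size])

terms = vectorizer.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = topic.argsort()[-5:][::-1]
    print(f"topic {k}:", [terms[i] for i in top])
```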
On Uncertainty, Ambiguity, and Complexity in Project Management | This article develops a model of a project as a payoff function that depends on the state of the world and the choice of a sequence of actions. A causal mapping, which may be incompletely known by the project team, represents the impact of possible actions on the states of the world. An underlying probability space represents available information about the state of the world. Interactions among actions and states of the world determine the complexity of the payoff function. Activities are endogenous, in that they are the result of a policy that maximizes the expected project payoff. A key concept is the adequacy of the available information about states of the world and action effects. We express uncertainty, ambiguity, and complexity in terms of information adequacy. We identify three fundamental project management strategies: instructionism, learning, and selectionism. We show that classic project management methods emphasize adequate information and instructionism, and demonstrate how modern methods fit into the three fundamental strategies. The appropriate strategy is contingent on the type of uncertainty present and the complexity of the project payoff function. Our model establishes a rigorous language that allows the project manager to judge the adequacy of the available project information at the outset, choose an appropriate combination of strategies, and set a supporting project infrastructure—that is, systems for planning, coordination and incentives, and monitoring. (Project Management; Uncertainty; Complexity; Instructionalism; Project Selection; Ambiguity ) |
THREE LAYERS APPROACH FOR NETWORK SCANNING DETECTION | Computer networks have become one of the most important dimensions of any organization. This importance is due to the connectivity benefits that networks provide, such as computing power, data sharing and enhanced performance. However, using networks comes with a cost: there are threats and issues that need to be addressed, such as providing a sufficient level of security. One of the most challenging issues in network security is network scanning. Network scanning is considered to be the initial step in any attack process. Therefore, detecting network scanning helps to protect network resources, services and data before the real attack happens. This paper proposes an approach that consists of three layers to detect sequential and random network scanning for both TCP and UDP protocols. The proposed Three Layers Approach aims to increase network scanning detection accuracy. It defines certain packets to be used as signs of network scanning activity. Before applying the approach in a network, a Thresholds Generation Stage determines a descriptive set of thresholds. After that, the first layer of the approach aggregates sign packets into separate tables. The second layer then analyzes these tables into new tables by counting the packets generated by each IP. Finally, the last layer decides whether or not the network is being scanned. |
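As a hedged sketch of the counting-and-threshold idea (not the paper's exact layers or empirically derived thresholds), the snippet below aggregates "sign" packets, counts distinct destination ports probed per source IP, and flags sources that exceed an assumed threshold.

```python
# Hedged sketch: count distinct ports probed per source IP and flag scanners.
# The packet records and PORT_THRESHOLD value are invented for illustration.
from collections import defaultdict

PORT_THRESHOLD = 3  # assumed threshold; the paper derives thresholds empirically

# Each record: (source_ip, destination_port) taken from "sign" packets,
# e.g. TCP SYNs without completed handshakes or unsolicited UDP probes.
sign_packets = [("10.0.0.5", 22), ("10.0.0.5", 23), ("10.0.0.5", 80),
                ("10.0.0.5", 443), ("192.168.1.9", 80)]

ports_per_ip = defaultdict(set)          # layers 1-2: aggregate and count
for src, dport in sign_packets:
    ports_per_ip[src].add(dport)

for src, ports in ports_per_ip.items():  # layer 3: decision
    if len(ports) > PORT_THRESHOLD:
        print(f"possible scan from {src}: {len(ports)} distinct ports probed")
```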
Patent Trolls and Technology Diffusion | Patent assertion entities, sometimes known as 'patent trolls,' do not manufacture goods themselves but profit from licensing agreements that they often enforce via the threat of litigation. This paper explores empirically how litigation by one such patent troll affected the sales of medical imaging technology. It finds evidence that relative to similar products, made by the same firm, but not covered by the patent, imaging software sales declined by one-third. This was not due to a suppression in demand by hospitals but instead is linked to a lack of incremental product innovation during the period of litigation. |
Numerical inverse kinematics for modular reconfigurable robots | The inverse kinematics solutions of a reconfigurable robot system built upon a collection of standardized components are difficult to obtain because of its varying configurations. This article addresses the formulation of a generic numerical inverse kinematics model and the automatic generation of the model for arbitrary robot geometry, including serial and tree-typed geometries. Both revolute and prismatic types of joints are considered. The inverse kinematics is obtained through the differential kinematics equations based on the product-of-exponentials (POE) formulas. The Newton–Raphson iteration method is employed for solution. The automated model generation is accomplished by using the kinematic graph representation of a modular robot assembly configuration and the related accessibility matrix and path matrix. Examples of the inverse kinematics solutions for different types of modular robots are given to demonstrate the applicability and effectiveness of the proposed algorithm. |
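A minimal, hedged sketch of the Newton–Raphson iteration for a planar two-link arm (not the article's POE-based formulation for arbitrary modular geometries): the update q ← q + J⁺(x_d − f(q)) uses the pseudo-inverse of the Jacobian. The link lengths, initial guess, and target are invented.

```python
# Hedged sketch: Newton-Raphson style numerical IK for a planar 2R arm.
# The article uses POE-based differential kinematics for modular robots;
# this toy uses an analytic forward map and Jacobian instead.
import numpy as np

L1, L2 = 1.0, 0.8  # assumed link lengths

def fk(q):
    """Forward kinematics: end-effector position of the 2R arm."""
    return np.array([L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),
                     L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])])

def jacobian(q):
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

def ik(target, q0, iters=50, tol=1e-8):
    q = np.array(q0, dtype=float)
    for _ in range(iters):
        err = target - fk(q)
        if np.linalg.norm(err) < tol:
            break
        q += np.linalg.pinv(jacobian(q)) @ err  # Newton-Raphson update
    return q

q_sol = ik(np.array([1.2, 0.7]), q0=[0.3, 0.3])
print(q_sol, fk(q_sol))  # joint angles and resulting end-effector position
```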
Microstrip bandstop filter using spurline and defected ground structures (DGS) | Microstrip filters are widely used in microwave circuits. This paper briefly describes the design principle of the microstrip bandstop filter (BSF), and a compact wide-band high-rejection BSF is presented. This filter consists of two parts: a defected ground structure (DGS) filter and a spurline filter. Due to the inherently compact characteristics of the spurline and DGS, the proposed filter shows better rejection performance than an open stub BSF in the same circuit size. The results of simulation and optimization given by HFSS12 prove the correctness of the design. |
Feature extraction and classification of phishing websites based on URL | In this study we extracted websites' URL features and analyzed subset-based feature selection methods and classification algorithms for phishing website detection. |
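The abstract does not list the specific URL features used; the sketch below extracts a few commonly cited lexical URL features (length, host dots, '@' presence, IP-like host, digit count, scheme) as an assumption-laden illustration, not the study's feature set.

```python
# Hedged sketch: lexical URL features often used in phishing detection.
# The particular features and the example URL are assumptions, not the
# feature set evaluated in this study.
from urllib.parse import urlparse

def url_features(url: str) -> dict:
    parsed = urlparse(url)
    host = parsed.netloc
    return {
        "url_length": len(url),
        "num_dots_in_host": host.count("."),
        "has_at_symbol": "@" in url,
        "has_ip_host": host.replace(".", "").isdigit(),
        "num_digits": sum(ch.isdigit() for ch in url),
        "uses_https": parsed.scheme == "https",
    }

print(url_features("http://paypal.secure-login.example.com/@update?id=123"))
```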
Teacher attitudes toward dyslexia: effects on teacher expectations and the academic achievement of students with dyslexia. | The present study examined teacher attitudes toward dyslexia and the effects of these attitudes on teacher expectations and the academic achievement of students with dyslexia compared to students without learning disabilities. The attitudes of 30 regular education teachers toward dyslexia were determined using both an implicit measure and an explicit, self-report measure. Achievement scores for 307 students were also obtained. Implicit teacher attitudes toward dyslexia related to teacher ratings of student achievement on a writing task and also to student achievement on standardized tests of spelling but not math for those students with dyslexia. Self-reported attitudes of the teachers toward dyslexia did not relate to any of the outcome measures. Neither the implicit nor the explicit measures of teacher attitudes related to teacher expectations. The results show implicit attitude measures to be a more valuable predictor of the achievement of students with dyslexia than explicit, self-report attitude measures. |
Compilation and Delayed Evaluation in APL | Most existing APL implementations are interpretive in nature,that is, each time an APL statement is encountered it is executedby a body of code that is perfectly general, i.e. capable ofevaluating any APL expression, and is in no way tailored to thestatement on hand. This costly generality is said to be justifiedbecause APL variables are typeless and thus can vary arbitrarily intype, shape, and size during the execution of a program. What thisargument overlooks is that the operational semantics of an APLstatement are not modified by the varying storage requirements ofits variables.
The first proposal for a non fully interpretive implementationwas the thesis of P. Abrams [1], in which a high level interpretercan defer performing certain operations by compiling code which alow level interpreter must later be called upon to execute. Thebenefit thus gained is that intelligence gathered from a widercontext can be brought to bear on the evaluation of asubexpression. Thus on evaluating (A+B)[I],only the addition A[I]+B[I] will beperformed. More recently, A. Perlis and several of his students atYale [9,10] have presented a scheme by which a full-fledged APLcompiler can be written. The compiled code generated can then bevery efficiently executed on a specialized hardware processor. Asimilar scheme is used in the newly released HP/3000 APL [12].
This paper builds on and extends the above ideas in severaldirections. We start by studying in some depth the two key notionsall this work has in common, namely compilation anddelayed evaluation in the context of APL. By delayedevaluation we mean the strategy of deferring the computation ofintermediate results until the moment they are needed. Thus largeintermediate expressions are not built in storage; instead theirelements are "streamed" in time. Delayed evaluation for APL wasprobably first proposed by Barton (see [8]).
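To illustrate the idea of delayed evaluation outside APL, here is a hedged Python sketch in which the sum A+B is represented lazily, so that indexing the deferred expression computes only A[I]+B[I] rather than materializing the whole intermediate array; the class name is invented for illustration.

```python
# Hedged sketch (not APL): a deferred elementwise sum whose elements are
# computed only when indexed, so (A+B)[I] costs one addition, not len(A).
class DeferredAdd:
    def __init__(self, a, b):
        self.a, self.b = a, b

    def __getitem__(self, i):
        return self.a[i] + self.b[i]   # computed on demand, never materialized

    def force(self):
        """Materialize the whole result only if it is really needed."""
        return [self[i] for i in range(len(self.a))]

A = list(range(1_000_000))
B = list(range(1_000_000))

expr = DeferredAdd(A, B)   # no million-element intermediate is built
print(expr[123456])        # only A[123456] + B[123456] is evaluated
```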
Many APL operators do not correspond to any real data operations. Instead their effect is to rename the elements of the array they act upon. A wide class of such operators, which we will call the grid selectors, can be handled by essentially pushing them down the expression tree and incorporating their effect into the leaf accessors. Semantically this is equivalent to the drag-along transformations described by Abrams. Performing this optimization will be shown to be an integral part of delayed evaluation.
In order to focus our attention on the above issues, we make a number of simplifying assumptions. We confine our attention to code compilation for single APL expressions, such as might occur in an "APL Calculator", where user defined functions are not allowed. Of course we will be critically concerned with the re-usability of the compiled code for future evaluations. We also ignore the distinctions among the various APL primitive types and assume that all our arrays are of one uniform numeric type. We have studied the situation without these simplifying assumptions, but plan to report on this elsewhere.
The following is a list of the main contributions of this paper.
• We present an algorithm for incorporating the selector operators into the accessors for the leaves of the expression tree. The algorithm runs in time proportional to the size of the tree, as opposed to its path length (which is the case for the algorithms of [10] and [12]).
Although arbitrary reshapes cannot be handled by the above algorithm, an especially important case can: that of a conforming reshape. The reshape A⍴B is called conforming if ⍴B is a suffix of A.
• By using conforming reshapes we can eliminate inner and outer products from the expression tree and replace them with scalar operators and reductions along the last dimension. We do this by introducing appropriate selectors on the product arguments, then eventually absorbing these selectors into the leaf accessors. The same mechanism handles scalar extension, the convention of making scalar operands of scalar operators conform to arbitrary arrays.
• Once products, scalar extensions, and selectors have been eliminated, what is left is an expression tree consisting entirely of scalar operators and reductions along the last dimension. As a consequence, during execution, the dimension currently being worked on obeys a strict stack-like discipline. This implies that we can generate extremely efficient code that is independent of the ranks of the arguments.
Several APL operators use the elements of their operands several times. A pure delayed evaluation strategy would require multiple reevaluations.
• We introduce a general buffering mechanism, called slicing, which allows portions of a subexpression that will be repeatedly needed to be saved, to avoid future recomputation. Slicing is well integrated with the evaluation on demand mechanism. For example, when operators that break the streaming are encountered, slicing is used to determine the minimum size buffer required between the order in which a subexpression can deliver its result, and the order in which the full expression needs it.
• The compiled code is very efficient. A minimal number of loop variables is maintained and accessors are shared among as many expression atoms as possible. Finally, the code generated is well suited for execution by an ordinary minicomputer, such as a PDP-11 or a Data General Nova. We have implemented this compiler on the Alto computer at Xerox PARC.
The plan of the paper is this: We start with a general discussion of compilation and delayed evaluation. Then we motivate the structures and algorithms we need to introduce by showing how to handle a wider and wider class of the primitive APL operators. We discuss various ways of tailoring an evaluator for a particular expression. Some of this tailoring is possible based only on the expression itself, while other optimizations require knowledge of the (sizes of) the atom bindings in the expression. The reader should always be alert to the kind of knowledge being used, for this affects the validity of the compiled code across reexecutions of a statement. |
Model Based Architecting and Construction of Embedded Systems | This workshop brought together researchers and practitioners interested in model-based software engineering for real-time embedded systems, with a particular focus on the use of architecture description languages, domain-specific design and implementation languages, languages for capturing non-functional constraints, and component and system description languages. Ten presenters proposed contributions on model-based analysis, transformation and synthesis, as well as tools, applications and patterns. Three break-out groups discussed the transition from requirements to architecture, design languages, and platform (in)dependence. This report summarises the workshop results. |
The IA-64 Architecture at Work | An instruction with a predicate value of true executes normally. If the predicate is false, the associated instruction—although issued—does not write its results to registers or memory. Research has shown predication to be effective at removing branches and at decreasing penalties from branch mispredicts [1]. A simple code example with a difficult-to-predict branch illustrates how predication can remove the branch. Figure 1a shows the C code for a classic if-then-else statement. In a traditional architecture, the processor loads the data from memory, compares the value of a(i).ptr with zero, and uses the compare's (cmp's) result in a conditional-branch instruction. Because of the conditional branch, a traditional compiler structures this code into four basic blocks, as shown in Figure 1b. The processor must execute the instructions of all four blocks serially, and branch instructions are barriers to ILP. Predication is used to remove the difficult-to-predict branch in the first basic block. In the IA-64 architecture, compare instructions generate two predicates, as shown in Figure 1c. Over the past several years, strategies to increase microprocessor performance have focused on finding more instruction level parallelism. ILP is basically the idea of finding several instructions to execute at the same time. By providing multiple functional units on which to execute instructions, computer architects expect to improve performance. However, two difficult problems limit ILP: • branch instructions, which introduce control dependencies, and • memory latency, the time it takes to retrieve data from memory. In the absence of new programming languages that are explicitly parallel, the task of "exposing" ILP falls to the compiler. In IA-64, Intel's upcoming 64-bit architecture (see the "IA-64 to Date" sidebar for current information), the compiler will play a pivotal role in using predication and control speculation to expose more ILP. Two code fragments are used to illustrate predication and control speculation. The fragments are scheduled with actual IA-64 instructions and are representative of general-purpose integer code, such as that found in computer aided design and database applications. A comparison of performance with and without the two features demonstrates how predication and control speculation can reduce the number of cycles required to execute an instruction and improve performance. PREDICATION The IA-64 architecture uses a full predication model, in which a compiler can append a predicate to all instructions. Predicates are simply tags that permit a program to execute instructions conditionally, depending on the … |
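IA-64 assembly is out of scope here, but the effect of if-conversion can be sketched in Python/NumPy as an analogy: both arms of the if-then-else are evaluated and a predicate mask selects per-element results, removing the data-dependent branch. The arrays and values are invented and do not correspond to the article's code fragments.

```python
# Hedged analogy to predication/if-conversion: evaluate both arms and select
# with a predicate mask instead of branching per element. Toy data only.
import numpy as np

ptr = np.array([0, 3, 0, 7, 2])        # stand-in for a(i).ptr values

# Branchy formulation (per-element if-then-else):
branchy = np.array([x * 2 if x != 0 else -1 for x in ptr])

# Predicated formulation: compute both arms, then select by predicate.
pred = ptr != 0                        # analogous to the cmp generating predicates
then_arm = ptr * 2
else_arm = np.full_like(ptr, -1)
predicated = np.where(pred, then_arm, else_arm)

assert (branchy == predicated).all()
print(predicated)
```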
Effect of Desensitising Laser Treatment on the Bond Strength of Full Metal Crowns: An In Vitro Comparative Study | BACKGROUND
Dentinal hypersensitivity is a very common complaint of patients undergoing crown and bridge restorations on vital teeth. Of the many desensitizing agents used to counter this issue, desensitizing laser treatment is emerging as one of the most successful treatment modality. However, the dentinal changes brought about by the desensitizing laser application could affect the bond strength of luting cements.
MATERIALS AND METHODS
Forty-eight freshly extracted maxillary first premolars, which were intact and morphologically similar, were selected for the study. The specimens were divided into two groups, an untreated control group and a desensitizing laser-treated group, which was exposed to an Erbium, Chromium: Yttrium, Scandium, Gallium, Garnet laser at 0.5 W potency for 15 s. Each of the above two groups was again randomly divided into two subgroups, onto which custom-fabricated full veneer metal crowns were luted using glass-ionomer and resin luting cements, respectively. Tensile bond strength of the luting cements was evaluated with the help of a Universal Testing Machine. Statistical analysis of the values was done using descriptive statistics, an independent samples test, and a two-way ANOVA test.
RESULTS
The tensile bond strength of crowns luted on desensitizing laser-treated specimens using self-adhesive resin cement showed a marginal increase, though it was not statistically significant.
CONCLUSION
The self-adhesive resin cement could be recommended as the luting agent of choice for desensitizing laser-treated abutment teeth, as it showed better bond strength. |
Customizable Multidiscipline Environments for Heat Transfer and Fluid Flow Modeling | Thankfully, the age of stand-alone fixed-input simulation tools is fading away in favor of more flexible and integrated solutions. “Concurrent engineering” once meant automating data translations between monolithic codes, but sophisticated users have demanded more native integration and more automated tools for designing, not just for evaluating point designs. Improvements in both interprocess communications technology and numerical solutions have gone a long way towards meeting those demands. This paper describes a small slice of a larger ongoing effort to satisfy current and future demands for integrated multidisciplinary tools that can be highly customized by end users or by third parties. Specifically, the ability to integrate fully featured thermal/fluid simulations into Microsoft’s Excel™ and other software is detailed. Users are now able not only to prepare custom user interfaces but also to use these codes as portals that allow integration activities at a larger scale. Previous enabling technologies are first described, then examples and repercussions of current capabilities are presented, and finally in-progress and future technologies are listed.
Patterns of Care and Course of Symptoms in Palliative Radiotherapy | To evaluate patterns of care as well as effectiveness and side effects of palliative treatment in four German radiation oncology departments. All referrals in four German radiation oncology departments (two university hospitals, one academic hospital, one private practice) were prospectively documented for 1 month in 2008 (2 months at one of the university hospitals). In palliatively irradiated patients, treatment aims and indications as well as treated sites and fractionation schedules were recorded. In addition, symptoms and side effects were analyzed with standardized questionnaires before and at the end of radiotherapy. During the observation period, 603 patients underwent radiation therapy in the four centers and 153 (24%, study population) were treated with palliative intent. Within the study, patients were most frequently treated for bone (34%) or brain (27%) metastases. 62 patients reported severe or very severe pain, 12 patients reported severe or very severe dyspnea, 27 patients reported neurological deficits or signs of cranial pressure, and 43 patients reported a poor or very poor sense of well-being. The most frequent goals were symptom relief (53%) or prevention of symptoms (46%). Life prolongation was intended in 37% of cases. A wide range of fractionation schedules was applied, with total doses ranging from 3–61.2 Gy. Of the patients, 73% received a slightly hypofractionated treatment schedule with doses of > 2.0 Gy to ≤ 3.0 Gy per fraction, and 12% received moderately to highly hypofractionated therapy with doses of > 3.0 Gy to 8.0 Gy. Radiation therapy led to a significant improvement of well-being (35% of patients) and reduction of symptoms, especially with regard to pain (66%), dyspnea (61%), and neurological deficits (60%). Therapy was very well tolerated, with only 4.5% grade I or II acute toxicities being observed. Unscheduled termination was observed in 19 patients (12%). Palliative radiation therapy is effective in reducing symptoms, increases subjective well-being, and has minimal side effects. More studies are necessary for subgroup analyses and for clarifying the different goals in palliative radiotherapy. Evaluation of everyday practice, the course of symptoms, and acute side effects of palliative radiotherapy in four radiation oncology departments. All initial presentations at the four departments (two university hospitals, one teaching hospital, and one private practice) were prospectively documented and evaluated for 1 month in 2008 (2 months at one of the university hospitals). For palliatively irradiated patients, the indications, treatment goals, irradiated regions, treatment concepts, and course of treatment were documented. Clinical symptoms and acute side effects were recorded in a standardized manner at the beginning and at the end of radiotherapy. During the observation period, 603 patients received radiotherapy; 153 patients (24%) were irradiated with palliative intent, mostly for bone metastases (34%) or brain metastases (27%). The most frequent treatment goals were symptom relief (53%) or prevention of clinical symptoms (46%). At baseline, 66 patients reported moderate or severe pain, 12 patients moderate or severe dyspnea, 27 patients moderate or severe neurological deficits or signs of raised intracranial pressure, and 43 patients a less good or poor sense of well-being. In 37% of cases, radiotherapy also aimed at life prolongation. The total doses applied ranged from 3–61.2 Gy. 73% of patients received moderately hypofractionated radiotherapy (doses per fraction > 2.0 Gy to ≤ 3.0 Gy), and 12% of treatments were delivered with doses per fraction of > 3.0 Gy to 8.0 Gy. General well-being was significantly improved in 34% of patients at the end of radiotherapy. 66% of patients reported significant pain relief, dyspnea was significantly relieved in 61%, and neurological deficits or signs of raised intracranial pressure improved in 60%. Treatment was well tolerated, with 4.5% grade I–II toxicities. 19 treatments (12%) were terminated prematurely. Patterns of care and the course of symptoms were successfully documented in routine practice. Palliative radiotherapy was well tolerated, had few side effects, and was effective with regard to general well-being and symptom relief. Further studies with larger case numbers are required for subgroup analyses and to differentiate the various endpoints of palliative radiotherapy. |
Why Are Companies Offshoring Innovation? The Emerging Global Race for Talent | This paper empirically studies the determinants of companies' decisions to offshore innovation activities. It uses survey data from the international Offshoring Research Network project to estimate the impact of managerial intentionality, past experience, and environmental factors on the probability of offshoring innovation projects. The results show that the emerging shortage of highly skilled science and engineering talent in the US and, more generally, the need to access qualified personnel are important explanatory factors for offshoring innovation decisions. Moreover, in contrast to the drivers of many other offshored functions, labor arbitrage is less important than other forms of cost savings. The paper concludes with a discussion of the changing dynamics underlying the offshoring of innovation activities, suggesting that companies are entering a global race for talent. |
Text Detection, Tracking and Recognition in Video: A Comprehensive Survey | The intelligent analysis of video data is currently in wide demand because a video is a major source of sensory data in our lives. Text is a prominent and direct source of information in video, while the recent surveys of text detection and recognition in imagery focus mainly on text extraction from scene images. Here, this paper presents a comprehensive survey of text detection, tracking, and recognition in video with three major contributions. First, a generic framework is proposed for video text extraction that uniformly describes detection, tracking, recognition, and their relations and interactions. Second, within this framework, a variety of methods, systems, and evaluation protocols of video text extraction are summarized, compared, and analyzed. Existing text tracking techniques, tracking-based detection and recognition techniques are specifically highlighted. Third, related applications, prominent challenges, and future directions for video text extraction (especially from scene videos and web videos) are also thoroughly discussed. |
Deep Metric Learning with BIER: Boosting Independent Embeddings Robustly | Learning similarity functions between image pairs with deep neural networks yields highly correlated activations of embeddings. In this work, we show how to improve the robustness of such embeddings by exploiting the independence within ensembles. To this end, we divide the last embedding layer of a deep network into an embedding ensemble and formulate the task of training this ensemble as an online gradient boosting problem. Each learner receives a reweighted training sample from the previous learners. Further, we propose two loss functions which increase the diversity in our ensemble. These loss functions can be applied either for weight initialization or during training. Together, our contributions leverage large embedding sizes more effectively by significantly reducing correlation of the embedding and consequently increase retrieval accuracy of the embedding. Our method works with any differentiable loss function and does not introduce any additional parameters during test time. We evaluate our metric learning method on image retrieval tasks and show that it improves over state-of-the-art methods on the CUB-200-2011, Cars-196, Stanford Online Products, In-Shop Clothes Retrieval and VehicleID datasets. Therefore, our findings suggest that by dividing deep networks at the end into several smaller and diverse networks, we can significantly reduce overfitting. |
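To sketch the core idea in a hedged way: the final embedding is split into several smaller learners, and training pairs are reweighted for later learners depending on how the earlier ones performed. The toy contrastive loss and the exponential reweighting below are simplified stand-ins (NumPy), not the paper's actual online gradient boosting formulation; all function names are illustrative.

```python
import numpy as np

def split_embedding(z, num_learners):
    """Divide one embedding vector of size D into `num_learners` sub-embeddings."""
    return np.split(z, num_learners)

def pair_loss(e1, e2, same_class, margin=0.5):
    """Toy contrastive loss on one sub-embedding pair (cosine distance)."""
    d = 1.0 - np.dot(e1, e2) / (np.linalg.norm(e1) * np.linalg.norm(e2) + 1e-8)
    return d if same_class else max(0.0, margin - d)

def ensemble_pair_losses(z1, z2, same_class, num_learners=3):
    """Each learner sees the pair reweighted by how poorly earlier learners did;
    a simplified stand-in for the online gradient boosting used in BIER."""
    weights, losses = [], []
    w = 1.0
    for e1, e2 in zip(split_embedding(z1, num_learners), split_embedding(z2, num_learners)):
        l = pair_loss(e1, e2, same_class)
        weights.append(w)
        losses.append(w * l)
        w = np.exp(l)          # hard pairs get up-weighted for the next learner
    return losses, weights

rng = np.random.default_rng(0)
z1, z2 = rng.normal(size=12), rng.normal(size=12)   # D = 12 split into 3 learners of size 4
print(ensemble_pair_losses(z1, z2, same_class=True))
```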
Using a novel computational drug-repositioning approach (DrugPredict) to rapidly identify potent drug candidates for cancer treatment | Computation-based drug-repurposing/repositioning approaches can greatly speed up the traditional drug discovery process. To date, systematic and comprehensive computation-based approaches to identify and validate drug-repositioning candidates for epithelial ovarian cancer (EOC) have not been undertaken. Here, we present a novel drug discovery strategy that combines a computational drug-repositioning system (DrugPredict) with biological testing in cell lines in order to rapidly identify novel drug candidates for EOC. DrugPredict exploited unique repositioning opportunities rendered by a vast amount of disease genomics, phenomics, drug treatment, and genetic pathway data and uniquely revealed that non-steroidal anti-inflammatories (NSAIDs) rank just as high as currently used ovarian cancer drugs. As epidemiological studies have reported decreased incidence of ovarian cancer associated with regular intake of NSAIDs, we assessed whether NSAIDs could have chemoadjuvant applications in EOC and found that (i) the NSAID Indomethacin induces robust cell death in primary patient-derived platinum-sensitive and platinum-resistant ovarian cancer cells and ovarian cancer stem cells and (ii) downregulation of β-catenin partially drives the effects of Indomethacin in cisplatin-resistant cells. In summary, we demonstrate that DrugPredict represents an innovative computational drug-discovery strategy to uncover drugs that are routinely used for other indications and could be effective in treating various cancers, thus introducing a potentially rapid and cost-effective translational opportunity. As NSAIDs are already in routine use in gynecological treatment regimens and have an acceptable safety profile, our results provide a rationale for testing NSAIDs as potential chemoadjuvants in EOC patient trials. |
Portable Option Discovery for Automated Learning Transfer in Object-Oriented Markov Decision Processes | We introduce a novel framework for option discovery and learning transfer in complex domains that are represented as object-oriented Markov decision processes (OO-MDPs) [Diuk et al., 2008]. Our framework, Portable Option Discovery (POD), extends existing option discovery methods, and enables transfer across related but different domains by providing an unsupervised method for finding a mapping between object-oriented domains with different state spaces. The framework also includes heuristic approaches for increasing the efficiency of the mapping process. We present the results of applying POD to Pickett and Barto’s [2002] PolicyBlocks and MacGlashan’s [2013] Option-Based Policy Transfer in two application domains. We show that our approach can discover options effectively, transfer options among different domains, and improve learning performance with low computational overhead. |
Production Constraints and the NAIRU | This paper argues that the production constraints in the basic NAIRU model should be distinguished by type: capital constraints and labour constraints. It notes the failure to incorporate this phenomenon in standard macro models. Using panel data for UK manufacturing over 80 quarters we show that capital constraints became relatively more important during the 1980s as industry failed to match the increase in labour flexibility with rising capital investment. |
Role of Vascular Oxidative Stress in Obesity and Metabolic Syndrome | Obesity is associated with vascular diseases that are often attributed to vascular oxidative stress. We tested the hypothesis that vascular oxidative stress could induce obesity. We previously developed mice that overexpress p22phox in vascular smooth muscle, tg(sm/p22phox), which have increased vascular ROS production. At baseline, tg(sm/p22phox) mice have a modest increase in body weight. With high-fat feeding, tg(sm/p22phox) mice developed exaggerated obesity and increased fat mass. Body weight increased from 32.16 ± 2.34 g to 43.03 ± 1.44 g in tg(sm/p22phox) mice (vs. 30.81 ± 0.71 g to 37.89 ± 1.16 g in the WT mice). This was associated with development of glucose intolerance, reduced HDL cholesterol, and increased levels of leptin and MCP-1. Tg(sm/p22phox) mice displayed impaired spontaneous activity and increased mitochondrial ROS production and mitochondrial dysfunction in skeletal muscle. In mice with vascular smooth muscle-targeted deletion of p22phox (p22phox(loxp/loxp)/tg(smmhc/cre) mice), high-fat feeding did not induce weight gain or leptin resistance. These mice also had reduced T-cell infiltration of perivascular fat. In conclusion, these data indicate that vascular oxidative stress induces obesity and metabolic syndrome, accompanied by and likely due to exercise intolerance, vascular inflammation, and augmented adipogenesis. These data indicate that vascular ROS may play a causal role in the development of obesity and metabolic syndrome. |
Examinations, Inequality, and Curriculum Reform: An Essay Review of Richard Teese's Academic Success and Social Power: Examinations and Inequality | All educational reforms have particular histories. And all of them are driven not only by technical considerations but also profoundly by cultural, political, and economic projects and by ideological visions of what schools should do. In this contribution to my Reviewing Policy section of Educational Policy, I discuss a significant contribution to our understanding of these projects. “Academic Success and Social Power” focuses on the growth of crucial aspects of a number of these visions, on the conflicts that they often entail, and on who continues to benefit the most from them over time. Richard Teese directs most of his attention to the reform of curriculum and to |
An Advanced External Compensation System for Active Matrix Organic Light-Emitting Diode Displays With Poly-Si Thin-Film Transistor Backplane | An advanced method for externally compensating the nonuniform electrical characteristics of polycrystalline silicon thin-film transistors (TFTs) and the degradation of organic light-emitting diode (OLED) devices is proposed, and the method is verified using a 14.1-in active matrix OLED (AMOLED) panel. The proposed method provides an effective solution for high-image-quality AMOLED displays by removing IR-drop and temperature effects during the sensing and displaying operations of the external compensation method. Experimental results show that the electrical characteristics of TFTs and OLEDs are successfully sensed, and that the stained image pattern due to the nonuniform luminance error and the differential aging of the OLED is removed. The luminance error range without compensation is from -6.1% to 9.0%, but it is from -1.1% to 1.2% using the external compensation at the luminance level of 120 cd/m2 in a 14.1-inch AMOLED panel. |
What counts as effective communication in nursing? Evidence from nurse educators' and clinicians' feedback on nurse interactions with simulated patients. | AIM
To examine the feedback given by nurse educators and clinicians on the quality of communication skills of nurses in interactions with simulated patients.
BACKGROUND
The quality of communication in interactions between nurses and patients has a major influence on patient outcomes. To support the development of effective nursing communication in clinical practice, a good understanding of what constitutes effective communication is helpful.
DESIGN
An exploratory design was used involving individual interviews, focus groups and written notes from participants and field notes from researchers to investigate perspectives on nurse-patient communication.
METHODS
Focus groups and individual interviews were held between August 2010 and September 2011 with a purposive sample of 15 nurse educators and clinicians who observed videos of interactions between nurses and simulated patients. These participants were asked to give oral feedback on the quality and content of these interactions. All data collected were transcribed verbatim, and all written notes and field notes were also transcribed. Thematic analysis of the data was undertaken.
FINDINGS
Four major themes related to nurse-patient communication were derived from the educators' and clinicians' feedback: approach to patients and patient care, manner towards patients, techniques used for interacting with patients and generic aspects of communication.
CONCLUSION
This study has added to previous research by contributing grounded evidence from a group of nurse educators and clinicians on the aspects of communication that are relevant for effective nurse-patient interactions in clinical practice. |
Closed Multidimensional Sequential Pattern Mining | We propose a new method, called closed multidimensional sequential pattern mining, for mining multidimensional sequential patterns. The new method is an integration of closed sequential pattern mining and closed itemset pattern mining. Based on this method, we show that (1) the number of complete closed multidimensional sequential patterns is not larger than the number of complete multidimensional sequential patterns, and (2) the set of complete closed multidimensional sequential patterns covers the complete resulting set of multidimensional sequential patterns. In addition, mining using closed itemset pattern mining on multidimensional information would mine only the multidimensional information associated with mined closed sequential patterns, and mining using closed sequential pattern mining on sequences would mine only the sequences associated with mined closed itemset patterns. |
A Cascaded Inception of Inception Network With Attention Modulated Feature Fusion for Human Pose Estimation | Accurate keypoint localization of human pose needs diversified features: the high level for contextual dependencies and the low level for detailed refinement of joints. The importance of the two factors varies from case to case, and how to use the features efficiently is still an open problem. Existing methods have limitations in preserving low-level features, adaptively adjusting the importance of different levels of features, and modeling the human perception process. This paper presents three novel techniques, step by step, to efficiently utilize different levels of features for human pose estimation. Firstly, an inception of inception (IOI) block is designed to emphasize the low-level features. Secondly, an attention mechanism is proposed to adjust the importance of individual levels according to the context. Thirdly, a cascaded network is proposed to sequentially localize the joints, enforcing message passing from joints of stand-alone parts like the head and torso to remote joints like the wrist or ankle. Experimental results demonstrate that the proposed method achieves the state-of-the-art performance on both MPII and |
Planning Safe and Legible Hand-over Motions for Human-Robot Interaction | Human-robot interaction brings new challenges to motion planning. The human, who is generally treated as just an obstacle for the robot, needs to be considered as a separate entity that has a position, a posture, a field of view, and an activity. These properties can be represented as new constraints in the motion generation mechanisms. In this paper we present three human-related constraints on motion planning for object hand-over scenarios. We also describe a new planning method that takes these constraints into account. The resulting system automatically computes where the object should be transferred to the human, and the motion of the whole robot, considering the human's comfort. |
Regional Growth in the "New" Economy | There is a large body of knowledge about the regional economic growth process that some would argue contributes little to the analysis of such growth in today's "new" economy. This argument is examined in some detail and shown to exaggerate the shortcomings of what might now be considered the mainstream view of how to study the economic growth process of regions. There are nevertheless some shortcomings in the mainstream view for which new paradigms might be needed. Possible directions for future research that will remedy some of the current deficiencies in the mainstream models are explored. |
Fear the REAPER: A System for Automatic Multi-Document Summarization with Reinforcement Learning | This paper explores alternate algorithms, reward functions and feature sets for performing multi-document summarization using reinforcement learning with a high focus on reproducibility. We show that ROUGE results can be improved using a unigram and bigram similarity metric when training a learner to select sentences for summarization. Learners are trained to summarize document clusters based on various algorithms and reward functions and then evaluated using ROUGE. Our experiments show a statistically significant improvement of 1.33%, 1.58%, and 2.25% for ROUGE-1, ROUGE-2 and ROUGE-L scores, respectively, when compared with the performance of the state of the art in automatic summarization with reinforcement learning on the DUC2004 dataset. Furthermore, query-focused extensions of our approach show improvements of 1.37% and 2.31% for ROUGE-2 and ROUGE-SU4, respectively, over query-focused extensions of the state of the art with reinforcement learning on the DUC2006 dataset. |
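A rough sketch of the kind of unigram-plus-bigram similarity reward mentioned above, in the style of ROUGE recall (Python); the equal 0.5/0.5 weighting and the normalisation are assumptions for illustration, not the exact reward used by the system.

```python
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def overlap_recall(candidate, reference, n):
    """Fraction of the reference's n-grams covered by the candidate (ROUGE-N recall)."""
    cand, ref = ngrams(candidate, n), ngrams(reference, n)
    matched = sum(min(cnt, cand[g]) for g, cnt in ref.items())
    total = sum(ref.values())
    return matched / total if total else 0.0

def similarity_reward(summary_sents, reference_sents, w_uni=0.5, w_bi=0.5):
    """Reward for an RL summariser: weighted unigram + bigram overlap with the reference.
    The 0.5/0.5 weights are illustrative assumptions."""
    cand = [t for s in summary_sents for t in s.lower().split()]
    ref = [t for s in reference_sents for t in s.lower().split()]
    return w_uni * overlap_recall(cand, ref, 1) + w_bi * overlap_recall(cand, ref, 2)

print(similarity_reward(["the cat sat on the mat"], ["a cat sat on a mat"]))
```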
A protection scheme against DC faults VSC based DC systems with bus capacitors | This paper describes a novel protection method to limit the fault currents caused by short circuits in voltage source converter (VSC) based DC systems with capacitors connected to the DC bus. Analyzing the development of the DC fault current in such systems shows that capacitor discharge dominates the very rapid rise of the fault current at the instant of a fault. If this steep fault-current surge is not limited, the energy stored in the capacitor poses a hazard to personnel as well as to connected equipment. If the discharging current from the DC bus capacitor is not limited, it also demands a much higher breaking capability from the DC breakers. Therefore, a novel protection method is proposed to limit DC fault currents by detecting and interrupting the DC capacitor discharge using a solid-state circuit breaker (capacitor SSCB) connected in series with the DC bus capacitor. Implementing this method effectively reduces the peak value of the fault current and alleviates the current stress on the diodes in the VSC bridges, while requiring only local current information. It needs no external control signals, and the power loss of the capacitor SSCB is relatively low. Both simulations and tests have proved this method to be effective. |
A robust feature extraction algorithm based on class-Modular Image Principal Component Analysis for face verification | Face verification systems reach good performance under ideal environmental conditions. Conversely, they are very sensitive to non-controlled environments. This work proposes the class-Modular Image Principal Component Analysis (cMIMPCA) algorithm for face verification. It extracts local and global information from the users' faces, aiming to reduce the effects caused by illumination, facial expression, and head pose changes. Experimental results performed over three well-known face databases showed that cMIMPCA obtains promising results for the face verification task. |
Learning and Teaching Styles In Foreign and Second Language Education | The ways in which an individual characteristically acquires, retains, and retrieves information are collectively termed the individual’s learning style. Mismatches often occur between the learning styles of students in a language class and the teaching style of the instructor, with unfortunate effects on the quality of the students’ learning and on their attitudes toward the class and the subject. This paper defines several dimensions of learning style thought to be particularly relevant to foreign and second language education, outlines ways in which certain learning styles are favored by the teaching styles of most language instructors, and suggests steps to address the educational needs of all students in foreign language classes. Students learn in many ways: by seeing and hearing; reflecting and acting; reasoning logically and intuitively; memorizing and visualizing. Teaching methods also vary. Some instructors lecture, others demonstrate or discuss; some focus on rules and others on examples; some emphasize memory and others understanding. How much a given student learns in a class is governed in part by that student’s native ability and prior preparation but also by the compatibility of his or her characteristic approach to learning and the instructor’s characteristic approach to teaching. Learning styles have been extensively discussed in the educational psychology literature (Claxton & Murrell 1987; Schmeck 1988) and specifically in the context of …
Comparative Deep Learning of Hybrid Representations for Image Recommendations | In many image-related tasks, learning expressive and discriminative representations of images is essential, and deep learning has been studied for automating the learning of such representations. Some user-centric tasks, such as image recommendations, call for effective representations of not only images but also the preferences and intents of users over images. Such representations are termed hybrid and are addressed via a deep learning approach in this paper. We design a dual-net deep network, in which the two sub-networks map input images and preferences of users into the same latent semantic space, and the distances between images and users in the latent space are then calculated to make decisions. We further propose a comparative deep learning (CDL) method to train the deep network, using a pair of images compared against one user to learn the pattern of their relative distances. CDL embraces much more training data than naive deep learning, and thus achieves superior performance, at no cost of increased network complexity. Experimental results with real-world data sets for image recommendations have shown that the proposed dual-net network and CDL greatly outperform other state-of-the-art image recommendation solutions. |
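The comparative training signal can be sketched with a toy pairwise hinge objective: a user and two images are mapped into one latent space, and the preferred image should land closer to the user than the non-preferred one by a margin. The linear maps below stand in for the paper's deep sub-networks, and the dimensions and margin are placeholder assumptions (NumPy).

```python
import numpy as np

rng = np.random.default_rng(1)
D_IMG, D_USER, D_LATENT = 8, 5, 4
W_img = rng.normal(scale=0.1, size=(D_LATENT, D_IMG))    # stand-in for the image sub-network
W_usr = rng.normal(scale=0.1, size=(D_LATENT, D_USER))   # stand-in for the user sub-network

def comparative_hinge_loss(user, img_pos, img_neg, margin=1.0):
    """User and both images are mapped into one latent space; the image the user
    prefers (img_pos) should end up closer to the user than img_neg, by a margin."""
    u = W_usr @ user
    d_pos = np.sum((W_img @ img_pos - u) ** 2)
    d_neg = np.sum((W_img @ img_neg - u) ** 2)
    return max(0.0, margin + d_pos - d_neg)

user = rng.normal(size=D_USER)
img_pos, img_neg = rng.normal(size=D_IMG), rng.normal(size=D_IMG)
print(comparative_hinge_loss(user, img_pos, img_neg))
```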
A PCell design methodology for automatic layout generation of spiral inductor using skill script | With advances in technology, higher-level integration is coming to the fore, which demands components with high performance, efficiency, and short time to market. In the VLSI domain, layout design is quite a lengthy process. The spiral inductor, being one of the crucial components in RFIC design, needs to exhibit high performance and design flexibility. Implementation of parameterized cells (PCells) for modeling on-chip spiral inductors overcomes this issue. PCell design proves quite flexible and less time consuming compared to the classical inductor design approach. In this paper, an efficient PCell design technique for automatic layout generation of on-chip spiral inductors using Cadence SKILL scripts is presented. To support the concept, PCells for automatic layout generation of square, hexagonal, and octagonal spiral inductors are designed using SKILL scripts in Virtuoso. The spiral inductor characteristics are further validated through EM simulations using ADS Momentum. |
Multiobjective Modulated Model Predictive Control for a Multilevel Solid-State Transformer | Finite control set model predictive control (FCS-MPC) offers many advantages over more traditional control techniques, such as the ability to avoid cascaded control loops, easy inclusion of constraints, and fast transient response of the control system. This control scheme has recently been applied to several power conversion systems, such as two-level, three-level, or multilevel converters, matrix converters, etc. Unfortunately, because of the absence of a modulation strategy, this approach produces spread-spectrum harmonics which are difficult to filter effectively. This may result in degraded power quality when compared to more traditional control schemes. Furthermore, high switching frequencies may be needed, considering the limited number of switching states in the converter. This paper presents a novel multiobjective modulated predictive control strategy, which preserves the desired characteristics of FCS-MPC but produces superior waveform quality. The proposed method is validated by experimental tests on a seven-level cascaded H-bridge back-to-back converter and compared to a classic MPC scheme. |
Production and Optimization of Physicochemical Parameters of Cellulase Using Untreated Orange Waste by Newly Isolated Emericella variecolor NS3. | Cellulase enzymes have versatile industrial applications. This study was directed towards the isolation, production, and characterization of a cellulase enzyme system. Among the five isolated fungal cultures, Emericella variecolor NS3 showed maximum cellulase production using untreated orange peel waste as substrate under solid-state fermentation (SSF). Maximum enzyme production of 31 IU/gds (per gram of dry substrate) was noticed at a 6.0 g concentration of orange peel. Further, 50 °C was recorded as the optimum temperature for cellulase activity, and thermal stability for 240 min was observed at this temperature. In addition, the crude enzyme was stable at pH 5.0 and retained its complete relative activity in the presence of Mn2+ and Fe3+. This study explored the production of a crude enzyme system using biological waste, with future potential for research and industrial applications. |
Foundations of Measurement Theory Applied to the Evaluation of Dependability Attributes | Increasing interest is being paid to quantitative evaluation based on measurements of dependability attributes and metrics of computer systems and infrastructures. Although measurands are generally sensibly identified, differing approaches make it difficult to compare results. Moreover, measurement tools are seldom recognized for what they are: measuring instruments. In this paper, many measurement tools from the literature are critically evaluated in the light of metrology concepts and rules. With no claim of being exhaustive, the paper (i) investigates if and how deeply such tools have been validated in accordance with measurement theory, and (ii) tries to evaluate (where possible) their measurement properties. The intention is to take advantage of the knowledge available in a recognized discipline such as metrology and to propose criteria and indicators taken from that discipline to improve the quality of measurements performed in the evaluation of dependability attributes. |
2D Human Pose Estimation: New Benchmark and State of the Art Analysis | Human pose estimation has made significant progress during the last years. However, current datasets are limited in their coverage of the overall pose estimation challenges. Still, they serve as the common sources on which to evaluate, train, and compare different models. In this paper we introduce a novel benchmark "MPII Human Pose" that makes a significant advance in terms of diversity and difficulty, a contribution that we feel is required for future developments in human body models. This comprehensive dataset was collected using an established taxonomy of over 800 human activities [1]. The collected images cover a wider variety of human activities than previous datasets, including various recreational, occupational and householding activities, and capture people from a wider range of viewpoints. We provide a rich set of labels including positions of body joints, full 3D torso and head orientation, occlusion labels for joints and body parts, and activity labels. For each image we provide adjacent video frames to facilitate the use of motion information. Given these rich annotations, we perform a detailed analysis of leading human pose estimation approaches, gaining insights into the successes and failures of these methods. |
Metamorphic Testing and Testing with Special Values | The problem of testing programs without test oracles is well known. A commonly used approach is to use special values in testing, but this is often insufficient to ensure program correctness. This paper demonstrates the use of metamorphic testing to uncover faults in programs that could not be detected by special test values. Metamorphic testing can be used as a complementary test method to special value testing. In this paper, the sine function and a search function are used as examples to demonstrate the usefulness of metamorphic testing. The paper also examines metamorphic relationships and the extent of their usefulness in program testing. |
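For the sine example, metamorphic relations such as sin(x) = sin(π − x) or sin(−x) = −sin(x) give follow-up test cases whose outputs can be checked against each other even without an oracle for arbitrary x. A minimal sketch (Python; the chosen relations, input range, and tolerance are our own illustrative choices):

```python
import math, random

def metamorphic_checks_for_sine(sine, trials=1000, tol=1e-9):
    """Check metamorphic relations of sine on random inputs; no exact oracle needed."""
    rng = random.Random(42)
    for _ in range(trials):
        x = rng.uniform(-10.0, 10.0)
        # MR1: sin(x) == sin(pi - x)
        assert abs(sine(x) - sine(math.pi - x)) < tol
        # MR2: sin(-x) == -sin(x)
        assert abs(sine(-x) + sine(x)) < tol
        # MR3: periodicity, sin(x + 2*pi) == sin(x)
        assert abs(sine(x + 2 * math.pi) - sine(x)) < tol
    return True

print(metamorphic_checks_for_sine(math.sin))   # passes; a faulty sine would trip an assert
```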
Validation of National Institutes of Health global scoring system for chronic graft-versus-host disease (GVHD) according to overall and GVHD-specific survival. | A new severity grading system for graft-versus-host disease (GVHD) was established by the National Institutes of Health (NIH) consensus criteria (NCC). However, its prognostic value still needs to be validated. Four hundred twenty-five consecutive patients who survived beyond 100 days after allogeneic stem cell transplantation were reviewed and reclassified using NCC. GVHD-specific survival (GSS) and cumulative incidence of relapse were compared according to the NIH global score at the onset and peak of chronic GVHD (cGVHD). Of 346 patients with cGVHD diagnosed by the Revised Seattle Criteria, 317 patients were reclassified according to the NCC as classic cGVHD (n = 144) and overlap syndrome (n = 173). The NIH global scores at onset were mild (43.2%), moderate (42.3%), and severe (14.5%), whereas more moderate (55.5%) and severe (31.6%) cGVHD was observed at the peak of cGVHD. With a median follow-up duration of 34 months, the 5-year GSS was significantly worse for the severe group than the moderate/mild groups at onset and at peak: 50.9% ± 7.8% versus 89.7% ± 3.2% versus 93.5% ± 2.4% at onset (P < .001) and 69.1% ± 5.2% versus 93.2% ± 2.1% versus 97.3% ± 2.7% at peak (P < .001). Severe NIH global score at onset and peak were confirmed as a poor prognostic factor for GSS in multivariate analysis. The cumulative incidence of relapse did not differ among the severity groups at onset or peak. In conclusion, the new NIH global scoring system was shown to differentiate a high-risk group of patients (with severe grade cGVHD) in terms of long-term transplant outcomes. |
The role of benzodiazepines in breathlessness: a single site, open label pilot of sustained release morphine together with clonazepam. | BACKGROUND
Breathlessness at rest or on minimal exertion despite optimal treatment of underlying cause(s) is distressing and prevalent. Opioids can reduce the intensity of chronic refractory breathlessness and an anxiolytic may be of benefit. This pilot aimed to determine the safety and feasibility of conducting a phase III study on the intensity of breathlessness by adding regular benzodiazepine to low-dose opioid.
METHODS
This is a single-site, open-label phase II study of the addition of regular clonazepam 0.5 mg nocte orally to Kapanol(R) 10 mg (sustained-release morphine sulphate) orally mane, together with docusate/sennosides, in people with a modified Medical Research Council Scale score ≥ 2. Breathlessness intensity on day four was the efficacy outcome. Participants could extend for another 10 days if they achieved a > 15% reduction over their own baseline breathlessness intensity.
RESULTS
Eleven people received the trial medication (eight males; median age 78 years (68 to 89); all had COPD; median Karnofsky 70 (50 to 80); six were on long-term home oxygen). Ten people completed day four. One person withdrew because of unsteadiness on day four. Five participants reached the 15% reduction, but only three went on to the extension study, all completing without toxicity.
CONCLUSION
This study was safe and feasible, and there appears to be a group who derive benefits comparable to titrated opioids. Given the widespread use of benzodiazepines for the symptomatic treatment of chronic refractory breathlessness and its poor evidence base, there is justification for a definitive phase III study. |
Process Integration of Model-Based Design and Production-Code Generation in the Multi-User / Multi-Project Development Environment at Continental Teves – Part 1 | Model-Based Design (MBD) and Production-Code Generation (PCG) are in widespread use throughout the automotive industry. Their usage has shown remarkable results in recent years and was mainly driven by the need for efficiency and quality. This first paper focuses on the improvement potential of MBD tools, highlighting their integration into a company-specific software development process. The integration will be considered on the basis of Continental Teves’ worldwide development environment. The relationship to standards and formal aspects of software development and process improvement in general will be discussed. Furthermore, the paper deals with questions about practical issues, such as tool integration, process safety, reproducibility, and tool certification. In this context it addresses the impact of tool integration and automation on efficiency and quality. |
A graph based approach to scientific paper recommendation | When looking for recently published scientific papers, a researcher usually focuses on the topics related to her/his scientific interests. The task of a recommender system is to provide a list of unseen papers that match these topics. The core idea of this paper is to leverage the latent topics of interest in the publications of the researchers, and to take advantage of the social structure of the researchers (relations among researchers in the same field) as reliable sources of knowledge to improve the recommendation effectiveness. In particular, we introduce a hybrid approach to the task of scientific papers recommendation, which combines content analysis based on probabilistic topic modeling and ideas from collaborative filtering based on a relevance-based language model. We conducted an experimental study on DBLP, which demonstrates that our approach is promising. |
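One way to picture the hybrid idea is a simple blend of a content score (similarity between a paper's topic distribution and the researcher's own topics) and a social score from related researchers. The linear blend below is a deliberately naive stand-in (NumPy): the paper combines probabilistic topic modeling with a relevance-based language model, not this weighted average, and all names and the alpha weight are illustrative.

```python
import numpy as np

def topic_similarity(paper_topics, researcher_topics):
    """Cosine similarity between a paper's topic distribution and the aggregated
    topic distribution of a researcher's own publications."""
    num = np.dot(paper_topics, researcher_topics)
    den = np.linalg.norm(paper_topics) * np.linalg.norm(researcher_topics) + 1e-12
    return num / den

def social_score(paper_id, related_ratings):
    """Average interest that researchers connected to the target researcher
    (e.g., co-authors in the same field) have already expressed in this paper."""
    ratings = related_ratings.get(paper_id, [])
    return float(np.mean(ratings)) if ratings else 0.0

def hybrid_score(paper_topics, researcher_topics, paper_id, related_ratings, alpha=0.7):
    """Naive linear blend of content-based and social (collaborative) evidence."""
    return alpha * topic_similarity(paper_topics, researcher_topics) \
        + (1 - alpha) * social_score(paper_id, related_ratings)

researcher = np.array([0.5, 0.3, 0.2])          # researcher's latent topic interests
candidate = np.array([0.6, 0.2, 0.2])           # candidate paper's topic distribution
related_ratings = {"paper-42": [1.0, 0.5]}      # interest signals from related researchers
print(hybrid_score(candidate, researcher, "paper-42", related_ratings))
```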
Recruitment of Seemingly Overeducated Personnel: Insider-Outsider Effects of Fair Employee Selection Practices | Fair employment policies constrain employee selection: specifically, applicants’ professional experience can substitute for formal education. However, reflecting firm-specific job requirements, this substitution rule applies less strictly to applicants from outside the firm. Further, setting low educational job requirements decreases the risk of disparate impact charges. Using data from a large US public employer, we show that successful outsider candidates exhibit higher levels of formal education than insiders. Also, this gap in educational attainments between outsiders and insiders widens with lower advertised degree requirements. More generally, we find strong insider-outsider effects on hiring decisions. |
Operant conditioning in invertebrates | Learning to anticipate future events on the basis of past experience with the consequences of one's own behavior (operant conditioning) is a simple form of learning that humans share with most other animals, including invertebrates. Three model organisms have recently made significant contributions towards a mechanistic model of operant conditioning, because of their special technical advantages. Research using the fruit fly Drosophila melanogaster implicated the ignorant gene in operant conditioning in the heat-box, research on the sea slug Aplysia californica contributed a cellular mechanism of behavior selection at a convergence point of operant behavior and reward, and research on the pond snail Lymnaea stagnalis elucidated the role of a behavior-initiating neuron in operant conditioning. These insights demonstrate the usefulness of a variety of invertebrate model systems to complement and stimulate research in vertebrates. |
Using deep learning to detect price change indications in financial markets | Forecasting financial time-series has long been among the most challenging problems in financial market analysis. In order to recognize the correct circumstances to enter or exit the markets, investors usually employ statistical models (or even simple qualitative methods). However, the inherently noisy and stochastic nature of markets severely limits the forecasting accuracy of the used models. The introduction of electronic trading and the availability of large amounts of data allow for developing novel machine learning techniques that address some of the difficulties faced by the aforementioned methods. In this work we propose a deep learning methodology, based on recurrent neural networks, that can be used for predicting future price movements from large-scale high-frequency time-series data on Limit Order Books. The proposed method is evaluated using a large-scale dataset of limit order book events. |
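As a hedged illustration of the kind of recurrent model described, the sketch below maps a window of limit-order-book feature vectors to a three-way mid-price movement class with an LSTM (PyTorch). The feature count, window length, hidden size, and classification head are placeholder assumptions, not the architecture used in the paper.

```python
import torch
import torch.nn as nn

class LOBMovementClassifier(nn.Module):
    """LSTM over a window of limit order book snapshots -> 3-way price-move class."""
    def __init__(self, n_features=40, hidden=64, n_classes=3):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):              # x: (batch, time, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])   # classify from the last time step

model = LOBMovementClassifier()
window = torch.randn(8, 100, 40)       # 8 samples, 100 snapshots, 40 book features
logits = model(window)                 # (8, 3): up / stationary / down scores
print(logits.shape)
```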
Parametric study of microstrip patch antenna on LCP substrate for 70 GHz applications | Analysis and design of a microstrip patch antenna on an LCP substrate with optimum performance at 70 GHz is presented. Different design techniques such as analytical modeling, numerical optimization, and numerical variation of dimensions have been considered, and an optimal design is presented. The design is a trade-off between different antenna parameters, and finding the inset position of the feed line at higher frequencies is a challenge. It is observed that the patch length is a more critical parameter in determining the resonant frequency than the patch width. A patch width of 1.8 times the patch length can be selected to achieve optimal performance at the resonant frequency. A -10 dB bandwidth of 2 GHz has been achieved, which makes the antenna suitable for broadband communication applications. Preliminary theoretical results have been presented. |
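The dominant role of the patch length follows from the standard transmission-line design equations for a rectangular microstrip patch; the short sketch below evaluates them at 70 GHz (Python). The LCP relative permittivity of 2.9 and the 100 µm substrate height are assumed values for illustration and are not taken from the paper.

```python
import math

C0 = 3e8                      # speed of light, m/s

def patch_dimensions(f_r, eps_r, h):
    """Standard transmission-line-model equations for a rectangular microstrip patch."""
    W = C0 / (2 * f_r) * math.sqrt(2 / (eps_r + 1))
    eps_eff = (eps_r + 1) / 2 + (eps_r - 1) / 2 * (1 + 12 * h / W) ** -0.5
    dL = 0.412 * h * ((eps_eff + 0.3) * (W / h + 0.264)) / ((eps_eff - 0.258) * (W / h + 0.8))
    L = C0 / (2 * f_r * math.sqrt(eps_eff)) - 2 * dL
    return W, L

# Assumed values: LCP with eps_r ~ 2.9, 100 um substrate height, 70 GHz design frequency.
W, L = patch_dimensions(70e9, 2.9, 100e-6)
print(f"patch width  ~ {W*1e3:.3f} mm")   # roughly 1.5 mm for the assumed stack-up
print(f"patch length ~ {L*1e3:.3f} mm")   # shorter than W; L sets the resonance
```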
Addressing Sample Inefficiency and Reward Bias in Inverse Reinforcement Learning | We identify two issues with the family of algorithms based on the Adversarial Imitation Learning framework. The first problem is implicit bias present in the reward functions used in these algorithms. While these biases might work well for some environments, they can also lead to sub-optimal behavior in others. Secondly, even though these algorithms can learn from few expert demonstrations, they require a prohibitively large number of interactions with the environment in order to imitate the expert for many real-world applications. In order to address these issues, we propose a new algorithm called Discriminator-Actor-Critic that uses off-policy Reinforcement Learning to reduce policy-environment interaction sample complexity by an average factor of 10. Furthermore, since our reward function is designed to be unbiased, we can apply our algorithm to many problems without making any task-specific adjustments. |
Object Detection, Tracking, and Motion Segmentation for Object-level Video Segmentation | We present an approach for object segmentation in videos that combines frame-level object detection with concepts from object tracking and motion segmentation. The approach extracts temporally consistent object tubes based on an off-the-shelf detector. Besides the class label for each tube, this provides a location prior that is independent of motion. For the final video segmentation, we combine this information with motion cues. The method overcomes the typical problems of weakly supervised/unsupervised video segmentation, such as scenes with no motion, dominant camera motion, and objects that move as a unit. In contrast to most tracking methods, it provides an accurate, temporally consistent segmentation of each object. We report results on four video segmentation datasets: YouTube Objects, SegTrackv2, egoMotion, and FBMS. |
Oral misoprostol versus intramuscular oxytocin in the active management of the third stage of labour. | INTRODUCTION
Although the third stage of labour is usually uneventful, several significant complications may be encountered that may lead to maternal morbidity and mortality, especially primary postpartum haemorrhage. The objective of this study was to compare 400 µg oral misoprostol with 10 IU intramuscular oxytocin in the active management of the third stage of labour.
METHODS
This was a prospective randomised controlled clinical trial in which 200 parturients at term who had vaginal delivery were randomly assigned into two groups, oral misoprostol or intramuscular oxytocin, after the delivery of the baby and the clamping of the umbilical cord. The primary outcome was the incidence of primary postpartum haemorrhage. Secondary outcomes included a drop in haemoglobin concentration 48 hours after delivery, the need for extra oxytocics, duration of the third stage of labour and side effects of the oxytocics. The results were subjected to statistical analysis using the chi-square test or Student's t-test.
RESULTS
There was no occurrence of primary postpartum haemorrhage and no significant difference in the drop in haemoglobin concentration after delivery (p-value 0.49), and no significant differences were observed in the other secondary outcome measures, with the exception of nausea, which occurred solely in the misoprostol group (4 percent, p-value 0.04).
CONCLUSION
Oral misoprostol appeared to be as effective and as safe as intramuscular oxytocin in the active management of the third stage of labour. |
A Network-Centric Hardware/Algorithm Co-Design to Accelerate Distributed Training of Deep Neural Networks | Training real-world Deep Neural Networks (DNNs) can take an eon (i.e., weeks or months) without leveraging distributed systems. Even distributed training takes inordinate time, of which a large fraction is spent in communicating weights and gradients over the network. State-of-the-art distributed training algorithms use a hierarchy of worker-aggregator nodes. The aggregators repeatedly receive gradient updates from their allocated group of the workers, and send back the updated weights. This paper sets out to reduce this significant communication cost by embedding data compression accelerators in the Network Interface Cards (NICs). To maximize the benefits of in-network acceleration, the proposed solution, named INCEPTIONN (In-Network Computing to Exchange and Process Training Information Of Neural Networks), uniquely combines hardware and algorithmic innovations by exploiting the following three observations. (1) Gradients are significantly more tolerant to precision loss than weights and as such lend themselves better to aggressive compression without the need for the complex mechanisms to avert any loss. (2) The existing training algorithms only communicate gradients in one leg of the communication, which reduces the opportunities for in-network acceleration of compression. (3) The aggregators can become a bottleneck with compression as they need to compress/decompress multiple streams from their allocated worker group. To this end, we first propose a lightweight and hardware-friendly lossy-compression algorithm for floating-point gradients, which exploits their unique value characteristics. This compression not only enables significantly reducing the gradient communication with practically no loss of accuracy, but also comes with low complexity for direct implementation as a hardware block in the NIC. To maximize the opportunities for compression and avoid the bottleneck at aggregators, we also propose an aggregator-free training algorithm that exchanges gradients in both legs of communication in the group, while the workers collectively perform the aggregation in a distributed manner. Without changing the mathematics of training, this algorithm leverages the associative property of the aggregation operator and enables our in-network accelerators to (1) apply compression for all communications, and (2) prevent the aggregator nodes from becoming bottlenecks. Our experiments demonstrate that INCEPTIONN reduces the communication time by 70.9~80.7% and offers 2.2~3.1x speedup over the conventional training system, while achieving the same level of accuracy. |
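The observation that gradients tolerate precision loss can be illustrated with a simple mantissa-truncation sketch (NumPy): keep the sign, exponent, and a few mantissa bits of each float32 gradient and drop the rest. This is a simplification for illustration only, not a reimplementation of the INCEPTIONN compression algorithm; the bit budget is an assumption.

```python
import numpy as np

def truncate_gradients(grads, mantissa_bits=8):
    """Lossy sketch: zero the low-order mantissa bits of float32 values.
    float32 layout: 1 sign bit, 8 exponent bits, 23 mantissa bits."""
    drop = 23 - mantissa_bits
    keep_mask = np.uint32(((0xFFFFFFFF >> drop) << drop) & 0xFFFFFFFF)
    bits = grads.astype(np.float32).view(np.uint32)
    return (bits & keep_mask).view(np.float32)

rng = np.random.default_rng(0)
g = rng.normal(scale=1e-3, size=5).astype(np.float32)   # toy gradient values
g_lossy = truncate_gradients(g, mantissa_bits=8)

print(g)
print(g_lossy)
# Relative error stays below 2**-8 when 8 mantissa bits are kept.
print("max relative error:", np.max(np.abs((g - g_lossy) / g)))
```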
Geospatial Databases and Augmented Reality Visualization for Improving Safety in Urban Excavation Operations | The U.S. has more than 14 million miles of buried pipelines and utilities, many of which are in congested urban environments where several lines share the underground space. Errors in locating excavations for new installation or for repair/rehabilitation of existing utilities can result in significant costs, delays, loss of life, and damage to property (Sterling 2000). There is thus a clear need for new solutions to accurately locate buried infrastructure and improve excavation safety. This paper presents ongoing research being collaboratively conducted by the University of Michigan and DTE Energy (Michigan’s largest electric and gas utility company) that is investigating the use of Real-Time Kinematic GPS, combined with Geospatial Databases of subsurface utilities to design a new visual excavator-utility collision avoidance technology. 3D models of buried utilities are created from available geospatial data, and then superimposed over an excavator’s work space using geo-referenced Augmented Reality (AR) to provide the operator and the spotter(s) with visual information on the location and type of utilities that exist in the excavator’s vicinity. This paper describes the overall methodology and the first results of the research. |
Issues in Cloud Computing | Cloud computing has moved away from personal computers and the individual enterprise application server to services provided by a cloud of computers. The emergence of cloud computing has made a tremendous impact on the Information Technology (IT) industry over the past few years. Currently, the IT industry needs cloud computing services to provide the best opportunities to the real world. Cloud computing is in its initial stages, with many issues still to be addressed. The objective of this paper is to explore the different issues of cloud computing and identify important research opportunities in this increasingly important area. We present different design challenges categorized under security challenges, data challenges, performance challenges, and other design challenges. |
Inductive completion with retracts | In this paper we give countably infinitely many extensions of Huet and Hullot's inductive completion procedure. We also try to throw some light on the problem of functions which are only partially defined by some set of rewrite rules. In addition, we give a procedure which attempts to show that two derived F-algebras are isomorphic when both of the algebras are realised as retracts. |
Pustulobullous variant of SDRIFE (symmetrical drug-related intertriginous and flexural exanthema). | We report on a 64-year-old patient from Sri Lanka who presented to the emergency room with a 24-hour history of pruritic, erythematous patches – with occasional pustules and blisters – in the groins, axillae, and antecubital fossae. The patient reported no pain or general symptoms. Six days earlier, he had started taking cefuroxime (500 mg BID) for the first time. The antibiotic had been prescribed following temporary ureteral stent placement due to nephrolithiasis. He denied any other medical or dermatological conditions, and no other drugs had been taken. At initial presentation, he exhibited sharply demarcated, symmetrically disseminated, dark-red patches in the groins, axillae, and antecubital fossae. Some lesions showed pustules and blisters (Figure 1a–c). The physical exam was otherwise unremarkable; in particular, there was no fever. Histology revealed a superficial perivascular, spongiform dermatitis with lymphocytes, eosinophils, and neutrophils. Focally, necrotic keratinocytes and neutrophils were noted in the epidermis. Overall, the findings were consistent with a drug reaction (Figure 2). Apart from slightly elevated CRP levels (5.7 mg/dL, normal range: < 0.5) and mild thrombocytopenia (111,000, normal range: 166,000–308,000), laboratory tests were within normal limits. The detection of Escherichia coli in a swab taken from a pustule in the inguinal region was thought to represent contamination; the pathological lab results were considered to be consistent with an inflammatory response to nephrolithiasis. Taking the history, clinical symptoms, and histological findings into account, the patient was diagnosed with a pustulobullous variant of SDRIFE (symmetrical drug-related intertriginous and flexural exanthema). Since 2004, the acronym SDRIFE has been used for the condition previously known as baboon syndrome. First reported in 1984, the term baboon syndrome referred to the acute development of erythematous lesions in the gluteal area following contact with mercury from broken fever thermometers [1, 2]. In particular, the new term SDRIFE takes into account that systemic drugs, too, may induce flexural skin lesions without prior sensitization [3–6]. In the present case, the development of skin lesions six days after the intake of cefuroxime suggests first exposure to the drug. The diagnosis of SDRIFE includes five clinical criteria that are summarized in Table 1. Atypical disease courses with pustules, papules, and blisters have also been described [4]. The onset of SDRIFE is independent of patient age, and occurs a few hours or even up to eight days after the administration/application of the triggering factor. Apart from systemic antibiotics – in particular β-lactam antibiotics – corticosteroids, psychopharmaceuticals, biologics, and many other drug classes are |
Combining domain knowledge and machine learning for robust fall detection | This paper presents a method for combining domain knowledge and machine learning (CDKML) for classifier generation and online adaptation. The method exploits the advantages of domain knowledge and machine learning as complementary information sources. While machine learning may discover patterns in domains of interest that are too subtle for humans to detect, domain knowledge may contain information about a domain that is not present in the available dataset. CDKML has three steps. First, prior domain knowledge is enriched with relevant patterns obtained by machine learning to create an initial classifier. Second, genetic algorithms refine the classifier. Third, the classifier is adapted online based on user feedback using a Markov decision process. CDKML was applied to fall detection. Tests showed that the classifiers developed by CDKML perform better than ML classifiers generated on a one-sided training dataset. The accuracy of the initial classifier was 10 percentage points higher than that of the best machine learning classifier, and the refinement added a further 3 percentage points. The online adaptation improved the accuracy of the refined classifier by an additional 15 percentage points. |
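The abstract does not spell out the fusion mechanism; the snippet below is only a minimal sketch of the first step's idea (an expert rule enriched with a machine-learned pattern), with feature names, thresholds, and toy data invented for illustration:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hypothetical feature layout: [peak_acceleration_g, inactivity_after_impact_s, trunk_angle_change_deg]
X_train = np.array([[2.8, 12.0, 75.0],
                    [1.1,  0.5, 10.0],
                    [3.2, 20.0, 85.0],
                    [0.9,  1.0,  5.0]])
y_train = np.array([1, 0, 1, 0])          # 1 = fall, 0 = activity of daily living

ml_clf = DecisionTreeClassifier(max_depth=2).fit(X_train, y_train)   # pattern discovered by ML

def domain_rule(x):
    # Expert rule (assumed thresholds): strong impact followed by prolonged inactivity
    return x[0] > 2.5 and x[1] > 10.0

def initial_classifier(x):
    # Simple fusion of the expert rule with the learned pattern (logical OR)
    return int(domain_rule(x) or ml_clf.predict([x])[0] == 1)

print(initial_classifier([2.9, 15.0, 60.0]))   # -> 1 (fall)
print(initial_classifier([1.0,  0.4,  8.0]))   # -> 0
```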
A methodology for power characterization of associative memories | Content Addressable Memories (CAM) have become increasingly important in applications requiring high-speed memory search due to their inherent massively parallel processing architecture. We present a complete power analysis methodology for CAM systems to aid the exploration of their power-performance trade-offs in future systems. Our proposed methodology uses detailed transistor-level circuit simulation of power behavior and a handful of input data types to simulate full-chip power consumption. Furthermore, we applied our power analysis methodology to a custom-designed associative memory test chip. This chip was developed by Fermilab for the purpose of developing high-performance real-time pattern recognition on the high-volume data produced by a future large-scale scientific experiment. We applied our methodology to configure a power model for this test chip. Our model is capable of predicting the total average power to within 4% of actual power measurements. Our power analysis methodology can be generalized and applied to other CAM-like memory systems to accurately characterize their power behavior. |
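The abstract does not give the model's form; a rough illustration of a characterization-style power model is to combine per-operation energies, calibrated from detailed circuit simulation, with activity counts measured for a workload. All operation names and numbers below are assumptions:

```python
# Per-operation energies (pJ) and static power (mW) are assumed calibration values,
# stand-ins for numbers that would come from transistor-level simulation.
ENERGY_PJ = {"search": 150.0, "write": 60.0, "read": 25.0}
STATIC_MW = 12.0

def average_power_mw(op_counts, duration_s):
    """op_counts: dict mapping operation type -> number of operations in the window."""
    dynamic_mj = sum(ENERGY_PJ[op] * n for op, n in op_counts.items()) * 1e-9  # pJ -> mJ
    return STATIC_MW + dynamic_mj / duration_s                                  # mJ/s = mW

# Example workload: 10 ms window dominated by search operations.
print(average_power_mw({"search": 5_000_000, "write": 100_000, "read": 200_000}, 0.01))
```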
Deep Unsupervised Learning of Visual Similarities | Exemplar learning of visual similarities in an unsupervised manner is a problem of paramount importance to Computer Vision. In this context, however, the recent breakthrough in deep learning has not yet been able to unfold its full potential. With only a single positive sample, a great imbalance between one positive and many negatives, and unreliable relationships between most samples, training of Convolutional Neural Networks is impaired. In this paper we use weak estimates of local similarities and propose a single optimization problem to extract batches of samples with mutually consistent relations. Conflicting relations are distributed over different batches and similar samples are grouped into compact groups. Learning visual similarities is then framed as a sequence of categorization tasks. The CNN then consolidates transitivity relations within and between groups and learns a single representation for all samples without the need for labels. The proposed unsupervised approach has shown competitive performance on detailed posture analysis and object classification. |
FSO-Based Vertical Backhaul/Fronthaul Framework for 5G+ Wireless Networks | The presence of a very high rate, but also cost-efficient, easy-to-deploy, and scalable backhaul/fronthaul framework is essential in the upcoming 5G wireless networks and beyond. Motivated by the mounting interest in unmanned flying platforms of various types, including UAVs, drones, balloons, and HAPs/MAPs/LAPs, which we refer to as networked flying platforms (NFPs), for providing communications services, and by the recent advances in free space optics (FSO), this article investigates the feasibility of a novel vertical backhaul/fronthaul framework where the NFPs transport the backhaul/fronthaul traffic between the access and core networks via point-to-point FSO links. The performance of the proposed approach is investigated under different weather conditions and a broad range of system parameters. Simulation results demonstrate that the FSO-based vertical backhaul/fronthaul framework can offer data rates higher than the baseline alternatives, and thus can be considered a promising solution to the emerging backhaul/fronthaul requirements of 5G+ wireless networks, particularly in the presence of ultra-dense heterogeneous small cells. This article also presents the challenges that accompany such a novel framework and provides some key ideas toward overcoming these challenges. |
Influential Neighbours Selection for Information Diffusion in Online Social Networks | The problem of maximizing information diffusion through a network is a topic of considerable recent interest. A conventional problem is to select a set of any arbitrary k nodes as the initial influenced nodes so that they can effectively disseminate the information to the rest of the network. However, this model is usually unrealistic in online social networks since we cannot typically choose arbitrary nodes in the network as the initial influenced nodes. From the point of view of an individual user who wants to spread information as much as possible, a more reasonable model is to try to initially share the information with only some of its neighbours rather than a set of any arbitrary nodes; but how can these neighbours be effectively chosen? We empirically study how to design more effective neighbour selection strategies to maximize information diffusion. Our experimental results, obtained through intensive simulation on several real-world network topologies, show that an effective neighbour selection strategy is to use node degree information for short-term propagation, while a naive random selection is also adequate for long-term propagation to cover more than half of a network. We also discuss the effects of the number of initially activated neighbours. If we specifically select the highest-degree nodes as the initially activated neighbours, the number of initially activated neighbours is not an important factor, at least for long-term propagation of information. |
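The abstract above does not specify the diffusion model; as a hedged sketch of the comparison, the snippet below contrasts degree-based and random neighbour selection under a simple independent-cascade process on a synthetic scale-free graph (all parameters are illustrative):

```python
import random
import networkx as nx

def select_neighbours(G, source, k, strategy="degree"):
    nbrs = list(G.neighbors(source))
    if strategy == "degree":                            # share with the k highest-degree neighbours
        return sorted(nbrs, key=G.degree, reverse=True)[:k]
    return random.sample(nbrs, min(k, len(nbrs)))       # naive random selection

def simulate_spread(G, seeds, p=0.05, steps=50):
    # Independent-cascade stand-in: each newly active node activates each neighbour with prob. p.
    active, frontier = set(seeds), set(seeds)
    for _ in range(steps):
        new = {v for u in frontier for v in G.neighbors(u)
               if v not in active and random.random() < p}
        if not new:
            break
        active |= new
        frontier = new
    return len(active)

G = nx.barabasi_albert_graph(1000, 3)   # synthetic scale-free topology
src = 0
print("degree-based:", simulate_spread(G, select_neighbours(G, src, 3, "degree")))
print("random:      ", simulate_spread(G, select_neighbours(G, src, 3, "random")))
```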
Multimodal Neuroimaging Feature Learning With Multimodal Stacked Deep Polynomial Networks for Diagnosis of Alzheimer's Disease | The accurate diagnosis of Alzheimer's disease (AD) and its early stage, i.e., mild cognitive impairment, is essential for timely treatment and possible delay of AD. Fusion of multimodal neuroimaging data, such as magnetic resonance imaging (MRI) and positron emission tomography (PET), has shown its effectiveness for AD diagnosis. The deep polynomial network (DPN) is a recently proposed deep learning algorithm which performs well on both large-scale and small-size datasets. In this study, a multimodal stacked DPN (MM-SDPN) algorithm, which consists of two-stage SDPNs, is proposed to fuse and learn feature representations from multimodal neuroimaging data for AD diagnosis. Specifically, two SDPNs are first used to learn high-level features of MRI and PET, respectively, which are then fed to another SDPN to fuse the multimodal neuroimaging information. The proposed MM-SDPN algorithm is applied to the ADNI dataset to conduct both binary classification and multiclass classification tasks. Experimental results indicate that MM-SDPN is superior to state-of-the-art multimodal feature-learning-based algorithms for AD diagnosis. |
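The snippet below sketches only the two-stage fusion idea; PCA stands in for the stacked deep polynomial networks, and random arrays stand in for MRI/PET feature matrices, so it is an illustration of the structure rather than the authors' method:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

rng = np.random.default_rng(0)
mri, pet = rng.normal(size=(100, 90)), rng.normal(size=(100, 60))   # placeholder subject-by-feature matrices
labels = rng.integers(0, 2, size=100)                               # e.g. AD vs. normal control (synthetic)

# Stage 1: learn a representation for each modality separately
mri_feat = PCA(n_components=20).fit_transform(mri)
pet_feat = PCA(n_components=20).fit_transform(pet)

# Stage 2: fuse the two modality-specific representations into a joint one
fused = PCA(n_components=10).fit_transform(np.hstack([mri_feat, pet_feat]))

clf = SVC().fit(fused, labels)          # final classifier on the fused representation
print(clf.score(fused, labels))
```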
Process Transformation: Limitations to Radical Organizational Change within Public Service Organizations | This paper draws on a study of the implementation of business process reengineering (BPR) in a UK National Health Service (NHS) hospital to examine the challenge of effecting a transformatory shift to a new form of process organization in a large and complex public service organization. The paper’s theoretical and empirical interests go beyond BPR by bringing together literatures about organizational transformation, new organizational forms and the new public management (NPM) in a novel way. Data reveal important limits to intended organizational transformation and develop findings about sedimented rather than transformational change and the limitations of radical top-down change strategies in professionalized public service organizations. Within the domain of public service organizations, the paper also advances a new argument about why intended moves to post-NPM forms may remain contained in scope. |
Intravenous aminophylline in patients admitted to hospital with non-acidotic exacerbations of chronic obstructive pulmonary disease: a prospective randomised controlled trial. | BACKGROUND
Intravenous aminophylline is commonly used in the treatment of exacerbations of chronic obstructive pulmonary disease (COPD), despite limited evidence for its efficacy and known risks of toxicity. We hypothesised that adding intravenous aminophylline to conventional treatment would not produce clinically important changes in the speed of spirometric or symptomatic recovery or shorten hospital stay in patients with exacerbations of COPD.
METHODS
Eighty patients admitted to hospital with non-acidotic exacerbations of COPD were recruited at admission to a randomised, double blind, placebo controlled study comparing intravenous aminophylline 0.5 mg/kg/hour after an appropriate loading dose with an equivalent volume of 0.9% saline. The primary outcome was the change in post-bronchodilator forced expiratory volume in 1 second (FEV(1)) over the first 5 days of the admission. Secondary end points were changes in self-reported breathlessness, arterial blood gas tensions, forced vital capacity (FVC), and length of hospital stay.
RESULTS
There was no difference in the post-bronchodilator FEV(1) over the first 5 days between the aminophylline and placebo groups. In the aminophylline group, 2 hours of treatment produced a small but significant rise in arterial pH (p = 0.001) and a fall in arterial carbon dioxide tension (p = 0.01) compared with placebo treatment. There were no differences in the severity of breathlessness, post-bronchodilator FVC, or length of hospital stay between the groups. Nausea was a more frequent side effect in the aminophylline group (46% v 22%; p<0.05), but palpitations and headache were noted equally in both groups.
CONCLUSIONS
Although intravenous aminophylline produced small improvements in acid-base balance, these did not influence the subsequent clinical course. No evidence was found for any clinically important additional effect of aminophylline treatment when used with high dose nebulised bronchodilators and oral corticosteroids. Given its known toxicity, we cannot therefore recommend the use of intravenous aminophylline in the treatment of non-acidotic COPD exacerbations. |
Elementary school staff knowledge about dental injuries. | Elementary school staff can play a crucial role in managing traumatic dental injuries (TDIs) because they are often in proximity to children and are frequently called upon to assist with children's accidents. International studies reveal that elementary school personnel have little knowledge about emergency dental care and management. The purpose of this study was to assess the knowledge, practice and experience regarding TDIs among a sample of elementary school personnel in the USA. Assessment was performed using a demographic questionnaire and a newly developed TDI survey instrument. Results revealed a wide distribution of responses. Overall, dental trauma knowledge among this group was poor. The majority of respondents were not well-versed regarding TDIs, their management, the benefits of timely care or treatment costs. However, staff reported a keen interest in receiving more TDI information and training. TDI education and management are needed among all elementary school staff members to improve the prognosis of these accidents when they occur. |
Dialogue Natural Language Inference | Consistency is a long-standing issue faced by dialogue models. In this paper, we frame the consistency of dialogue agents as natural language inference (NLI) and create a new natural language inference dataset called Dialogue NLI. We propose a method which demonstrates that a model trained on Dialogue NLI can be used to improve the consistency of a dialogue model, and evaluate the method with human evaluation and with automatic metrics on a suite of evaluation sets designed to measure a dialogue model’s consistency. |
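A minimal sketch of how an NLI model can act as a consistency filter for a dialogue agent; `nli_model`, its `predict` interface, and the label set are placeholders rather than the paper's actual model or API:

```python
def is_consistent(nli_model, persona_sentences, candidate_response, threshold=0.5):
    # Treat each persona sentence as a premise and the candidate response as the hypothesis;
    # reject the response if any pair is confidently labelled a contradiction.
    for premise in persona_sentences:
        label, prob = nli_model.predict(premise=premise, hypothesis=candidate_response)
        if label == "contradiction" and prob > threshold:
            return False
    return True

# Example usage with a stub standing in for a trained Dialogue NLI model:
class StubNLI:
    def predict(self, premise, hypothesis):
        if "cats" in premise and "hate cats" in hypothesis:
            return ("contradiction", 0.9)
        return ("neutral", 0.6)

persona = ["I have two cats.", "I work as a teacher."]
print(is_consistent(StubNLI(), persona, "I hate cats, never owned one."))   # -> False
```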
DexLego: Reassembleable Bytecode Extraction for Aiding Static Analysis | The scale of Android applications in the market is growing rapidly. To efficiently detect malicious behavior in these applications, an array of static analysis tools has been proposed. However, static analysis tools suffer from code-hiding techniques such as packing, dynamic loading, self-modifying code, and reflection. In this paper, we thus present DexLego, a novel system that performs reassembleable bytecode extraction to aid static analysis tools in revealing the malicious behavior of Android applications. DexLego leverages just-in-time collection to extract data and bytecode from an application at runtime, and reassembles them into a new Dalvik Executable (DEX) file offline. The experiments on DroidBench and real-world applications show that DexLego precisely reconstructs the behavior of an application in the reassembled DEX file, and significantly improves the analysis results of existing static analysis systems. |
Advanced Analytics for Train Delay Prediction Systems by Including Exogenous Weather Data | State-of-the-art train delay prediction systems exploit neither historical data about train movements nor exogenous data about phenomena that can affect railway operations. They rely, instead, on static rules built by experts of the railway infrastructure based on classical univariate statistics. The purpose of this paper is to build a data-driven train delay prediction system that exploits the most recent analytics tools. The train delay prediction problem has been mapped into a multivariate regression problem, and the performance of kernel methods, ensemble methods and feed-forward neural networks has been compared. First, it is shown that it is possible to build a reliable and robust data-driven model based only on historical data about the train movements. Additionally, the model can be further improved by including data coming from exogenous sources, in particular the weather information provided by national weather services. Results on real-world data coming from the Italian railway network show that the approach proposed in this paper is able to remarkably improve on the current state-of-the-art train delay prediction systems. Moreover, the performed simulations show that the inclusion of weather data in the model has a significant positive impact on its performance. |
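As a rough sketch of the setup described above, the snippet below trains an ensemble regressor on synthetic movement features plus exogenous weather features; the feature names, data, and choice of a random forest are assumptions for illustration, not the paper's exact pipeline:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.normal(5, 3, n),      # delay at previous checkpoint (min)
    rng.uniform(0, 24, n),    # hour of day
    rng.integers(0, 7, n),    # day of week
    rng.uniform(0, 20, n),    # rainfall (mm/h), exogenous weather feature
    rng.uniform(-5, 35, n),   # temperature (deg C), exogenous weather feature
])
# Synthetic target: delay at the next checkpoint, partly driven by rainfall.
y = 0.8 * X[:, 0] + 0.3 * X[:, 3] + rng.normal(0, 1, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("R^2 with weather features:", round(model.score(X_te, y_te), 3))
```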
Chapter 7 Computers and Aviation | Although animal flight has a history of 300 million years, serious thought about human flight has a history of only a few hundred years, dating from Leonardo da Vinci, and successful human flight has only been achieved during the last 110 years. This is summarized in the attached figures 7.1-7.4. To some extent, this parallels the history of computing. Serious thought about computing dates back to Pascal and Leibnitz. While there was a notable attempt by Babbage to build a working computer in the 19th century, successful electronic computers were finally achieved in the 1940s, almost exactly contemporaneously with the development of the first successful jet aircraft. The early history of computers is summarized in figures 7.5-7.8. Tables 7.1 and 7.2 summarize the more recent progress in the development of supercomputers and microprocessors. Although airplane design had reached quite an advanced level by the 1930s, exemplified by aircraft such as the DC-3 (Douglas Commercial-3) and the Spitfire (figure 7.2), the design of high-speed aircraft requires an entirely new level of sophistication. This has led to a fusion of engineering, mathematics and computing, as indicated in figure 7.9. |
Individual user characteristics and information visualization: connecting the dots through eye tracking | There is increasing evidence that users' characteristics such as cognitive abilities and personality have an impact on the effectiveness of information visualization techniques. This paper investigates the relationship between such characteristics and fine-grained user attention patterns. In particular, we present results from an eye tracking user study involving bar graphs and radar graphs, showing that a user's cognitive abilities such as perceptual speed and verbal working memory have a significant impact on gaze behavior, both in general and in relation to task difficulty and visualization type. These results are discussed in view of our long-term goal of designing information visualization systems that can dynamically adapt to individual user characteristics. |
Jerk-bounded manipulator trajectory planning: design for real-time applications | An online method for obtaining smooth, jerk-bounded trajectories has been developed and implemented. Jerk limitation is important in industrial robot applications, since it results in improved path tracking and reduced wear on the robot. The method described herein uses a concatenation of fifth-order polynomials to provide a smooth trajectory between two way points. The trajectory approximates a linear segment with parabolic blends trajectory. A sine wave template is used to calculate the end conditions (control points) for ramps from zero acceleration to nonzero acceleration. Joining these control points with quintic polynomials results in a controlled quintic trajectory that does not oscillate, and is near time optimal for the jerk and acceleration limits specified. The method requires only the computation of the quintic control points, up to a maximum of eight points per trajectory way point. This provides hard bounds for online motion algorithm computation time. A method for blending these straight-line trajectories over a series of way points is also discussed. Simulations and experimental results on an industrial robot are presented. |
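The sine-wave template and blending scheme are specific to the paper; the snippet below only illustrates the underlying building block, a single fifth-order (quintic) segment whose six coefficients are fixed by position, velocity, and acceleration at both endpoints (boundary values are illustrative):

```python
import numpy as np

def quintic_coeffs(T, q0, v0, a0, qT, vT, aT):
    # q(t) = c0 + c1 t + c2 t^2 + c3 t^3 + c4 t^4 + c5 t^5, with six boundary conditions
    A = np.array([
        [1, 0, 0,      0,       0,        0],
        [0, 1, 0,      0,       0,        0],
        [0, 0, 2,      0,       0,        0],
        [1, T, T**2,   T**3,    T**4,     T**5],
        [0, 1, 2*T,    3*T**2,  4*T**3,   5*T**4],
        [0, 0, 2,      6*T,     12*T**2,  20*T**3],
    ], dtype=float)
    b = np.array([q0, v0, a0, qT, vT, aT], dtype=float)
    return np.linalg.solve(A, b)           # coefficients c0..c5

# Rest-to-rest move from 0 to 1 over 2 s: zero velocity and acceleration at both ends.
c = quintic_coeffs(T=2.0, q0=0.0, v0=0.0, a0=0.0, qT=1.0, vT=0.0, aT=0.0)
t = np.linspace(0, 2.0, 5)
q = np.polyval(c[::-1], t)                 # evaluate the segment at a few sample instants
print(np.round(q, 3))                      # ramps smoothly from 0 to 1
```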