The self system in reciprocal determinism.
Explanations of human behavior have generally favored unidirectional causal models emphasizing either environmental or internal determinants of behavior. In social learning theory, causal processes are conceptualized in terms of reciprocal determinism. Viewed from this perspective, psychological functioning involves a continuous reciprocal interaction between behavioral, cognitive, and environmental influences. The major controversies between unidirectional and reciprocal models of human behavior center on the issue of self influences. A self system within the framework of social learning theory comprises cognitive structures and subfunctions for perceiving, evaluating, and regulating behavior, not a psychic agent that controls action. The influential role of the self system in reciprocal determinism is documented through a reciprocal analysis of self-regulatory processes. Reciprocal determinism is proposed as a basic analytic principle for analyzing psychosocial phenomena at the level of intrapersonal development, interpersonal transactions, and interactive functioning of organizational and social systems. Recent years have witnessed a heightened interest in the basic conceptions of human nature underlying different psychological theories. This interest stems in part from growing recognition of how such conceptions delimit research to selected processes and are in turn shaped by findings of paradigms embodying the particular view. As psychological knowledge is converted to behavioral technologies, the models of human behavior on which research is premised have important social as well as theoretical implications (Bandura, 1974). Explanations of human behavior have generally been couched in terms of a limited set of determinants, usually portrayed as operating in a unidirectional manner. Exponents of environmental determinism study and theorize about how behavior is controlled by situational influences. Those favoring personal determinism seek the causes of human behavior in dispositional sources in the form of instincts, drives, traits, and other motivational forces within the individual. Interactionists attempt to accommodate both situational and dispositional factors, but within an essentially unidirectional view of behavioral processes. The present article analyzes the various causal models and the role of self influences in behavior from the perspective of reciprocal determinism. Unidirectional environmental determinism is carried to its extreme in the more radical forms of behaviorism. It is not that the interdependence of personal and environmental influences is never acknowledged by advocates of this point of view. Indeed, Skinner (1971) has often commented on the capacity for countercontrol. However, the notion of countercontrol portrays the environment as the instigating force to which individuals can counteract. As will be shown later, people create and activate environments as well as rebut them. A further conceptual problem is that, having been acknowledged, the reality of reciprocal interdependence is negated and the preeminent control of behavior by the environment is repeatedly reasserted (e.g., "A person does not act upon the world, the world acts upon him," Skinner, 1971, p. 211). The environment thus becomes an autonomous force that automatically shapes, orchestrates, and controls behavior.
Whatever allusions are made to two-way processes, environmental rule clearly emerges as the reigning metaphor in the operant view of reality. There exists no shortage of advocates of alternative theories emphasizing the personal determination of environments. Humanists and existentialists, who stress the human capacity for conscious judgment and intentional action, contend that individuals determine what they become by their own free choices. Most psychologists find conceptions of human behavior in terms of unidirectional personal determinism as unsatisfying as those couched in terms of unidirectional environmental determinism. To contend that mind creates reality fails to acknowledge that environmental influences partly determine what people attend to, perceive, and think. To contend further that the methods of natural science are incapable of dealing with personal determinants of behavior does not enlist many supporters from the ranks of those who are moved more by empirical evidence than by philosophic discourse. Social learning theory (Bandura, 1974, 1977b) analyzes behavior in terms of reciprocal determinism. The term determinism is used here to signify the production of effects by events, rather than in the doctrinal sense that actions are completely determined by a prior sequence of causes independent of the individual. Because of the complexity of interacting factors, events produce effects probabilistically rather than inevitably. In their transactions with the environment, people are not simply reactors to external stimulation. Most external influences affect behavior through intermediary cognitive processes. Cognitive factors partly determine which external events will be observed, how they will be perceived, whether they have any lasting effects, what valence and efficacy they have, and how the information they convey will be organized for future use. The extraordinary capacity of humans to use symbols enables them to engage in reflective thought, to create, and to plan foresightful courses of action in thought rather than having to perform possible options and suffer the consequences of thoughtless action. By altering their immediate environment, by creating cognitive self-inducements, and by arranging conditional incentives for themselves, people can exercise some influence over their own behavior. An act therefore includes among its determinants self-produced influences. It is true that behavior is influenced by the environment, but the environment is partly of a person's own making. By their actions, people play a role in creating the social milieu and other circumstances that arise in their daily transactions. Thus, from the social learning perspective, psychological functioning involves a continuous reciprocal interaction between behavioral, cognitive, and environmental influences. Reciprocal Determinism and Interactionism: Over the years the locus of the causes of behavior has been debated in personality and social psychology in terms of dispositional and situational determinants.
Three-Dimensional Container Loading: A Simulated Annealing Approach
High utilization of cargo volume is an essential factor in the success of modern enterprises in the market. Although mathematical models have been presented for container loading problems in the literature, there is still a lack of studies that consider practical constraints. In this paper, a Mixed Integer Linear Programming model is developed for the problem of packing a subset of rectangular boxes inside a container such that the total value of the packed boxes is maximized while some realistic constraints, such as vertical stability, are considered. The packing is orthogonal, and the boxes can be freely rotated into any of the six orientations. Moreover, a sequence triple-based solution methodology is proposed, with simulated annealing used as the modeling technique, and the situation where some boxes are preplaced in the container is investigated. These preplaced boxes represent potential obstacles. Numerical experiments are conducted for containers with and without obstacles. The results show that the simulated annealing approach is successful and can handle a large number of packing instances.
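To make the sequence-based search concrete, here is a minimal simulated annealing skeleton in Python. It is an illustrative sketch, not the paper's implementation: the full 3D placement decoder (with rotations, stability, and obstacles) is replaced by a greedy one-dimensional stand-in, and the temperature schedule and neighborhood move are assumed values.

```python
# Sketch: sequence-based simulated annealing for box packing (toy 1D decoder).
import math, random

def decode(seq, boxes, capacity):
    """Greedily pack boxes in the given order; return total packed value."""
    used, value = 0.0, 0.0
    for i in seq:
        length, v = boxes[i]
        if used + length <= capacity:
            used += length
            value += v
    return value

def anneal(boxes, capacity, t0=10.0, cooling=0.995, steps=20000):
    seq = list(range(len(boxes)))
    best = cur = decode(seq, boxes, capacity)
    t = t0
    for _ in range(steps):
        i, j = random.sample(range(len(seq)), 2)
        seq[i], seq[j] = seq[j], seq[i]            # neighbor: swap two boxes
        cand = decode(seq, boxes, capacity)
        if cand >= cur or random.random() < math.exp((cand - cur) / t):
            cur = cand                             # accept, possibly a worse move
            best = max(best, cur)
        else:
            seq[i], seq[j] = seq[j], seq[i]        # revert the swap
        t *= cooling                               # cool the temperature
    return best

boxes = [(random.uniform(0.5, 2.0), random.uniform(1, 10)) for _ in range(50)]
print(anneal(boxes, capacity=12.0))
```

The same accept/reject loop carries over unchanged when the decoder is replaced by a real sequence-triple 3D placement routine.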
Microbial contamination of computer user interfaces (keyboard, mouse) in a tertiary care centre under conditions of practice
Surfaces coming into contact with hands are often contaminated with nosocomial pathogens and can serve as vehicles for infection transmission [3,4,5]. In addition to various other types of surfaces, several authors have suggested that computer user interfaces are also implicated here, albeit on the basis of very different methods and with very different results [6,7,8,9,10]. The aim of our study was therefore to investigate the extent of microbial contamination found on computer user interfaces (keyboard, mouse) under everyday use conditions in a university hospital and, as far as possible, to identify predictors of elevated contamination rates.
Deep Learning 3D Shape Surfaces Using Geometry Images
1. Established the relevance of authalic spherical parametrization for creating geometry images subsequently used in CNNs. 2. Robust authalic parametrization of arbitrary shapes using area-restoring diffeomorphic flow and barycentric mapping. 3. Creation of geometry images (a) with appropriate shape features for rigid/non-rigid shape analysis, and (b) which are robust to cuts and amenable to learning with CNNs.
Deconstructing Dynamic Symbolic Execution
Dynamic symbolic execution (DSE) is a well-known technique for automatically generating tests to achieve higher levels of coverage in a program. Two key ideas of DSE are: (1) seeding symbolic execution by executing a program on an initial input; and (2) using concrete values from the program execution in place of symbolic expressions whenever symbolic reasoning is hard or not desired. We describe DSE for a simple core language and then present a minimalist implementation of DSE for Python (in Python) that follows this basic recipe. The code is available at https://www.github.com/thomasjball/PyExZ3/ (tagged “v1.0”) and has been designed to make it easy to experiment with and extend.
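The basic recipe the abstract describes can be sketched in a few lines of Python. This is a toy illustration, not PyExZ3 itself: the program under test is instrumented by hand, and the z3-solver package stands in for the symbolic reasoning back end.

```python
# Sketch: a hand-instrumented concolic loop (pip install z3-solver).
from z3 import Int, Solver, Not, sat

X = Int('x')

def program(x):
    """Program under test; returns the symbolic path condition it followed."""
    path = []
    if x > 10:
        path.append(X > 10)
        if x < 20:
            path.append(X < 20)
        else:
            path.append(Not(X < 20))
    else:
        path.append(Not(X > 10))
    return path

def dse(seed=0, max_runs=10):
    inputs, seen = [seed], set()
    while inputs and max_runs > 0:
        x = inputs.pop()
        if x in seen:
            continue
        seen.add(x)
        max_runs -= 1
        path = program(x)             # run concretely, collect the path condition
        for i in range(len(path)):    # negate each branch prefix to find new paths
            s = Solver()
            s.add(*path[:i], Not(path[i]))
            if s.check() == sat:
                inputs.append(s.model().eval(X, model_completion=True).as_long())
    return sorted(seen)

print(dse())
```

A real DSE engine derives `path` automatically by tracing the interpreter, and falls back to concrete values whenever a subexpression is too hard to reason about symbolically.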
Combined, Minimally Invasive, Thread-based Facelift
Aging is a natural, biological process. Patients want to overcome the aesthetic problems associated with facial aging. Plastic surgeons have tried facelifts to meet patients' expectations [1,2]. With the development of society and technology, patients prefer simple, less-invasive procedures with rapid recovery. A barbed suture lift can meet the requirement for a minimally invasive facial rejuvenation procedure. Former thread lifts have used either a floating- or fixed-type thread [1]. However, when we consider the functional anatomy of the face in terms of facial expressions and mastication, and a natural-looking appearance after surgery, a floating-type thread lift is preferred for the mobile areas of the face and a fixed-type thread lift is recommended for relatively fixed areas. The authors call this method the combined-type thread lift. The authors have performed this type of facelift in 28 patients with satisfactory results.
Depression: a new animal model sensitive to antidepressant treatments
A MAJOR problem in the search for new antidepressant drugs is the lack of animal models which both resemble depressive illness and are selectively sensitive to clinically effective antidepressant treatments. We have been working on a new behavioural model in the rat which attempts to meet these two requirements. The method is based on the observation that a rat, when forced to swim in a situation from which there is no escape, will, after an initial period of vigorous activity, eventually cease to move altogether, making only those movements necessary to keep its head above water. We think that this characteristic and readily identifiable behavioural immobility indicates a state of despair in which the rat has learned that escape is impossible and resigns itself to the experimental conditions. This hypothesis receives support from results presented below which indicate that immobility is reduced by different treatments known to be therapeutic in depression, including three drugs, iprindole, mianserin and viloxazine, which although clinically active [1-3] show little or no 'antidepressant' activity in the usual animal tests [4-6].
Monitoring of Grouting Compactness in a Post-Tensioning Tendon Duct Using Piezoceramic Transducers
A post-tensioning tendon duct filled with grout can effectively prevent corrosion of the reinforcement, maintain bonding behavior between the reinforcement and concrete, and enhance the load bearing capacity of concrete structures. In practice, grouting of the post-tensioning tendon ducts often causes quality problems, which may reduce structural integrity and service life, and even cause accidents. However, monitoring of the grouting compactness is still a challenge due to the invisibility of the grout in the duct during the grouting process. This paper presents a stress wave-based active sensing approach using piezoceramic transducers to monitor the grouting compactness in real time. A segment of a commercial tendon duct was used as the research object in this study. One lead zirconate titanate (PZT) piezoceramic transducer with marble protection, called a smart aggregate (SA), was bonded on the tendon and installed in the tendon duct. Two PZT patch sensors were mounted on the top outside surface of the duct, and one PZT patch sensor was bonded on the bottom outside surface of the tendon duct. In the active sensing approach, the SA was used as an actuator to generate a stress wave, and the PZT sensors were utilized to detect the wave response. Cement or grout in the duct functions as a wave conduit, which can propagate the stress wave. If the duct is not fully filled with cement or grout, the top PZT sensors cannot receive much stress wave energy. The experimental procedures simulated four stages of the grout pouring process: the empty duct, half grouting, 90% grouting, and full grouting. Experimental results show that the bottom PZT sensor can detect the signal when the grout level increases towards 50%, at which point a conduit between the SA and the PZT sensor is formed. The top PZT sensors cannot receive any signal until the grouting process is completely finished. Wavelet packet-based energy analysis was adopted in this research to compute the total signal energy received by the PZT sensors. Experimental results show that the energy levels of the PZT sensors can reflect the degree of grouting compactness in the duct. The proposed method has the potential to be implemented to monitor the tendon duct grouting compactness of reinforced concrete structures with post-tensioning.
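As a rough illustration of the wavelet packet-based energy index, the sketch below uses the PyWavelets package; the wavelet family, decomposition depth, and synthetic signals are illustrative assumptions, not the authors' settings.

```python
# Sketch: total wavelet-packet energy of a received sensor signal.
import numpy as np
import pywt

def wp_energy(signal, wavelet='db4', level=4):
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
    # total energy = sum of squared coefficients over all terminal nodes
    return sum(float(np.sum(node.data ** 2)) for node in wp.get_level(level))

fs = 10_000
t = np.arange(0, 0.1, 1 / fs)
full_grout = np.sin(2 * np.pi * 500 * t)          # strong received wave
empty_duct = 0.05 * np.sin(2 * np.pi * 500 * t)   # heavily attenuated wave
print(wp_energy(full_grout), wp_energy(empty_duct))
```

Comparing such energy values across grouting stages is what lets the index track compactness.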
Deep Learning for Imbalanced Multimedia Data Classification
Classification of imbalanced data is an important research problem, as many real-world data sets have skewed class distributions in which the majority of data instances (examples) belong to one class and far fewer instances belong to others. While in many applications the minority instances actually represent the concept of interest (e.g., fraud in banking operations, abnormal cells in medical data, etc.), a classifier induced from an imbalanced data set is more likely to be biased towards the majority class and to show very poor classification accuracy on the minority class. Despite extensive research efforts, imbalanced data classification remains one of the most challenging problems in data mining and machine learning, especially for multimedia data. To tackle this challenge, in this paper we propose an extended deep learning approach to achieve promising performance in classifying skewed multimedia data sets. Specifically, we investigate the integration of bootstrapping methods and a state-of-the-art deep learning approach, Convolutional Neural Networks (CNNs), with extensive empirical studies. Considering the fact that deep learning approaches such as CNNs are usually computationally expensive, we propose to feed low-level features to CNNs and demonstrate its feasibility in achieving promising performance while saving substantial training time. The experimental results show the effectiveness of our framework in classifying severely imbalanced data in the TRECVID data set.
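A minimal sketch of the bootstrapping idea follows: each mini-batch rebalances the classes by sampling with replacement before being fed to the CNN (which is elided here). The feature dimensionality and imbalance ratio are made-up values, not those of TRECVID.

```python
# Sketch: balanced bootstrap batches for training a classifier on skewed data.
import numpy as np

rng = np.random.default_rng(0)

def balanced_batch(X, y, batch_size=64):
    classes = np.unique(y)
    per_class = batch_size // len(classes)
    idx = np.concatenate([
        rng.choice(np.flatnonzero(y == c), per_class, replace=True)
        for c in classes
    ])
    rng.shuffle(idx)
    return X[idx], y[idx]

X = rng.normal(size=(1000, 32))             # stand-in for low-level features
y = (rng.random(1000) < 0.05).astype(int)   # 5% minority class
Xb, yb = balanced_batch(X, y)
print(np.bincount(yb))                      # roughly balanced batch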
Building LinkedIn's Real-time Activity Data Pipeline
One trend in the implementation of modern web systems is the use of activity data in the form of log or event messages that capture user and server activity. This data is at the heart of many internet systems in the domains of advertising, relevance, search, recommendation systems, and security, as well as continuing to fulfill its traditional role in analytics and reporting. Many of these uses place real-time demands on data feeds. Activity data is extremely high volume, and real-time pipelines present new design challenges. This paper discusses the design and engineering problems we encountered in moving LinkedIn's data pipeline from a batch-oriented file aggregation mechanism to a real-time publish-subscribe system called Kafka. This pipeline currently runs in production at LinkedIn and handles more than 10 billion message writes each day with a sustained peak of over 172,000 messages per second. Kafka supports dozens of subscribing systems and delivers more than 55 billion messages to these consumer processes each day. We discuss the origins of this system, missteps on the path to real-time, and the design and engineering problems we encountered along the way.
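For orientation, this is roughly what publishing an activity event to such a pipeline looks like from an application using the open-source kafka-python client; the broker address, topic name, and event fields are illustrative, and this is not LinkedIn's internal client.

```python
# Sketch: publishing one activity event to a Kafka topic.
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers='broker:9092',
    value_serializer=lambda v: json.dumps(v).encode('utf-8'),
)
producer.send('page-view-events', {'member_id': 123, 'page': '/jobs'})
producer.flush()   # block until the event is acknowledged
```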
Crime Prevention through Environmental Design
Match-SRNN: Modeling the Recursive Matching Structure with Spatial RNN
Semantic matching, which aims to determine the matching degree between two texts, is a fundamental problem for many NLP applications. Recently, deep learning approaches have been applied to this problem and significant improvements have been achieved. In this paper, we propose to view the generation of the global interaction between two texts as a recursive process: i.e., the interaction of two texts at each position is a composition of the interactions between their prefixes as well as the word-level interaction at the current position. Based on this idea, we propose a novel deep architecture, namely Match-SRNN, to model the recursive matching structure. Firstly, a tensor is constructed to capture the word-level interactions. Then a spatial RNN is applied to integrate the local interactions recursively, with importance determined by four types of gates. Finally, the matching score is calculated based on the global interaction. We show that, when degenerated to the exact matching scenario, Match-SRNN can approximate the dynamic programming process of the longest common subsequence. Thus, there exists a clear interpretation for Match-SRNN. Our experiments on two semantic matching tasks showed the effectiveness of Match-SRNN and its ability to visualize the learned matching structure.
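The longest-common-subsequence recursion that Match-SRNN approximates in the exact-matching case is the classic dynamic program below: each cell is composed from its three prefix cells, mirroring how the spatial RNN composes prefix interactions at each position.

```python
# Sketch: LCS dynamic program; h[i][j] composes the three prefix cells.
def lcs(a, b):
    m, n = len(a), len(b)
    h = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            h[i][j] = max(h[i - 1][j], h[i][j - 1],
                          h[i - 1][j - 1] + (a[i - 1] == b[j - 1]))
    return h[m][n]

print(lcs("semantic matching", "exact matching"))
```

Match-SRNN replaces the hard max over prefix cells with learned gates over hidden states, which is why it can be read as a soft, trainable version of this recursion.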
Surround view based parking lot detection and tracking
Parking Assistance Systems (PAS) provide useful help to beginners or less experienced drivers in complicated urban parking scenarios. In recent years, ultrasonic sensor based PAS and rear-view camera based PAS have been offered by different car manufacturers. However, ultrasonic sensors' detection distance is less than 3 meters, and their results cannot be used to extract further information such as obstacle recognition. Rear-view camera based systems cannot provide assistance in circumstances like parallel parking that need a wider view. In this paper, we propose a surround view based parking lot detection algorithm, together with an efficient tracking algorithm that solves the tracking problem when detected parking slots fall out of the surround view. Experimental results in simulation and a real outdoor environment show the effectiveness of the proposed algorithm.
DisturbLabel: Regularizing CNN on the Loss Layer
For a long time we have been combating overfitting in the CNN training process with model regularization, including weight decay, model averaging, data augmentation, etc. In this paper, we present DisturbLabel, an extremely simple algorithm which randomly replaces a part of the labels with incorrect values in each iteration. Although it seems weird to intentionally generate incorrect training labels, we show that DisturbLabel prevents the network training from overfitting by implicitly averaging over exponentially many networks which are trained with different label sets. To the best of our knowledge, DisturbLabel is the first work which adds noise on the loss layer. Meanwhile, DisturbLabel cooperates well with Dropout to provide complementary regularization functions. Experiments demonstrate competitive recognition results on several popular image recognition datasets.
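The algorithm really is as simple as the abstract suggests; a numpy sketch follows, where alpha, the disturbance probability, is a hyperparameter and the value used here is illustrative.

```python
# Sketch: DisturbLabel - randomly corrupt a fraction of labels each iteration.
import numpy as np

rng = np.random.default_rng(0)

def disturb_labels(y, num_classes, alpha=0.1):
    y = y.copy()
    mask = rng.random(len(y)) < alpha              # pick labels to disturb
    y[mask] = rng.integers(0, num_classes, mask.sum())  # replace with random class
    return y

y = rng.integers(0, 10, 128)                 # one mini-batch of labels
print((disturb_labels(y, 10) != y).mean())   # fraction actually changed
```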
Corporate Governance and Corporate Social Responsibility Synergies and Interrelationships
Manuscript Type: Empirical Research Questions/Issue: This paper seeks to explore the interrelationships between corporate governance (CG) and corporate social responsibility (CSR): first, theoretically, by reviewing the literature and surveying various postulations on offer; second, empirically, by investigating the conception and interpretation of this relationship in the context of a sample of firms operating in Lebanon. Accordingly, the paper seeks to highlight the increasing cross-connects or interfaces between CG and CSR, capitalizing on fresh insights from a developing country perspective. Research Findings/Results: A qualitative interpretive research methodology was adopted, drawing on in-depth interviews with the top managers of eight corporations operating in Lebanon, with the findings suggesting that the majority of managers conceive of CG as a necessary pillar for sustainable CSR. These findings are significant and interesting, implying that recent preoccupation with CG in developing countries is starting to be counterbalanced by some interest/attention to CSR, with growing appreciation of their interdependencies and the need to move beyond CG conformance toward voluntary CSR performance. Theoretical Implications: This study makes two important contributions. First, it suggests that there is a salient two-way relationship and increasing overlap between CG and CSR. While much previous literature has researched CG and CSR independently, this paper makes the case for considering them jointly and systematically. Second, the paper outlines a number of theoretical propositions that can serve as the basis for future research on the topic, particularly in developing countries, given that the data and theoretical propositions are both derived from and tailored to developing country contexts. Practical Implications: This study can potentially alert managers to the increasing overlap between the CG and CSR agendas and the need to exert diligent systematic efforts on both fronts. CG and CSR share more in common than previously assumed, and this needs to be accounted for by practitioners. The research can also alert policy makers in developing countries to the need to increase the vigilance and capacity of the regulatory and judicial systems in the context of CG reform and to increase institutional pressures, particularly of the coercive and normative variety to enhance CSR adoption.
Early skin-to-skin contact for mothers and their healthy newborn infants.
BACKGROUND Mother-infant separation postbirth is common in Western culture. Early skin-to-skin contact (SSC) begins ideally at birth and involves placing the naked baby, head covered with a dry cap and a warm blanket across the back, prone on the mother's bare chest. According to mammalian neuroscience, the intimate contact inherent in this place (habitat) evokes neurobehaviors ensuring fulfillment of basic biological needs. This time may represent a psychophysiologically 'sensitive period' for programming future physiology and behavior. OBJECTIVES To assess the effects of early SSC on breastfeeding, physiological adaptation, and behavior in healthy mother-newborn dyads. SEARCH METHODS We searched the Cochrane Pregnancy and Childbirth Group's Trials Register (30 November 2011), made personal contact with trialists, and consulted the bibliography on kangaroo mother care (KMC) maintained by Dr. Susan Ludington. SELECTION CRITERIA Randomized controlled trials comparing early SSC with usual hospital care. DATA COLLECTION AND ANALYSIS We independently assessed trial quality and extracted data. Study authors were contacted for additional information. MAIN RESULTS Thirty-four randomized controlled trials were included, involving 2177 participants (mother-infant dyads). Data from more than two trials were available for only eight outcome measures. For primary outcomes, we found a statistically significant positive effect of early SSC on breastfeeding at one to four months postbirth (13 trials; 702 participants) (risk ratio (RR) 1.27, 95% confidence interval (CI) 1.06 to 1.53), and SSC increased breastfeeding duration (seven trials; 324 participants) (mean difference (MD) 42.55 days, 95% CI -1.69 to 86.79), but the results did not quite reach statistical significance (P = 0.06). Late preterm infants had better cardio-respiratory stability with early SSC (one trial; 31 participants) (MD 2.88, 95% CI 0.53 to 5.23). Blood glucose 75 to 90 minutes following the birth was significantly higher in SSC infants (two trials; 94 infants) (MD 10.56 mg/dL, 95% CI 8.40 to 12.72). The overall methodological quality of trials was mixed, and there was high heterogeneity for some outcomes. AUTHORS' CONCLUSIONS Limitations included methodological quality, variations in intervention implementation, and outcomes. The intervention appears to benefit breastfeeding outcomes and cardio-respiratory stability, to decrease infant crying, and to have no apparent short- or long-term negative effects. Further investigation is recommended. To facilitate meta-analysis, future research should use outcome measures consistent with those in the studies included here. Published reports should clearly indicate whether the intervention was SSC, with time of initiation and duration, and include means, standard deviations, and exact probability values.
PaDNet: Pan-Density Crowd Counting
Crowd counting in varying density scenes is a challenging problem in artificial intelligence (AI) and pattern recognition. Recently, deep convolutional neural networks (CNNs) have been used to tackle this problem. However, single-column CNNs cannot achieve high accuracy and robustness in diverse density scenes, while multi-column CNNs lack an effective way to accurately learn the features of different scales for estimating crowd density. To address these issues, we propose a novel pan-density level deep learning model, named Pan-Density Network (PaDNet). Specifically, PaDNet learns multi-scale features in three steps. First, several sub-networks are pre-trained on crowd images with different density levels. Then, a Scale Reinforcement Net (SRN) is utilized to reinforce the scale features. Finally, a Fusion Net fuses all of the scale features to generate the final density map. Experiments on four crowd counting benchmark datasets, ShanghaiTech, UCF_CC_50, UCSD, and UCF-QNRF, indicate that PaDNet achieves the best performance and high robustness in pan-density crowd counting compared with other state-of-the-art algorithms.
Efficient SVM Regression Training with SMO
The sequential minimal optimization algorithm (SMO) has been shown to be an effective method for training support vector machines (SVMs) on classification tasks defined on sparse data sets. SMO differs from most SVM algorithms in that it does not require a quadratic programming solver. In this work, we generalize SMO so that it can handle regression problems. However, one problem with SMO is that its rate of convergence slows down dramatically when data is non-sparse and when there are many support vectors in the solution—as is often the case in regression—because kernel function evaluations tend to dominate the runtime in this case. Moreover, caching kernel function outputs can easily degrade SMO's performance even more because SMO tends to access kernel function outputs in an unstructured manner. We address these problems with several modifications that enable caching to be effectively used with SMO. For regression problems, our modifications improve convergence time by over an order of magnitude.
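A sketch of the kind of kernel caching being discussed: memoizing whole kernel rows so that repeated SMO accesses hit the cache instead of re-evaluating the kernel. The LRU policy, cache size, and RBF kernel here are illustrative assumptions, not the authors' exact design.

```python
# Sketch: caching kernel rows so repeated accesses avoid recomputation.
from functools import lru_cache
import numpy as np

X = np.random.default_rng(0).normal(size=(2000, 10))

@lru_cache(maxsize=256)            # keep at most 256 kernel rows in memory
def kernel_row(i, gamma=0.1):
    d = X - X[i]
    # RBF kernel row K(x_i, .) as a hashable tuple so lru_cache can store it
    return tuple(np.exp(-gamma * np.einsum('ij,ij->i', d, d)))

row = np.asarray(kernel_row(0))    # first call computes; later calls are cached
print(row[:3], kernel_row.cache_info())
```

The paper's point is that such a cache only pays off once SMO's access pattern is restructured to reuse rows, which is what its modifications arrange.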
Baseline Mechanisms for Enterprise Governance of IT in SMEs
Information Technology (IT) has become fundamental for most organizations since it is vital to their sustainability, development, and success. This pervasive use has led organizations to a critical dependency on IT. Despite the benefits, it exposes organizations to several risks. Hence, a significant focus on Enterprise Governance of IT (EGIT) is required. EGIT involves the definition and implementation of processes, structures, and relational mechanisms to support business/IT alignment and the creation of business value from IT investments. These mechanisms act as a means to direct and operationalize IT-related decision-making. However, identifying the appropriate mechanisms is a complex task, since there are internal and external contingency factors that influence the design and implementation of EGIT. Small and Medium Enterprises (SMEs) are considered key elements in promoting economic growth, job creation, social integration, and innovation. The use of IT in these organizations can have severe consequences for survival and growth in highly competitive markets, becoming critical in this era of globalization. Several studies have investigated and identified EGIT mechanisms under different contingencies, but very few have focused on the organization-size criterion. However, SMEs have particular characteristics that require further investigation. Therefore, our goal in this research is to evaluate EGIT mechanisms in an SME context and identify a baseline of mechanisms for SMEs, using semi-structured interviews with IT experts who have experience in SMEs. The article ends by presenting contributions, limitations, and future work.
From Lifestyle Vlogs to Everyday Interactions
A major stumbling block to progress in understanding basic human interactions, such as getting out of bed or opening a refrigerator, is the lack of good training data. Most past efforts have gathered this data explicitly: starting with a laundry list of action labels, and then querying search engines for videos tagged with each label. In this work, we do the reverse and search implicitly: we start with a large collection of interaction-rich video data and then annotate and analyze it. We use Internet Lifestyle Vlogs as the source of surprisingly large and diverse interaction data. We show that by collecting the data first, we are able to achieve greater scale and far greater diversity in terms of actions and actors. Additionally, our data exposes biases built into common explicitly gathered data. We make sense of our data by analyzing the central component of interaction - hands. We benchmark two tasks: identifying semantic object contact at the video level and non-semantic contact state at the frame level. We additionally demonstrate future prediction of hands.
Wireless Recording in the Peripheral Nervous System with Ultrasonic Neural Dust
The emerging field of bioelectronic medicine seeks methods for deciphering and modulating electrophysiological activity in the body to attain therapeutic effects at target organs. Current approaches to interfacing with peripheral nerves and muscles rely heavily on wires, creating problems for chronic use, while emerging wireless approaches lack the size scalability necessary to interrogate small-diameter nerves. Furthermore, conventional electrode-based technologies lack the capability to record from nerves with high spatial resolution or to record independently from many discrete sites within a nerve bundle. Here, we demonstrate neural dust, a wireless and scalable ultrasonic backscatter system for powering and communicating with implanted bioelectronics. We show that ultrasound is effective at delivering power to mm-scale devices in tissue; likewise, passive, battery-less communication using backscatter enables high-fidelity transmission of electromyogram (EMG) and electroneurogram (ENG) signals from anesthetized rats. These results highlight the potential for an ultrasound-based neural interface system for advancing future bioelectronics-based therapies.
MCEIL: An Improved Scoring Function for Overlapping Community Detection using Seed Expansion Methods
Community detection is one of the most well-known problems in complex network analysis. In real-world networks, communities often overlap. Various approaches have been proposed in the literature to detect overlapping communities in networks. Local expansion and optimization approaches have gained popularity due to their scalability and robustness. In a method based on local expansion, the seeding strategy and scoring function employed are crucial to the performance of the algorithm. In this paper, a scoring function called the CEIL score is used with ground-truth seeds in a local expansion and optimization algorithm. Using the CEIL score significantly improves the performance of the algorithm with respect to the evaluation metrics NMI and F1 score. However, CEIL has lower coverage than conductance. An extension of the CEIL score, called the MCEIL score, is proposed. Using the MCEIL score returns communities with coverage as high as that of conductance, and with NMI and F1 scores higher than those of conductance, on different kinds of datasets. Experiments on datasets of different types with different seeding strategies show that the improvements in NMI and F1 score obtained by the MCEIL score are substantial.
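For context, the local expansion-and-optimization template these scoring functions plug into looks roughly like the sketch below, shown with plain conductance as the score; MCEIL would replace the `conductance` function. The graph, seed set, and stopping rule are illustrative, not the paper's experimental setup.

```python
# Sketch: greedy seed expansion minimizing a community score (conductance here).
import networkx as nx

def conductance(G, S):
    S = set(S)
    cut = sum(1 for u in S for v in G.neighbors(u) if v not in S)
    vol = sum(G.degree(u) for u in S)
    vol_rest = 2 * G.number_of_edges() - vol
    return cut / max(min(vol, vol_rest), 1)   # cut edges / smaller volume

def expand(G, seed, max_size=20):
    S = set(seed)
    while len(S) < max_size:
        frontier = {v for u in S for v in G.neighbors(u)} - S
        if not frontier:
            break
        best = min(frontier, key=lambda v: conductance(G, S | {v}))
        if conductance(G, S | {best}) >= conductance(G, S):
            break                              # no neighbor improves the score
        S.add(best)
    return S

G = nx.karate_club_graph()
print(sorted(expand(G, {0, 1})))
```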
Rapid colorimetric assay for cellular growth and survival: application to proliferation and cytotoxicity assays.
A tetrazolium salt has been used to develop a quantitative colorimetric assay for mammalian cell survival and proliferation. The assay detects living, but not dead cells and the signal generated is dependent on the degree of activation of the cells. This method can therefore be used to measure cytotoxicity, proliferation or activation. The results can be read on a multiwell scanning spectrophotometer (ELISA reader) and show a high degree of precision. No washing steps are used in the assay. The main advantages of the colorimetric assay are its rapidity and precision, and the lack of any radioisotope. We have used the assay to measure proliferative lymphokines, mitogen stimulations and complement-mediated lysis.
A deep CNN based multi-class classification of Alzheimer's disease using MRI
In recent years, deep learning has gained great popularity for solving problems in various fields, including medical image analysis. This work proposes a deep convolutional neural network based pipeline for the diagnosis of Alzheimer's disease and its stages using magnetic resonance imaging (MRI) scans. Alzheimer's disease causes permanent damage to the brain cells associated with memory and thinking skills. The diagnosis of Alzheimer's in elderly people is quite difficult and requires a highly discriminative feature representation for classification due to similar brain patterns and pixel intensities. Deep learning techniques are capable of learning such representations from data. In this paper, a 4-way classifier is implemented to classify Alzheimer's disease (AD), mild cognitive impairment (MCI), late mild cognitive impairment (LMCI), and healthy persons. Experiments are performed using the ADNI dataset on a high-performance graphics processing unit based system, and new state-of-the-art results are obtained for multiclass classification of the disease. The proposed technique achieves a prediction accuracy of 98.8%, a noticeable increase over previous studies that clearly reveals the effectiveness of the proposed method.
Coordinated Reinforcement Learning
We present several new algorithms for multiagent reinforcement learning. A common feature of these algorithms is a parameterized, structured representation of a policy or value function. This structure is leveraged in an approach we call coordinated reinforcement learning, by which agents coordinate both their action selection activities and their parameter updates. Within the limits of our parametric representations, the agents will determine a jointly optimal action without explicitly considering every possible action in their exponentially large joint action space. Our methods differ from many previous reinforcement learning approaches to multiagent coordination in that structured communication and coordination between agents appears at the core of both the learning algorithm and the execution architecture.
TRIZ: The Enlightenment of the Training Mode of Excellent Chinese Engineers
China has become the largest country in the world in terms of the scale of higher engineering education. The training plan of excellent engineers (TPEE), launched in 2010, aims to train various types of high-quality engineers and technical personnel who have strong innovative abilities and meet the needs of economic and social development. The theory of inventive problem solving (TRIZ), as a scientific methodology for solving innovation problems, provides a theoretical basis and practical guideline for the training of excellent engineers. This paper analyzes the teaching philosophy and methods for the training of excellent engineers within the TRIZ framework from the demand perspective, and then discusses the enlightenment of disseminating TRIZ in aspects such as TRIZ application in science, technology, engineering, and mathematics (STEM) education, faculty development, and platform development for innovative education.
Six Degree-of-Freedom Measurements of Human Mild Traumatic Brain Injury
This preliminary study investigated whether direct measurement of head rotation improves prediction of mild traumatic brain injury (mTBI). Although many studies have implicated rotation as a primary cause of mTBI, regulatory safety standards use 3 degree-of-freedom (3DOF) translation-only kinematic criteria to predict injury. Direct 6DOF measurements of human head rotation (3DOF) and translation (3DOF) have not previously been available to examine whether additional DOFs improve injury prediction. We measured head impacts in American football, boxing, and mixed martial arts using 6DOF instrumented mouthguards, and predicted clinician-diagnosed injury using 12 existing kinematic criteria and 6 existing brain finite element (FE) criteria. Among 513 measured impacts were the first two 6DOF measurements of clinically diagnosed mTBI. For this dataset, 6DOF criteria were the most predictive of injury, more than 3DOF translation-only and 3DOF rotation-only criteria. Peak principal strain in the corpus callosum, a 6DOF FE criterion, was the strongest predictor, followed by two criteria that included rotation measurements: peak rotational acceleration magnitude and Head Impact Power (HIP). These results suggest head rotation measurements may improve injury prediction. However, more 6DOF data are needed to confirm this evaluation of existing injury criteria and to develop new criteria that consider directional sensitivity to injury.
An Optimized Design of Finned-Tube Evaporators Using the Learnable Evolution Model
Optimizing the refrigerant circuitry for a finned-tube evaporator is a daunting task for traditional exhaustive search techniques due to the extremely large number of circuitry possibilities. For this reason, more intelligent search techniques are needed. This paper presents and evaluates a novel optimization system, called ISHED1 (Intelligent System for Heat Exchanger Design). This system uses a recently developed non-Darwinian evolutionary computation method to seek evaporator circuit designs that maximize the capacity of the evaporator under given technical and environmental constraints. Circuitries were developed for an evaporator with three depth rows of 12 tubes each, based on optimizing the performance with uniform and non-uniform airflow profiles. ISHED1 demonstrated the capability to design an optimized circuitry for a non-uniform air distribution such that the capacity showed no degradation relative to the traditional balanced circuitry design working with a uniform airflow.
Dynamic Nonlinear Inverse-Model Based Control of a Twin Rotor System Using Adaptive Neuro-fuzzy Inference System
Dynamic control system design has been in great demand in the control engineering community, with many applications particularly in the field of flight control. This paper presents investigations into the development of a dynamic nonlinear inverse-model based control of a twin rotor multi-input multi-output system (TRMS). The TRMS is an aerodynamic test rig representing the control challenges of modern air vehicles. A model inversion control with the developed adaptive model is applied to the system. An adaptive neuro-fuzzy inference system (ANFIS) is augmented with the control system to improve the control response. To demonstrate the applicability of the methods, a simulated hovering motion of the TRMS, derived from experimental data, is considered in order to evaluate the tracking properties and robustness capacities of the inverse-model control technique.
Knee-extension exercise's lack of immediate effect on maximal voluntary quadriceps torque and activation in individuals with anterior knee pain.
CONTEXT Weight-bearing (WB) and non-weight-bearing (NWB) exercises are commonly used in rehabilitation programs for patients with anterior knee pain (AKP). OBJECTIVE To determine the immediate effects of isolated WB or NWB knee-extension exercises on quadriceps torque output and activation in individuals with AKP. DESIGN A single-blind randomized controlled trial. SETTING Laboratory. PARTICIPANTS 30 subjects with self-reported AKP. INTERVENTIONS Subjects performed a maximal voluntary isometric contraction (MVIC) of the quadriceps (knee at 90°). Maximal voluntary quadriceps activation was quantified using the central activation ratio (CAR): CAR = MVIC/(MVIC + superimposed burst torque). After baseline testing, subjects were randomized to 1 of 3 intervention groups: WB knee extension, NWB knee extension, or control. WB knee-extension exercise was performed as a sling-based exercise, and NWB knee-extension exercise was performed on the Biodex dynamometer. Exercises were performed in 3 sets of 5 repetitions at approximately 55% MVIC. Measurements were obtained at 4 times: baseline and immediately, 15, and 30 min postexercise. MAIN OUTCOME MEASURES Quadriceps torque output (MVIC: N·m/kg) and quadriceps activation (CAR). RESULTS No significant differences in maximal voluntary quadriceps torque output (F2,27 = 0.592, P = .56) or activation (F2,27 = 0.069, P = .93) were observed among the 3 treatment groups. CONCLUSIONS WB and NWB knee-extension exercises did not acutely change quadriceps torque output or activation. It may be necessary to perform exercises over a number of sessions and incorporate other disinhibitory interventions (e.g., cryotherapy) to observe acute changes in quadriceps torque and activation.
An expert advisory system for the ISO 9001 quality system
The ISO 9000 quality management system has been widely accepted and adapted as a national standard by most industrial countries. Despite its high popularity and the urgent demand from customers to implement ISO 9000, some major concerns for those organizations that are seeking registration to ISO 9000 include the expensive cost and the lengthy time to implement. The purpose of this paper is to describe an expert advisory system for ISO 9001 implementation by using an expert system shell called Visual Rules Studio. This expert advisory system integrated the ISO 9001 quality system guidelines and an evaluation approach based on the Malcolm Baldrige National Quality Award (MBNQA) criteria into a knowledge-based expert system. By identifying the critical ISO elements and comparing the company's current quality performance with ISO standards, this advisory system provides assessment results and implementation suggestions to the organization. The advisory system has been validated by a group of quality professionals. The following contains a description of the system and a discussion of the validation results. Limitations of the system and recommendations for future research are also discussed.
Cloud Automation: Precomputing Roadmaps for Flexible Manipulation
The goal of this article is to highlight the benefits of cloud automation for industrial adopters and some of the research challenges that must be addressed in this process. The focus is on the use of cloud computing for efficiently planning the motion of new robot manipulators designed for flexible manufacturing floors. In particular, different ways that a robot can interact with a computing cloud are considered, where an architecture that splits computation between the remote cloud and the robot appears advantageous. Given this synergistic robot-cloud architecture, this article describes how solutions from the recent literature can be employed on the cloud during a periodically updated preprocessing phase to efficiently answer manipulation queries on the robot given changes in the workspace. In this setup, interesting tradeoffs arise between path quality and computational efficiency, which are evaluated through simulation. These tradeoffs motivate further research on how motion planning should be executed given access to a computing cloud.
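As a concrete picture of the "precompute offline, query online" split, here is a generic probabilistic roadmap (PRM) sketch: roadmap construction is the expensive phase the article proposes running in the cloud, while the online query is a cheap graph search suitable for the robot. A 2D point robot with disk obstacles stands in for the manipulator, and all parameters are illustrative assumptions.

```python
# Sketch: PRM - sample free configurations, connect neighbors, query a path.
import numpy as np
import networkx as nx

rng = np.random.default_rng(1)
obstacles = [((0.5, 0.5), 0.2)]                      # (center, radius) disks

def free(p):
    return all(np.hypot(p[0] - c[0], p[1] - c[1]) > r for c, r in obstacles)

def edge_free(p, q, steps=20):
    return all(free(p + t * (q - p)) for t in np.linspace(0, 1, steps))

def build_prm(n=300, k=8):
    pts = [p for p in rng.random((4 * n, 2)) if free(p)][:n]
    G = nx.Graph()
    for i, p in enumerate(pts):
        G.add_node(i, pos=p)
    for i, p in enumerate(pts):
        dists = [float(np.linalg.norm(p - q)) for q in pts]
        for j in np.argsort(dists)[1:k + 1]:         # k nearest neighbors
            if edge_free(p, pts[int(j)]):
                G.add_edge(i, int(j), weight=dists[j])
    return G

G = build_prm()                                       # offline (cloud) phase
if nx.has_path(G, 0, 1):                              # online (robot) query
    print(nx.shortest_path(G, 0, 1, weight='weight'))
```

The tradeoff the article evaluates shows up directly here: a denser roadmap (larger n, k) raises offline cost and path quality, while the online query stays cheap either way.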
A PID Compensator Control for the Synchronous Rectifier Buck DC-DC Converter
This paper aims to design a PID compensator for the control of a Synchronous Rectifier (SR) Buck Converter to improve its conversion efficiency under different load conditions. Since the diode rectifier is replaced by a high-frequency MOSFET switch, the SR control method by itself is sufficient under heavy load conditions to attain good normal-mode performance. However, this technique does not hold well under light load conditions, due to increased switching losses. A new control technique accompanied by a PID compensator, introduced in this paper, enables the synchronous buck converter to realize ZVS while feeding light loads. It is also a low-cost, highly efficient, and simple technique that requires no extra auxiliary switches or RLC components. This control technique also proved to be efficient under input voltage variations. Simulation is done in MATLAB Simulink to demonstrate the stabilization provided by the PID compensator for the synchronous rectifier (SR) buck converter. Keywords—Synchronous Rectifier (SR), Continuous Conduction Mode (CCM), Discontinuous Conduction Mode (DCM), Zero Voltage Switching (ZVS), Proportional-Integral-Derivative (PID).
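A minimal discrete PID compensator of the kind the paper applies is sketched below against a toy first-order stand-in for the converter's output stage; the gains, setpoint, and plant dynamics are illustrative placeholders, not the paper's tuned design.

```python
# Sketch: discrete PID loop regulating a toy buck-converter output voltage.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, setpoint, measurement):
        err = setpoint - measurement
        self.integral += err * self.dt               # integral term
        deriv = (err - self.prev_err) / self.dt      # derivative term
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

pid = PID(kp=0.5, ki=50.0, kd=1e-4, dt=1e-5)
v_out = 0.0
for _ in range(2000):
    duty = min(max(pid.step(3.3, v_out), 0.0), 1.0)  # clamp duty cycle to [0, 1]
    v_out += (duty * 12.0 - v_out) * 0.01            # crude first-order plant
print(round(v_out, 2))                               # should settle near 3.3 V
```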
Beneficial effects of intravenously administered N-3 fatty acids for the prevention of atrial fibrillation after coronary artery bypass surgery: a prospective randomized study.
BACKGROUND Atrial fibrillation (AF) is a common complication after coronary artery bypass grafting operation (CABG). Experimental data have shown antiarrhythmic effects of n-3 polyunsaturated fatty acids (PUFA) on myocardial cells. Orally administered PUFA could significantly reduce the rate of postoperative AF. We assessed the efficacy of PUFA for the prevention of AF after CABG. PUFA were given intravenously to prevent variation in bioavailability. METHODS AND RESULTS 52 patients were randomized to the interventional group and 50 served as controls. In the control group, free fatty acids (100 mg soya oil/kg body weight/day) were infused via perfusion pump, starting on admission to hospital and ending at discharge from intensive care. In the interventional group, PUFA were given at a dosage of 100 mg fish oil/kg body weight/day. The primary end point was the postoperative development of AF, documented by surface ECG. The secondary end point was the length of stay in the ICU. The demographic, clinical, and surgical characteristics of the patients in the two groups were similar. Postoperative AF occurred in 15 patients (30.6%) in the control group and in 9 (17.3%) in the PUFA group (P < 0.05). After CABG, the PUFA patients had to be treated in the ICU for a shorter time than the control patients. No adverse effects were observed. CONCLUSIONS Perioperative intravenous infusion of PUFA reduces the incidence of AF after CABG and leads to a shorter stay in the ICU and in hospital. Our data suggest that perioperative intravenous infusion of PUFA should be recommended for patients undergoing CABG.
26-42 GHz SOI CMOS low noise amplifier
A complementary metal-oxide semiconductor (CMOS) single-stage cascode low-noise amplifier (LNA) is presented in this paper. The microwave monolithic integrated circuit (MMIC) is fabricated using digital 90-nm silicon-on-insulator (SOI) technology. All impedance matching and bias elements are implemented on the compact chip, which has a size of 0.6 mm × 0.3 mm. The supply voltage and supply current are 2.4 V and 17 mA, respectively. At 35 GHz and 50 Ω source/load impedances, a gain of 11.9 dB, a noise figure of 3.6 dB, an output compression point of 4 dBm, an input return loss of 6 dB, and an output return loss of 18 dB are measured. The -3-dB frequency bandwidth ranges from 26 to 42 GHz. All results include the pad parasitics. To the knowledge of the author, the results are by far the best for a silicon-based millimeter-wave LNA reported to date. The LNA is well suited for systems operating in accordance with the local multipoint distribution service (LMDS) standards at 28 and 38 GHz and the multipoint video distribution system (MVDS) standard at 42 GHz.
Phase I-II clinical trial of hyaluronan-cisplatin nanoconjugate in dogs with naturally occurring malignant tumors.
OBJECTIVE To conduct a phase I-II clinical trial of hyaluronan-cisplatin nanoconjugate (HA-Pt) in dogs with naturally occurring malignant tumors. ANIMALS 18 healthy rats, 9 healthy mice, and 16 dogs with cancer. PROCEDURES HA-Pt was prepared and tested by inductively coupled plasma mass spectrometry; DNA-platinum adduct formation and antiproliferation effects of cisplatin and HA-Pt were compared in vitro. Effects of cisplatin (IV) and HA-Pt (SC) in rodents were tested by clinicopathologic assays. In the clinical trial, dogs with cancer received 1 to 4 injections of HA-Pt (10 to 30 mg/m², intratumoral or peritumoral, q 3 wk). Blood samples were collected for pharmacokinetic analysis; CBC, serum BUN and creatinine concentration measurement, and urinalysis were conducted before and 1 week after each treatment. Some dogs underwent hepatic enzyme testing. Tumors were measured before the first treatment and 3 weeks after each treatment to assess response. RESULTS No adverse drug effects were detected in pretrial assessments in rodents. Seven of 16 dogs completed the study; 3 had complete tumor responses, 3 had stable disease, and 1 had progressive disease. Three of 7 dogs with oral and nasal squamous cell carcinoma (SCC) that completed the study had complete responses. Myelosuppression and cardiotoxicosis were identified in 6 and 2 dogs, respectively; none had nephrotoxicosis. Four of 5 dogs with hepatic enzymes assessed had increased ALT activities, attributed to diaquated cisplatin products in the HA-Pt. Pharmacokinetic data fit a 3-compartment model. CONCLUSIONS AND CLINICAL RELEVANCE HA-Pt treatment resulted in positive tumor responses in some dogs, primarily those with SCC. The adverse effect rate was high. IMPACT FOR HUMAN MEDICINE Oral SCC in dogs has characteristics similar to human head and neck SCC; these results could be useful in developing human treatments.
Maxillary sinus lift without grafting, and simultaneous implant placement: a prospective clinical study with a 51-month follow-up.
A prospective clinical study of maxillary sinus lift procedures in the posterior region of the maxilla, using only blood clot as filling material, was conducted. Seventeen patients underwent a maxillary sinus lift procedure; 20 maxillary sinus regions were operated on and a total of 25 implants were placed. The sinus mucosa was lifted together with the anterior wall of the osteotomized maxilla and supported by the implants placed. Computed tomography (CT) scans were obtained immediately postoperative (T initial) and at 3 (T1) and 51 (T2) months postoperative for the measurement of linear bone height and bone density (by grey tones). Only one implant was lost in the first stage (96% success). After dental prosthesis placement and during up to 51 months of follow-up, no implant was lost (100% success, second stage). The difference in mean bone height between T initial (5.94 mm) and T1 (13.14 mm), and between T initial and T2 (11.57 mm), was statistically significant (both P<0.001); comparison between T1 and T2 also presented a statistical difference (P<0.001). Bone density had increased at the end of the period analyzed, but this was not statistically significant (P>0.05). Thus, the maxillary sinus lift technique with immediate implant placement, filling with blood clot only, may be performed with a high success rate.
Multimedia Lab @ ACL WNUT NER Shared Task: Named Entity Recognition for Twitter Microposts using Distributed Word Representations
Due to the short and noisy nature of Twitter microposts, detecting named entities is often a cumbersome task. As part of the ACL 2015 Named Entity Recognition (NER) shared task, we present a semi-supervised system that detects 10 types of named entities. To that end, we leverage 400 million Twitter microposts to generate powerful word embeddings as input features and use a neural network to perform the classification. To further boost the performance, we train the network with dropout and leaky Rectified Linear Units (ReLUs). Our system achieved the fourth position in the final ranking, without using any kind of hand-crafted features such as lexical features or gazetteers.
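The two training ingredients named in the abstract, leaky ReLUs and dropout, can be written in a few lines of numpy; the negative slope and drop probability below are illustrative values, not the system's tuned settings.

```python
# Sketch: leaky ReLU activation and (inverted) dropout.
import numpy as np

rng = np.random.default_rng(0)

def leaky_relu(x, alpha=0.01):
    return np.where(x > 0, x, alpha * x)   # small slope for negative inputs

def dropout(x, p=0.5, train=True):
    if not train:
        return x                           # identity at inference time
    mask = rng.random(x.shape) >= p
    return x * mask / (1 - p)              # rescale to keep expectations equal

h = leaky_relu(rng.normal(size=(4, 8)))
print(dropout(h).shape)
```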
Design of a commercial hybrid VTOL UAV system
For the last 40 years, high-capacity Unmanned Air Vehicles have been used mostly for military services such as tracking, surveillance, engagement with active weapons, or, in the simplest terms, data acquisition. Unmanned Air Vehicles are also in commercial demand because of their advantages over manned vehicles, such as low manufacturing and operating cost, configuration flexibility depending on customer request, and not risking a pilot on difficult missions. Nevertheless, they still have open issues such as integration into manned-flight airspace, reliability, and airworthiness. Although civil Unmanned Air Vehicles comprise 3% of the UAV market, it is estimated that they will reach the 10% level within the next 5 years. UAV systems with their payload equipment (camera, hyperspectral imager, air data sensors, and similar equipment) are used more and more for civil applications: tracking and monitoring in agriculture, forestry, marine pollution, waste, emergency, and disaster situations; mapping for land registry and cadastre; wildlife and ecological monitoring; traffic monitoring; and geology and mining research. They can bring minimal risk and a cost advantage to many civil applications for which using manned air vehicles was previously risky and costly. Given the cost of Unmanned Air Vehicles designed and produced for military service, the civil market demands lower-cost, original products that are suitable for civil applications. Most of the civil applications mentioned above require UAVs that can take off from and land on a limited runway, move quickly in the operation region for mobile applications, and hover for stationary measurement and tracking when necessary. This points optimally to a hybrid unmanned vehicle concept, namely the Vertical Take Off and Landing (VTOL) UAV. At the same time, such a system requires a cost-efficient solution for applicability to, and convertibility between, different civil applications: an air vehicle with easily portable payloads depending on the application concept and with operation (hover and cruise flight time) programmable for the specific application. The main topic of this project is designing, producing, and testing the TURAC VTOL UAV, which has the following features: vertical takeoff, landing, and hovering like a helicopter; high cruise speed with a fixed wing; and a multi-functional design for civil purposes. The project involves two variants: the TURAC A variant is a fully electrical platform with two tilting electric motors in the front and a fixed electric motor with a ducted fan in the rear, while the TURAC B variant uses fuel cells.
Mining activity clusters from low-level event logs
Process mining techniques have proven to be a valuable tool for analyzing the execution of business processes. They rely on logs that identify events at an activity level, i.e., most process mining techniques assume that the information system explicitly supports the notion of activities/tasks. This is often not the case and only low-level events are being supported and logged. For example, users may provide different pieces of data which together constitute a single activity. The technique introduced in this paper uses clustering algorithms to derive activity logs from lower-level data modification logs, as produced by virtually every information system. This approach was implemented in the context of the ProM framework and its goal is to widen the scope of processes that can be analyzed using existing process mining techniques.
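As a rough illustration of the idea of grouping low-level event types into clusters that act as activities, the following sketch profiles each event type by where it tends to occur within a trace and clusters similar types. The event names, the single positional feature, and the cluster count are hypothetical; the actual technique implemented in ProM is considerably richer:

```python
from collections import defaultdict
import numpy as np
from sklearn.cluster import AgglomerativeClustering

# Toy low-level log: ordered data-modification events per case
# (event names are hypothetical).
traces = {1: ["set_name", "set_addr", "approve"],
          2: ["set_name", "set_addr", "approve"],
          3: ["set_addr", "set_name", "approve"]}

# Profile each event type by its mean normalized position in a trace:
# low-level events belonging to the same activity tend to occur close
# together, so their profiles end up similar.
positions = defaultdict(list)
for trace in traces.values():
    for i, ev in enumerate(trace):
        positions[ev].append(i / (len(trace) - 1))
types = sorted(positions)
X = np.array([[np.mean(positions[t])] for t in types])

# Each cluster of event types becomes one candidate activity.
labels = AgglomerativeClustering(n_clusters=2).fit_predict(X)
activity = defaultdict(list)
for t, lab in zip(types, labels):
    activity[lab].append(t)
print(dict(activity))   # e.g. {0: ['approve'], 1: ['set_addr', 'set_name']}
```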
Male Sexual Behavior
Outcome with the hyper-CVAD regimens in lymphoblastic lymphoma.
Therapy of lymphoblastic lymphoma (LL) has evolved with use of chemotherapy regimens modeled after those for acute lymphocytic leukemia (ALL). We treated 33 patients with LL with the intensive chemotherapy regimens hyper-CVAD (fractionated cyclophosphamide, vincristine, Adriamycin, and dexamethasone) or modified hyper-CVAD used for ALL at our institution. Induction consolidation was administered with 8 or 9 alternating cycles of chemotherapy over 5 to 6 months with intrathecal chemotherapy prophylaxis, followed by maintenance therapy. Consolidative radiation therapy was given to patients with mediastinal disease at presentation. No consolidation with autologous or allogeneic stem cell transplantation was performed. At diagnosis, 80% were T-cell immunophenotype, 70% were stages III to IV, 70% had mediastinal involvement, and 9% had central nervous system (CNS) disease. Of the patients, 30 (91%) achieved complete remission, and 3 (9%) achieved partial response. Within a median of 13 months, 10 patients (30%) relapsed or progressed. Estimates for 3-year progression-free and overall survival for the 33 patients were 66% and 70%, respectively. Estimates for the patients with known T-cell immunophenotype were 62% and 67%, respectively. No parameters (eg, age, stage, serum lactate dehydrogenase [LDH], beta(2) microglobulin) appeared to influence outcome except for CNS disease at presentation. Modification of the hyper-CVAD regimen with anthracycline intensification did not improve outcome. Other modifications of the program could include incorporation of monoclonal antibodies and/or nucleoside analogs, particularly for slow responders or those with residual mediastinal disease.
Semantic Web Search based on Ontology Modeling using Protege Reasoner
The Semantic Web builds on the existing Web by representing the meaning of information through well-defined vocabularies understood by people. Semantic search, in turn, improves the accuracy of a search by understanding the intent behind it and providing contextually relevant results. This paper describes a semantic approach to web search through a PHP application. The goal was to parse a user's browsing history and return semantically relevant web pages for the search query provided. The browser used for this purpose was Mozilla Firefox. The user's history was stored in a MySQL database, which, in turn, was accessed using PHP. The ontology, created from the browsing history, was then parsed for the entered search query, and the corresponding results were returned to the user, providing a semantically organized and relevant output.
How to Make Cognitive Illusions Disappear: Beyond “Heuristics and Biases”
Most so-called “errors” in probabilistic reasoning are in fact not violations of probability theory. Examples of such “errors” include overconfidence bias, conjunction fallacy, and base-rate neglect. Researchers have relied on a very narrow normative view, and have ignored conceptual distinctions—for example, single case versus relative frequency—fundamental to probability theory. By recognizing and using these distinctions, however, we can make apparently stable “errors” disappear, reappear, or even invert. I suggest what a reformed understanding of judgments under uncertainty might look like.
AZIMUT, a leg-track-wheel robot
AZIMUT is a mobile robotic platform that combines wheels, legs and tracks to move in three-dimensional environments. The robot is symmetrical and is made of four independent leg-track-wheel articulations. It can move with its articulations up, down or straight, or move sideways without changing the robot's orientation. To validate the concept, the first prototype developed measures 70.5 cm × 70.5 cm with the articulations up. It has a body clearance of 8.4 cm to 40.6 cm depending on the position of the articulations. The design of the robot is highly modular, with distributed embedded systems to control the different components of the robot.
Query-based summarization using MDL principle
Query-based text summarization is aimed at extracting essential information that answers the query from the original text. The answer is presented in a minimal, often predefined, number of words. In this paper we introduce a new unsupervised approach for query-based extractive summarization, based on the minimum description length (MDL) principle and employing the Krimp compression algorithm (Vreeken et al., 2011). The key idea of our approach is to select frequent word sets related to a given query that compress document sentences better and therefore describe the document better. A summary is extracted by selecting sentences that best cover query-related frequent word sets. The approach is evaluated on the DUC 2005 and DUC 2006 datasets, which are specifically designed for query-based summarization (DUC, 2005-2006). Its results are competitive with the best reported ones.
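The greedy selection step, choosing sentences that cover query-related frequent word sets, might look like the sketch below. A plain frequency threshold stands in for the MDL/Krimp machinery, so this illustrates only the coverage idea:

```python
from itertools import combinations

def frequent_word_sets(sentences, query_terms, min_support=2, max_size=2):
    # Hypothetical stand-in for Krimp: frequent word sets (size <= 2)
    # that contain at least one query term.
    counts = {}
    for s in sentences:
        words = sorted(set(s.lower().split()))
        for k in range(1, max_size + 1):
            for itemset in combinations(words, k):
                if set(itemset) & query_terms:
                    counts[itemset] = counts.get(itemset, 0) + 1
    return {f for f, c in counts.items() if c >= min_support}

def summarize(sentences, query, budget=2):
    query_terms = set(query.lower().split())
    fsets = frequent_word_sets(sentences, query_terms)
    summary, covered = [], set()
    for _ in range(min(budget, len(sentences))):
        def gain(s):
            words = set(s.lower().split())
            return sum(1 for f in fsets - covered if set(f) <= words)
        # Greedily pick the sentence covering most uncovered word sets.
        best = max((s for s in sentences if s not in summary), key=gain)
        covered |= {f for f in fsets if set(f) <= set(best.lower().split())}
        summary.append(best)
    return summary

sents = ["the court ruled on the appeal",
         "the appeal was filed in march",
         "markets rose on friday"]
print(summarize(sents, "court appeal"))
```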
Phone Merging For Code-Switched Speech Recognition
Speakers in multilingual communities often switch between or mix multiple languages in the same conversation. Automatic Speech Recognition (ASR) of code-switched speech faces many challenges, including the influence of phones of different languages on each other. This paper shows evidence that phone sharing between languages improves acoustic model performance for Hindi-English code-switched speech. We compare a baseline system built with separate phones for Hindi and English against systems where the phones were manually merged based on linguistic knowledge. Encouraged by the improved ASR performance after manually merging the phones, we further investigate multiple data-driven methods to identify phones to be merged across the languages. We present a detailed analysis of automatic phone merging in this language pair and its impact on individual phone accuracies and WER. Though the best performance gain of 1.2% WER was observed with manually merged phones, we show experimentally that the manual phone merge is not optimal.
Privacy-Preserving Deep Inference for Rich User Data on The Cloud
Deep neural networks are increasingly being used in a variety of machine learning applications applied to rich user data on the cloud. However, this approach introduces a number of privacy and efficiency challenges, as the cloud operator can perform secondary inferences on the available data. Recently, advances in edge processing have paved the way for more efficient, and private, data processing at the source for simple tasks and lighter models, though these remain a challenge for larger and more complicated models. In this paper, we present a hybrid approach for breaking down large, complex deep models for cooperative, privacy-preserving analytics. We do this by breaking down popular deep architectures and fine-tuning them in a particular way. We then evaluate the privacy benefits of this approach based on the information exposed to the cloud service. We also assess the local inference cost of different layers on a modern handset for mobile applications. Our evaluations show that, by using certain kinds of fine-tuning and embedding techniques and at a small processing cost, we can greatly reduce the level of information available to unintended tasks applied to the data on the cloud, and hence achieve the desired tradeoff between privacy and performance.
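A minimal sketch of the layer-splitting idea, assuming a standard torchvision CNN and an arbitrary split point (the paper's fine-tuning and embedding steps are omitted):

```python
import torch
import torchvision.models as models

# Split a standard CNN so early layers run on-device and only their
# harder-to-invert features are sent to the cloud. AlexNet and the
# split index are illustrative choices, not the paper's configuration.
full = models.alexnet(weights=None)
split_at = 6   # assumed split point inside the feature extractor

local_part = torch.nn.Sequential(*list(full.features.children())[:split_at])
cloud_part = torch.nn.Sequential(
    *list(full.features.children())[split_at:],
    full.avgpool, torch.nn.Flatten(1), full.classifier)

x = torch.randn(1, 3, 224, 224)   # user's private input
features = local_part(x)          # computed on the handset
logits = cloud_part(features)     # only features leave the device
print(features.shape, logits.shape)
```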
The effect of narrowed segment length on the degree of early postoperative dysphagia in laparoscopic Nissen fundoplication.
BACKGROUND/AIMS Although laparoscopic Nissen fundoplication is the gold standard in the surgical treatment of gastroesophageal reflux disease, it may cause troublesome complications like dysphagia. In this study, we demonstrated the effect of narrowed segment length on early dysphagia in patients. MATERIALS AND METHODS Forty-one patients who underwent laparoscopic Nissen fundoplication by a single surgeon between January 2007 and November 2008 were reviewed. Dysphagia scores were assessed by a question in the Gastrointestinal Quality of Life Index questionnaire and recorded preoperatively and at 1 month and 6 months. Barium esophagogram was performed for all patients at 1 month. Narrowed segment length was measured on esophagogram. Patients were divided into two groups (Group 1, ≤30 mm; Group 2, >30 mm). Dysphagia scores preoperatively and at 1 month and 6 months were compared between the two groups. RESULTS The two groups were homogeneous in age, gender, body mass index, and preoperative dysphagia score. We were unable to demonstrate any difference in preoperative and postoperative dysphagia scores between the two groups. CONCLUSIONS In this study, we used subjective data for grade of dysphagia and esophagogram for wrap length instead of manometric data. In our opinion, there is no effect of narrowed segment length on the degree of early postoperative dysphagia in patients undergoing laparoscopic Nissen fundoplication.
My science tutor: A conversational multimedia virtual tutor for elementary school science
This article describes My Science Tutor (MyST), an intelligent tutoring system designed to improve science learning by students in 3rd, 4th, and 5th grades (7 to 11 years old) through conversational dialogs with a virtual science tutor. In our study, individual students engage in spoken dialogs with the virtual tutor Marni during 15 to 20 minute sessions following classroom science investigations to discuss and extend concepts embedded in the investigations. The spoken dialogs in MyST are designed to scaffold learning by presenting open-ended questions accompanied by illustrations or animations related to the classroom investigations and the science concepts being learned. The focus of the interactions is to elicit self-expression from students. To this end, Marni applies some of the principles of Questioning the Author, a proven approach to classroom conversations, to challenge students to think about and integrate new concepts with prior knowledge to construct enriched mental models that can be used to explain and predict scientific phenomena. In this article, we describe how spoken dialogs using Automatic Speech Recognition (ASR) and natural language processing were developed to stimulate students' thinking, reasoning and self explanations. We describe the MyST system architecture and Wizard of Oz procedure that was used to collect data from tutorial sessions with elementary school students. Using data collected with the procedure, we present evaluations of the ASR and semantic parsing components. A formal evaluation of learning gains resulting from system use is currently being conducted. This paper presents survey results of teachers' and children's impressions of MyST.
Dynamic Data Selection for Neural Machine Translation
Intelligent selection of training data has proven a successful technique to simultaneously increase training efficiency and translation performance for phrase-based machine translation (PBMT). With the recent increase in popularity of neural machine translation (NMT), we explore in this paper to what extent and how NMT can also benefit from data selection. While state-of-the-art data selection (Axelrod et al., 2011) consistently performs well for PBMT, we show that gains are substantially lower for NMT. Next, we introduce dynamic data selection for NMT, a method in which we vary the selected subset of training data between different training epochs. Our experiments show that the best results are achieved when applying a technique we call gradual fine-tuning, with improvements up to +2.6 BLEU over the original data selection approach and up to +3.1 BLEU over a general baseline.
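The dynamic selection loop might be sketched as follows, with add-alpha smoothed unigram language models standing in for the cross-entropy-difference scoring of Axelrod et al. (2011), and with an assumed shrink factor between epochs:

```python
import math
from collections import Counter

def unigram_logprob(counts, total, sent, alpha=0.1, vocab=10_000):
    # Add-alpha smoothed unigram log-probability per token (a crude
    # stand-in for the language models used in data selection).
    words = sent.split()
    return sum(math.log((counts[w] + alpha) / (total + alpha * vocab))
               for w in words) / max(len(words), 1)

def relevance(sent, in_lm, gen_lm):
    # Cross-entropy difference: higher means more in-domain-like.
    return unigram_logprob(*in_lm, sent) - unigram_logprob(*gen_lm, sent)

def gradual_fine_tuning(pairs, in_domain, epochs=4, shrink=0.5):
    in_counts = Counter(w for s in in_domain for w in s.split())
    gen_counts = Counter(w for s, _ in pairs for w in s.split())
    in_lm = (in_counts, sum(in_counts.values()))
    gen_lm = (gen_counts, sum(gen_counts.values()))
    ranked = sorted(pairs, key=lambda p: relevance(p[0], in_lm, gen_lm),
                    reverse=True)
    size = len(ranked)
    for epoch in range(epochs):
        yield epoch, ranked[:max(1, size)]  # train one epoch on this subset
        size = int(size * shrink)           # start broad, focus over time

pairs = [("gene expression in cells", "tgt1"),
         ("buy cheap watches online", "tgt2"),
         ("protein folding dynamics", "tgt3")]
in_domain = ["gene regulation", "protein expression in cells"]
for epoch, subset in gradual_fine_tuning(pairs, in_domain, epochs=2):
    print(epoch, [src for src, _ in subset])
```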
… Based on LDA Topic Modelling
A large amount of digital text is generated every day. Effectively searching, managing and exploring this text data has become an important task. In this paper, we first present an introduction to text mining and to the probabilistic topic model Latent Dirichlet Allocation. Two experiments are then described: topic modelling of Wikipedia articles and of users' tweets. The former builds a document topic model, aiming at a topic-based solution for searching, exploring and recommending articles. The latter builds a user topic model, providing an analysis of Twitter users' interests. The experimental process, including data collection, data pre-processing and model training, is fully documented and commented. Furthermore, the conclusions and applications of this paper could serve as a useful computational tool for social and business research.
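A minimal LDA example in the spirit of the experiments described above, using scikit-learn on a toy corpus (the topic count is an arbitrary choice):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = ["the cat sat on the mat",
        "dogs and cats are pets",
        "stock markets fell sharply today",
        "investors worry about market volatility"]

vec = CountVectorizer(stop_words="english")   # bag-of-words counts
X = vec.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

terms = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-3:][::-1]]
    print(f"topic {k}: {top}")
print(lda.transform(X))   # per-document topic distributions
```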
On flowering patterns of alien species: 2. Robinia pseudoacacia, R. × ambigua, and R. neomexicana
A comparative study of the structure of the flower in three species of Robinia L., R. pseudoacacia, R. × ambigua, and R. neomexicana, was carried out. The widely naturalized R. pseudoacacia, as compared to the two other species, has the smallest sizes of flower organs at all stages of development. Qualitative traits that describe each phase of the flower development were identified. A set of microscopic morphological traits of the flower (both quantitative and qualitative) was analyzed. Additional taxonomic traits were identified: shape of anthers, size and shape of pollen grains, and the extent of pollen fertility.
Information Leaks Without Memory Disclosures: Remote Side Channel Attacks on Diversified Code
Code diversification has been proposed as a technique to mitigate code reuse attacks, which have recently become the predominant way for attackers to exploit memory corruption vulnerabilities. As code reuse attacks require detailed knowledge of where code is in memory, diversification techniques attempt to mitigate these attacks by randomizing what instructions are executed and where code is located in memory. As an attacker cannot read the diversified code, it is assumed he cannot reliably exploit the code. In this paper, we show that the fundamental assumption behind code diversity can be broken, as executing the code reveals information about the code. Thus, we can leak information without needing to read the code. We demonstrate how an attacker can utilize a memory corruption vulnerability to create side channels that leak information in novel ways, removing the need for a memory disclosure vulnerability. We introduce seven new classes of attacks that involve fault analysis and timing side channels, where each allows a remote attacker to learn how code has been diversified.
Population ecology and the new economics: guidelines for a steady-state economy.
This article is concerned with the necessity of modifying the well-known classical theory of demographic transition because of recent developments, particularly in the worldwide ecological situation. The modification affects statements about developing as well as industrialized countries and is urgently needed in view of the dominant role transition theory plays in practice, planning and forecasting. The basic ideas of a demoecological transition theory are developed. The final part drafts, with the necessary consequences, new guidelines for economic action, that is, alternatives and room to manoeuvre within a steady-state economy.
Genetic analysis of abiotic stress tolerance in crops.
Abiotic stress tolerance is complex, but as phenotyping technologies improve, components that contribute to abiotic stress tolerance can be quantified with increasing ease. In parallel with these phenomics advances, genetic approaches with more complex genomes are becoming increasingly tractable as genomic information in non-model crops increases and even whole crop genomes can be re-sequenced. Thus, genetic approaches to elucidating the molecular basis to abiotic stress tolerance in crops are becoming more easily achievable.
Toward a new generation of agricultural system data, models, and knowledge products: State of agricultural systems science
We review the current state of agricultural systems science, focusing in particular on the capabilities and limitations of agricultural systems models. We discuss the state of models relative to five different Use Cases spanning field, farm, landscape, regional, and global spatial scales and engaging questions in past, current, and future time periods. Contributions from multiple disciplines have made major advances relevant to a wide range of agricultural system model applications at various spatial and temporal scales. Although current agricultural systems models have features that are needed for the Use Cases, we found that all of them have limitations and need to be improved. We identified common limitations across all Use Cases, namely 1) a scarcity of data for developing, evaluating, and applying agricultural system models and 2) inadequate knowledge systems that effectively communicate model results to society. We argue that these limitations are greater obstacles to progress than gaps in conceptual theory or available methods for using system models. New initiatives on open data show promise for addressing the data problem, but there also needs to be a cultural change among agricultural researchers to ensure that data for addressing the range of Use Cases are available for future model improvements and applications. We conclude that multiple platforms and multiple models are needed for model applications for different purposes. The Use Cases provide a useful framework for considering capabilities and limitations of existing models and data.
Online Ensemble Learning of Data Streams with Gradually Evolved Classes
Class evolution, the phenomenon of class emergence and disappearance, is an important research topic for data stream mining. All previous studies implicitly regard class evolution as a transient change, which is not true for many real-world problems. This paper concerns the scenario where classes emerge or disappear gradually. A class-based ensemble approach, namely Class-Based ensemble for Class Evolution (CBCE), is proposed. By maintaining a base learner for each class and dynamically updating the base learners with new data, CBCE can rapidly adjust to class evolution. A novel under-sampling method for the base learners is also proposed to handle the dynamic class-imbalance problem caused by the gradual evolution of classes. Empirical studies demonstrate the effectiveness of CBCE in various class evolution scenarios in comparison to existing class evolution adaptation methods.
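A rough sketch of the class-based idea, one incrementally updated one-vs-rest learner per class created when the class first emerges, is shown below. The dynamic under-sampling step is omitted and the data are synthetic:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

class ClassBasedEnsemble:
    """Sketch of a class-based ensemble in the spirit of CBCE: one
    one-vs-rest base learner per class, created on class emergence
    and updated incrementally with new data."""
    def __init__(self):
        self.learners = {}

    def update(self, X, y):
        for c in np.unique(y):
            if c not in self.learners:   # class emergence
                self.learners[c] = SGDClassifier(loss="log_loss")
            binary = (y == c).astype(int)
            self.learners[c].partial_fit(X, binary, classes=[0, 1])

    def predict(self, X):
        classes = list(self.learners)
        scores = np.column_stack(
            [self.learners[c].decision_function(X) for c in classes])
        return np.array(classes)[scores.argmax(axis=1)]

rng = np.random.default_rng(0)
model = ClassBasedEnsemble()
X1 = rng.normal(size=(50, 2)); y1 = (X1[:, 0] > 0).astype(int)
model.update(X1, y1)                      # classes 0 and 1 present
X2 = rng.normal(loc=3, size=(20, 2)); y2 = np.full(20, 2)
model.update(X2, y2)                      # class 2 emerges later
print(model.predict(rng.normal(size=(5, 2))))
```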
Intravitreal bevacizumab with or without triamcinolone for refractory diabetic macular edema; a placebo-controlled, randomized clinical trial
To evaluate the effect of three intravitreal injections of bevacizumab (IVB) alone or combined with triamcinolone (IVT) in the first injection for treatment of refractory diabetic macular edema (DME). In this prospective, placebo-controlled, randomized clinical trial, 115 eyes of 101 patients with refractory DME were included. Subjects were randomly assigned to one of the three study arms: 1) three injections of IVB (1.25 mg/0.05 ml) at 6-week intervals, 2) combined IVB and IVT (1.25 mg/0.05 ml and 2 mg/0.05 ml respectively) followed by two injections of IVB at 6-week intervals, and 3) sham injection (control group). The primary outcome measure was change in central macular thickness (CMT). Secondary outcome measures were change in best-corrected logMAR visual acuity (BCVA) and incidence of potential adverse events. Central macular thickness was reduced significantly in both the IVB and IVB/IVT groups. At week 24, CMT change compared to the baseline was −95.7 μm (95% CI, −172.2 to −19.26) in the IVB group, −92.1 μm (95% CI, −154.4 to −29.7) in the IVB/IVT group, and 34.9 μm (95% CI, 7.9 to 61.9) in the control group. There was a significant difference between the IVB and control groups (P = 0.012) and between the IVB/IVT and control groups (P = 0.022). Improvement of BCVA was initiated at weeks 6 and 12 in the IVB/IVT and IVB groups respectively. In terms of BCVA change compared to the baseline at 24 weeks, the differences between the IVB and control groups (P = 0.01) and also between the IVB/IVT and control groups (P = 0.006) were significant. No significant differences were detected in the changes of CMT and BCVA between the IVB and IVB/IVT groups (P = 0.99). Anterior chamber reaction was noticed in eight (19.5%) and seven (18.9%) eyes respectively in the IVB and IVB/IVT groups the day after injection, and it resolved with no sequelae. Elevation of IOP occurred in three eyes (8.1%) in the IVB/IVT group. Three consecutive intravitreal injections of bevacizumab had a beneficial effect on refractory DME in terms of CMT reduction and BCVA improvement. Addition of triamcinolone in the first injection seemed to induce earlier visual improvement; however, it did not show any significant additive effect later during follow-up.
Emotion recognition system using brain and peripheral signals: Using correlation dimension to improve the results of EEG
This paper proposes a multimodal fusion of brain and peripheral signals for emotion detection. The input signals are electroencephalogram, galvanic skin resistance, temperature, blood pressure and respiration, which can reflect the influence of emotion on the central nervous system and the autonomic nervous system, respectively. The acquisition protocol is based on a subset of pictures corresponding to three specific areas of the valence-arousal emotional space (positively excited, negatively excited, and calm). Features were extracted from the input signals and, to improve the results, the correlation dimension, a strong nonlinear feature, was used for the brain signals. The performance of the Quadratic Discriminant Classifier was evaluated on different feature sets: peripheral signals, EEG signals, and both. Comparing the results across feature sets, EEG signals appear to perform better than the other physiological signals, and the results confirm the interest of using brain signals together with peripheral signals in emotion assessment. Given the improvement obtained in the EEG results, it seems that nonlinear features lead to a better understanding of how emotional activity works.
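The feature-set comparison described above can be sketched with scikit-learn's quadratic discriminant classifier on synthetic stand-in features (real EEG and peripheral features, including the correlation dimension, would replace the random arrays):

```python
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 150
y = rng.integers(0, 3, n)   # calm / positively excited / negatively excited
eeg = rng.normal(size=(n, 8)) + y[:, None]                # stand-in EEG features
peripheral = rng.normal(size=(n, 4)) + 0.3 * y[:, None]   # GSR, temp, BP, resp

# Compare the three feature sets with the same QDA classifier.
for name, X in [("peripheral", peripheral), ("EEG", eeg),
                ("both", np.hstack([eeg, peripheral]))]:
    acc = cross_val_score(QuadraticDiscriminantAnalysis(), X, y, cv=5).mean()
    print(f"{name}: {acc:.2f}")
```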
The Precinematic Novel: Zola's La Bête humaine
This essay is a study of Zola's 1890 crime novel as it anticipates the aesthetics of cinema. Theoretical reflections on technology and violence are drawn from the works of Walter Benjamin and René Girard.
Continuous Wave Potassium Titanyl Phosphate Laser Treatment is Safe and Effective for Xanthelasma Palpebrarum.
BACKGROUND Although not an accepted standard treatment, the 532-nm continuous wave potassium titanyl phosphate (CW-KTP) laser might be a powerful device to treat xanthelasma palpebrarum (XP). OBJECTIVE To determine the safety and efficacy of CW-KTP laser treatment for XP. MATERIALS AND METHODS Between January 2013 and January 2015, 30 consecutive patients with XP were treated with a 532-nm CW-KTP laser (spot size: 0.9 mm, power: 5.0 W, fluence: 36-38 J/cm², pulse width: 46 milliseconds, frequency: 2.0 Hz, passes per session: 3). In a retrospective study design, safety and efficacy data were collected and analyzed. RESULTS Overall, 29/30 (97%) of patients had an excellent cosmetic result. Downtime was 1 week with crusted lesions. Although slight hypopigmentation was common, only 1/30 (3%) patients had hypopigmentation that was more than expected. Recurrences (13/30; 43%) were frequent, so yearly maintenance therapy was warranted. No major side effects were noticed. CONCLUSION Continuous wave KTP laser therapy is safe and highly effective for XP, although regular follow-up treatments are often necessary to maintain the achieved cosmetic results.
Stoichiometric active site modification observed by alkali ion titrations of Sn-Beta
Sn-Beta zeolite can convert carbohydrate feedstocks through different pathways into a variety of chemical building blocks. Alkali salts influence the selectivity between these pathways, but the details of the alkali ion effect on the catalyst have remained unclear. Here, we combine the systematic variation of tin content in Sn-Beta zeolite with alkali ion titrations and functional assays to assess the stoichiometry of alkali binding and the prospect of predicting operation optima from catalyst properties. The approach is used to evaluate the product selectivity of defined catalyst states for the conversion of glucose to methyl lactate and to characterise the catalytic behavior of the active site with respect to the degree of titration. The optimum selectivity to methyl lactate was found at similar ratios of alkali and active tin for catalysts of different tin loadings, indicating a stoichiometric correlation between added alkali ions and tin content in the Sn-Beta zeolite. The observations also indicate that a double dissociation of the active site occurs and that titration between three states is possible. The proton form of Sn-Beta has a poor methyl lactate selectivity, whereas a single exchange of a proton by potassium at the active site leads to a catalytic form with a very high selectivity, while double exchange leads to a catalytically inactive state of the active site. Exchange phenomena at the active site were corroborated by FT-IR spectroscopy, which showed that potassium interacts with hydroxyl groups in the vicinity of Sn.
Baltic foreign policy making establishments of the 1990s: Influential institutional and individual actors
Abstract This article examines, evaluates, and compares the role of political institutions in the foreign policy making process of Estonia, Latvia, and Lithuania in the 1990s. The central claim advanced is that the extent of influence exercised by political institutions in transitional states was largely conditioned by, and depended on, the individuals who headed these institutions. It is argued that the stronger the personality at the top of a political institution, the greater and more influential role it played in the foreign policy making process of the country.
Black-Box Calibration for ADCs With Hard Nonlinear Errors Using a Novel INL-Based Additive Code: A Pipeline ADC Case Study
This paper presents a digital nonlinearity calibration technique for ADCs with strong input–output discontinuities between adjacent codes, such as pipeline, algorithmic, and SAR ADCs with redundancy. In this kind of converter, the ADC transfer function often involves multivalued regions, where conventional integral-nonlinearity (INL)-based calibration methods tend to miscalibrate, negatively affecting the ADC’s performance. As a solution to this problem, this paper proposes a novel INL-based calibration which incorporates information from the ADC’s internal signals to provide a robust estimation of static nonlinear errors for multivalued ADCs. The method is fully generalizable and can be applied to any existing design as long as there is access to internal digital signals. In pipeline or subranging ADCs, this implies access to partial subcodes before digital correction; for algorithmic or SAR ADCs, conversion bit/bits per cycle are used. As a proof-of-concept demonstrator, the experimental results for a 1.2 V 23 mW 130 nm-CMOS pipeline ADC with a SINAD of 58.4 dBc (in nominal conditions without calibration) is considered. In a stressed situation with 0.95 V of supply, the ADC has SINAD values of 47.8 dBc and 56.1 dBc, respectively, before and after calibration (total power consumption, including the calibration logic, being 15.4 mW).
Smart Interactive Comprehensive Learning Aid: Practical Application of Bruner's Theories in Primary Education
Smart Interactive Comprehensive Learning Aid (SICLA) is a learning and teaching aid for primary education, implemented based on Bruner's (1996) theory of children's development. The main objective is to provide an automated interactive learning tool for children, who can learn and acquire knowledge with minimal teacher support. According to Bruner's theory, effective teaching and learning can be achieved through three modes of representation: enactive (action-based), iconic (image-based), and symbolic (language-based), and these are the main focus of this automated tool. The interactive solution focuses mainly on the development of a child's language, enactive, and cognitive skills through software-based activities. Activities are selected carefully by considering the targeted skill, the child's level of knowledge, and the child's age; the selected activities therefore lead to the acquisition of new knowledge within an interactive environment, and the child's performance is evaluated with the aid of the intelligent components of the software application. The activities include a letter identifier based on object tracking, speech capturing, a story builder with a smart story board, memory activities, and innovative object designing. The implemented solution is capable of being extended to capture new knowledge from a parent or teacher, which the child can then acquire with or without their support. Advanced neural network techniques help capture the child's knowledge and evaluate it with constructive feedback at execution time. A key intention of this automated tool is to deliver a user-friendly automated learning tool grounded in proven, effective teaching techniques at an affordable cost.
Breeding technologies to increase crop production in a changing world.
To feed the several billion people living on this planet, the production of high-quality food must increase with reduced inputs, but this accomplishment will be particularly challenging in the face of global environmental change. Plant breeders need to focus on traits with the greatest potential to increase yield. Hence, new technologies must be developed to accelerate breeding through improving genotyping and phenotyping methods and by increasing the available genetic diversity in breeding germplasm. The most gain will come from delivering these technologies in developing countries, but the technologies will have to be economically accessible and readily disseminated. Crop improvement through breeding brings immense value relative to investment and offers an effective approach to improving food security.
Ontology-based information extraction for subject-focussed automatic essay evaluation
Automatic essay evaluation (AEE) systems are designed to assist a teacher in the task of classroom assessment in order to alleviate the demands of manual subject evaluation. However, although numerous AEE systems are available, most of these systems do not use elaborate domain knowledge for evaluation, which limits their ability to give informative feedback to students and also their ability to constructively grade a student based on a particular domain of study. This paper is aimed at improving on the achievements of previous studies by providing a subject-focussed evaluation system that considers the domain knowledge while scoring and provides informative feedback to its user. The study employs a combination of techniques such as system design and modelling using Unified Modelling Language (UML), information extraction, ontology development, data management, and semantic matching in order to develop a prototype subject-focussed AEE system. The developed system was evaluated to determine its level of performance and usability. The result of the usability evaluation showed that the system has an overall mean rating of 4.17 out of maximum of 5, which indicates ‘good usability’. In terms of performance, the assessment done by the system was also found to have sufficiently high correlation with those done by domain experts, in addition to providing appropriate feedback to the user.
CONNECT: re-examining conventional wisdom for designing NoCs in the context of FPGAs
An FPGA is a peculiar hardware realization substrate in terms of the relative speed and cost of logic vs. wires vs. memory. In this paper, we present a Network-on-Chip (NoC) design study from the mindset of NoC as a synthesizable infrastructural element to support emerging System-on-Chip (SoC) applications on FPGAs. To support our study, we developed CONNECT, an NoC generator that can produce synthesizable RTL designs of FPGA-tuned multi-node NoCs of arbitrary topology. The CONNECT NoC architecture embodies a set of FPGA-motivated design principles that uniquely influence key NoC design decisions, such as topology, link width, router pipeline depth, network buffer sizing, and flow control. We evaluate CONNECT against a high-quality publicly available synthesizable RTL-level NoC design intended for ASICs. Our evaluation shows a significant gain in specializing NoC design decisions to FPGAs' unique mapping and operating characteristics. For example, in the case of a 4x4 mesh configuration evaluated using a set of synthetic traffic patterns, we obtain comparable or better performance than the state-of-the-art NoC while reducing logic resource cost by 58%, or alternatively, achieve 3-4x better performance for approximately the same logic resource usage. Finally, to demonstrate CONNECT's flexibility and extensive design space coverage, we also report synthesis and network performance results for several router configurations and for entire CONNECT networks.
Mixed-initiative co-creativity
Creating and designing with a machine: do we merely create together (co-create) or can a machine truly foster our creativity as human creators? When does such co-creation foster the co-creativity of both humans and machines? This paper investigates the simultaneous and/or iterative process of human and computational creators in a mixed-initiative fashion within the context of game design and attempts to draw from both theory and praxis towards answering the above questions. For this purpose, we first discuss the strong links between mixed-initiative co-creation and theories of human and computational creativity. We then introduce an assessment methodology of mixed-initiative co-creativity and, as a proof of concept, evaluate Sentient Sketchbook as a co-creation tool for game design. Core findings suggest that tools such as Sentient Sketchbook are not mere game authoring systems or mere enablers of creation but, instead, foster human creativity and realize mixed-initiative co-creativity.
Double Embeddings and CNN-based Sequence Labeling for Aspect Extraction
One key task of fine-grained sentiment analysis of product reviews is to extract product aspects or features that users have expressed opinions on. This paper focuses on supervised aspect extraction using deep learning. Unlike other highly sophisticated supervised deep learning models, this paper proposes a novel and yet simple CNN model employing two types of pre-trained embeddings for aspect extraction: general-purpose embeddings and domain-specific embeddings. Without using any additional supervision, this model achieves surprisingly good results, outperforming state-of-the-art sophisticated existing methods. To our knowledge, this paper is the first to report such a double-embeddings-based CNN model for aspect extraction and achieve very good results.
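A sketch of the double-embedding idea, concatenating a general-purpose and a domain-specific embedding before a convolutional sequence labeler, follows. The vocabulary size, dimensions, and BIO tag set are assumptions:

```python
import torch
import torch.nn as nn

VOCAB, D_GEN, D_DOM, N_TAGS = 5000, 300, 100, 3   # assumed sizes (BIO tags)

class DoubleEmbCNN(nn.Module):
    """Sequence-labeling CNN over the concatenation of a general-purpose
    and a domain-specific embedding, in the spirit of the model above."""
    def __init__(self):
        super().__init__()
        self.gen = nn.Embedding(VOCAB, D_GEN)   # e.g. general pre-trained
        self.dom = nn.Embedding(VOCAB, D_DOM)   # e.g. in-domain pre-trained
        self.conv = nn.Conv1d(D_GEN + D_DOM, 128, kernel_size=5, padding=2)
        self.out = nn.Linear(128, N_TAGS)

    def forward(self, tokens):                  # (batch, seq_len)
        e = torch.cat([self.gen(tokens), self.dom(tokens)], dim=-1)
        h = torch.relu(self.conv(e.transpose(1, 2))).transpose(1, 2)
        return self.out(h)                      # per-token tag scores

scores = DoubleEmbCNN()(torch.randint(0, VOCAB, (2, 20)))
print(scores.shape)   # (2, 20, 3)
```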
Particle Swarm Optimization for Nonlinear Model Predictive Control
The paper proposes two Nonlinear Model Predictive Control schemes that uncover a synergistic relationship between on-line receding horizon style computation and Particle Swarm Optimization, thus benefiting from both the performance advantages of on-line computation and the desirable properties of Particle Swarm Optimization. After developing these techniques for the unconstrained nonlinear optimal control problem, the entire design methodology is illustrated by a simulated inverted pendulum on a cart, and compared with a particular numerical linearization technique exploiting conventional convex optimization methods. This is then extended to input constrained nonlinear systems, offering a promising new paradigm for nonlinear optimal control design.
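A compact sketch of the receding-horizon PSO idea on a toy system appears below; the swarm parameters, dynamics, and cost are illustrative choices, not the paper's scheme:

```python
import numpy as np

def pso_mpc_step(x0, dynamics, cost, horizon=10, particles=30, iters=50,
                 u_min=-1.0, u_max=1.0, w=0.7, c1=1.5, c2=1.5):
    """Each particle is a candidate control sequence over the horizon;
    PSO minimizes the predicted cost; only the first control is applied
    (receding horizon)."""
    rng = np.random.default_rng(0)
    U = rng.uniform(u_min, u_max, (particles, horizon))
    V = np.zeros_like(U)

    def rollout(u_seq):
        x, J = np.array(x0, float), 0.0
        for u in u_seq:
            J += cost(x, u)
            x = dynamics(x, u)
        return J

    pbest, pcost = U.copy(), np.array([rollout(u) for u in U])
    g = pbest[pcost.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(U.shape), rng.random(U.shape)
        V = w * V + c1 * r1 * (pbest - U) + c2 * r2 * (g - U)
        U = np.clip(U + V, u_min, u_max)
        costs = np.array([rollout(u) for u in U])
        improved = costs < pcost
        pbest[improved], pcost[improved] = U[improved], costs[improved]
        g = pbest[pcost.argmin()].copy()
    return g[0]   # apply the first control input only

# Toy double integrator with quadratic cost (assumed, for illustration).
dyn = lambda x, u: np.array([x[0] + 0.1 * x[1], x[1] + 0.1 * u])
cst = lambda x, u: x @ x + 0.01 * u * u
print(pso_mpc_step([1.0, 0.0], dyn, cst))
```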
Are Accuracy and Robustness Correlated?
Machine learning models are vulnerable to adversarial examples formed by applying small carefully chosen perturbations to inputs that cause unexpected classification errors. In this paper, we perform experiments on various adversarial example generation approaches with multiple deep convolutional neural networks including Residual Networks, the best performing models on ImageNet Large-Scale Visual Recognition Challenge 2015. We compare the adversarial example generation techniques with respect to the quality of the produced images, and measure the robustness of the tested machine learning models to adversarial examples. Finally, we conduct large-scale experiments on cross-model adversarial portability. We find that adversarial examples are mostly transferable across similar network topologies, and we demonstrate that better machine learning models are less vulnerable to adversarial examples.
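One widely used generation approach of the kind compared above is the fast gradient sign method (FGSM). A minimal PyTorch sketch, with a toy model standing in for the networks studied in the paper:

```python
import torch
import torch.nn as nn

def fgsm(model, x, y, eps=0.03):
    """Fast gradient sign method: perturb the input in the direction
    that increases the loss, clipped to a valid image range."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

# Toy model and data, for illustration only.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(4, 1, 28, 28)
y = torch.randint(0, 10, (4,))
x_adv = fgsm(model, x, y)
print((model(x).argmax(1) == model(x_adv).argmax(1)).tolist())
```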
Concordant lymphoma of cutaneous anaplastic large cell lymphoma and systemic B-cell leukaemia.
SIR, Arndt–Gottron scleromyxoedema is a rare fibromucinous disorder regarded as a variant of the lichen myxoedematosus. The diagnostic criteria are a generalized papular and sclerodermoid eruption, a microscopic triad of mucin deposition, fibroblast proliferation and fibrosis, a monoclonal gammopathy (mostly IgG-k paraproteinaemia) and the absence of a thyroid disorder. This disease initially presents with sclerosis of the skin and clusters of small lichenoid papules with a predilection for the face, neck and the forearm. Progressively, the skin lesions can become more widespread and the induration of skin can result in a scleroderma-like condition with sclerodactyly and microstomia, reduced mobility and disability. Systemic involvement is common, e.g. upper gastrointestinal dysmotility, proximal myopathy, joint contractures, neurological complications such as psychic disturbances and encephalopathy, obstructive/restrictive lung disease, as well as renal and cardiovascular involvement. Numerous treatment options have been described in the literature. These include corticosteroids, retinoids, thalidomide, extracorporeal photopheresis (ECP), psoralen plus ultraviolet A radiation, ciclosporin, cyclophosphamide, melphalan or autologous stem cell transplantation. In September 1999, a 48-year-old white female first noticed an erythematous induration with a lichenoid papular eruption on her forehead. Three months later the lesions became more widespread including her face (Fig. 1a), neck, shoulders, forearms (Fig. 2a) and legs. When the patient first presented in our department in June 2000, she had problems opening her mouth fully as well as clenching both hands or moving her wrist. The histological examination of the skin biopsy was highly characteristic of Arndt–Gottron scleromyxoedema. Full blood count, blood morphology, bone marrow biopsy, bone scintigraphy and thyroid function tests were normal. Serum immunoelectrophoresis revealed an IgG-k paraproteinaemia. Urinary Bence-Jones proteins were negative. No systemic involvement was disclosed. We initiated ECP therapy in August 2000, initially at 2-week intervals (later monthly) on two succeeding days. When there was no improvement after 3 months, we also administered cyclophosphamide (Endoxana; Baxter Healthcare Ltd, Newbury, U.K.) at a daily dose of 100 mg with mesna 400 mg (Uromitexan; Baxter) prophylaxis. The response to this therapy was rather moderate. In February 2003 the patient developed a change of personality and loss of orientation and was admitted to hospital. The extensive neurological, radiological and microbiological diagnostics were unremarkable at that time. A few hours later the patient had seizures and was put on artificial ventilation in an intensive care unit. The patient was comatose for several days. A repeated magnetic resonance imaging scan was still normal, but the cerebrospinal fluid tap showed a dysfunction of the blood–cerebrospinal fluid barrier. A bilateral loss of somatosensory evoked potentials was noticeable. The neurological symptoms were classified as a ‘dermatoneuro’ syndrome, a rare extracutaneous manifestation of scleromyxoedema. After initiation of treatment with methylprednisolone (Urbason; Aventis, Frankfurt, Germany) the neurological situation normalized in the following 2 weeks. No further medical treatment was necessary.
In April 2003 therapy options were re-evaluated and the patient was started and maintained on a 7-day course of melphalan 7.5 mg daily (Alkeran; GlaxoSmithKline, Uxbridge, U.K.) in combination with prednisolone 40 mg daily (Decortin H; Merck, Darmstadt, Germany) every 6 weeks.
Validation of the Brazilian version of Guy's neurological disability scale.
The Guy's neurological disability scale (GNDS) has recently been introduced as a new measure of disability in multiple sclerosis. It is patient-oriented, multidimensional, and not biased towards any particular disability. The purpose of the present study was to validate the Brazilian version of the GNDS. The adaptation of the scale was based on the translation/back-translation methodology. Sixty-two patients with clinically definite multiple sclerosis (CDMS) according to Poser's criteria were recruited for this study. GNDS was administered individually to each subject. The EDSS and the ambulation index (AI) scores were assigned by a neurologist. The intraclass correlation coefficient and the Cronbach's alpha values of the Brazilian version of GNDS (0.94 and 0.83, respectively) were comparable to the original one (0.98 and 0.79, respectively). Furthermore, the factor analysis of the Brazilian version of GNDS suggested, as in the original article, a four-factor solution which accounted for 68.8% of the total variance. The Brazilian version of GNDS was found to be clinically relevant as it correlated significantly with the EDSS and AI. In conclusion, the Brazilian version of GNDS can be considered an important tool to evaluate disability in MS patients, with clinical usefulness and psychometric soundness.
Why Firms Mandate ISO 14001 Certification
Thousands of facilities worldwide have certified to International Organization for Standardization (ISO) 14001, the international environmental management system standard, and previous research typically has studied these certification decisions at the facility level. However, significant anecdotal evidence indicates that firms may have a strong role, and if so, prior studies may be drawing inappropriate conclusions about the rationale for ISO 14001 certification. Drawing on institutional theory and the resource-based view of the firm, this study offers a conceptual framework that explains why parent companies would mandate—rather than simply encourage—their operational units to certify to ISO 14001. The framework is tested using survey data of corporate environmental managers. The results show that firms have a central role in nearly half of all facility-level certifications and that firms that mandate ISO 14001 endure greater external pressures and have stronger complementary resources and capabilities that support their organization-wide ISO 14001 policies.
Training generative neural networks via Maximum Mean Discrepancy optimization
We consider training a deep neural network to generate samples from an unknown distribution given i.i.d. data. We frame learning as an optimization minimizing a two-sample test statistic—informally speaking, a good generator network produces samples that cause a two-sample test to fail to reject the null hypothesis. As our two-sample test statistic, we use an unbiased estimate of the maximum mean discrepancy, which is the centerpiece of the nonparametric kernel two-sample test proposed by Gretton et al. [2]. We compare to the adversarial nets framework introduced by Goodfellow et al. [1], in which learning is a two-player game between a generator network and an adversarial discriminator network, both trained to outwit the other. From this perspective, the MMD statistic plays the role of the discriminator. In addition to empirical comparisons, we prove bounds on the generalization error incurred by optimizing the empirical MMD.
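The unbiased squared-MMD estimator with a Gaussian kernel, used directly as a training loss, can be written compactly; the bandwidth and the toy batches below are illustrative:

```python
import torch

def mmd2_unbiased(X, Y, sigma=1.0):
    """Unbiased estimate of squared MMD with a Gaussian kernel,
    following the Gretton et al. two-sample statistic cited above."""
    def k(A, B):
        d2 = torch.cdist(A, B).pow(2)
        return torch.exp(-d2 / (2 * sigma ** 2))
    m, n = len(X), len(Y)
    Kxx, Kyy, Kxy = k(X, X), k(Y, Y), k(X, Y)
    # Drop diagonal terms for the unbiased estimator.
    sum_xx = (Kxx.sum() - Kxx.diagonal().sum()) / (m * (m - 1))
    sum_yy = (Kyy.sum() - Kyy.diagonal().sum()) / (n * (n - 1))
    return sum_xx + sum_yy - 2 * Kxy.mean()

# Using the statistic as a loss: generated batch vs. data batch.
gen = torch.randn(64, 2, requires_grad=True)
data = torch.randn(64, 2) + 1.0
loss = mmd2_unbiased(gen, data)
loss.backward()   # gradients flow back to the generator samples
print(float(loss))
```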
Design of a highly biomimetic anthropomorphic robotic hand towards artificial limb regeneration
A wide range of research areas, from telemanipulation in robotics to limb regeneration in tissue engineering, could benefit from an anthropomorphic robotic hand that mimics the salient features of the human hand. The challenges of designing such a robotic hand mainly result from our limited understanding of the human hand from an engineering point of view and from our limited ability to replicate the important biomechanical features with conventional mechanical design. We believe that the biomechanics of the human hand is an essential component of hand dexterity and can be replicated with a highly biomimetic design. To this end, we reinterpret the important biomechanical advantages of the human hand from a roboticist's perspective and design a biomimetic robotic hand that closely mimics its human counterpart with artificial joint capsules, crocheted ligaments and tendons, a laser-cut extensor hood, and elastic pulley mechanisms. We experimentally identify the workspaces of the fingertips and successfully demonstrate that our proof-of-concept design can be teleoperated to grasp and manipulate daily objects with a variety of natural hand postures based on hand taxonomy.
The Digital Divide: Current and Future Research Directions
The digital divide refers to the separation between those who have access to digital information and communications technology (ICT) and those who do not. Many believe that universal access to ICT would bring about a global community of interaction, commerce, and learning resulting in higher standards of living and improved social welfare. However, the digital divide threatens this outcome, leading many public policy makers to debate the best way to bridge the divide. Much of the research on the digital divide focuses on first order effects regarding who has access to the technology, but some work addresses the second order effects of inequality in the ability to use the technology among those who do have access. In this paper, we examine both first and second order effects of the digital divide at three levels of analysis: the individual level, the organizational level, and the global level. At each level, we survey the existing research noting the theoretical perspective taken in the work, the research methodology employed, and the key results that were obtained. We then suggest a series of research questions at each level of analysis to guide researchers seeking to further examine the digital divide and how it impacts citizens, managers, and economies.
The ironic spectator: Solidarity in the age of post-humanitarianism [Book Review]
Review(s) of: The ironic spectator: Solidarity in the age of post-humanitarianism, by Chouliaraki, Lilie, Polity Press, Cambridge, 2012, ISBN 9780745642116, 248 pp.
NetDispatcher: A TCP Connection Router
NetDispatcher is a software router of TCP connections that supports load sharing across multiple TCP servers. It consists of the Executor, an operating system kernel extension that supports fast IP packet forwarding, and a user-level Manager process that controls it. The Manager implements a novel dynamic load-sharing algorithm for allocating TCP connections among servers according to their real-time load and responsiveness. This algorithm produces weights that are used by the Executor to quickly select a server for each new connection request. This allocation method was shown to be highly efficient in real tests for large Internet sites serving millions of TCP connections per day. The Executor forwards client TCP packets to the servers without performing any TCP/IP header translations. Outgoing server-to-client packets are not handled by NetDispatcher and can follow a separate network route to the clients. Depending on the workload traffic, the performance benefit of this half-connection method can be significant. Prototypes of NetDispatcher were used to scale up several large and high-load Internet sites.
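The Manager/Executor division of labor can be caricatured in a few lines: weights are recomputed periodically from server measurements, and each new connection is assigned by weighted selection. The weight formula below is a hypothetical placeholder, not the report's algorithm:

```python
import random

def compute_weights(stats):
    # Hypothetical rule: less loaded, more responsive servers get
    # larger weights.
    return {s: 1.0 / (1e-6 + m["load"] * m["latency"])
            for s, m in stats.items()}

def pick_server(weights):
    servers, w = zip(*weights.items())
    return random.choices(servers, weights=w, k=1)[0]

stats = {"srv-a": {"load": 0.2, "latency": 10.0},
         "srv-b": {"load": 0.8, "latency": 30.0}}
weights = compute_weights(stats)   # Manager: recompute periodically
for _ in range(3):
    print(pick_server(weights))    # Executor: per new TCP connection
```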
The role of septal surgery in management of the deviated nose.
The deviated nose represents a complex cosmetic and functional problem. Septal surgery plays a central role in the successful management of the externally deviated nose. This study included 260 patients seeking rhinoplasty to correct external nasal deviations; 75 percent of them had various degrees of nasal obstruction. Septal surgery was necessary in 232 patients (89 percent), not only to improve breathing but also to achieve a straight, symmetrical, external nose as well. A graduated surgical approach was adopted to allow correction of the dorsal and caudal deviations of the nasal septum without weakening its structural support to the dorsum or nasal tip. The approach depended on full mobilization of deviated cartilage, followed by straightening of the cartilage and its fixation in the corrected position by using bony splinting grafts through an external rhinoplasty approach.
Renal function at the time of a myocardial infarction maintains prognostic value for more than 10 years
BACKGROUND Renal function is an important predictor of mortality in patients with myocardial infarction (MI), but changes in the impact over time have not been well described. We examined the importance of renal function, assessed by estimated GFR (eGFR) and serum creatinine, as an independent long-term prognostic factor. METHODS Prospective follow-up of 6653 consecutive MI patients screened for entry in the Trandolapril Cardiac Evaluation (TRACE) study. The patients were analysed by Kaplan-Meier survival analysis, landmark analysis and Cox proportional hazard models. The outcome measure was all-cause mortality. RESULTS An eGFR below 60 ml per minute per 1.73 m², consistent with chronic renal disease, was present in 42% of the patients. We divided the patients into 4 groups according to eGFR. Overall, Cox proportional-hazards models showed that eGFR was a significant prognostic factor in the two groups with the lowest eGFR, with a hazard ratio of 1.72 (confidence interval (CI) 1.56-1.91) in the group with the lowest eGFR. Using the eGFR group with normal renal function as reference, we observed an incremental rise in hazard ratio. We divided the follow-up period into 2-year intervals. Landmark analysis showed that eGFR at the time of screening continued to show prognostic effect until 16 years of follow-up. By multivariable Cox regression analysis, the prognostic effect of eGFR persisted for 12 years and that of serum creatinine for 10 years. When comparing the lowest group of eGFR with the group with normal eGFR, prognostic significance was present in the entire period of follow-up, with a hazard ratio between 1.97 (CI 1.65-2.35) and 1.35 (CI 0.99-1.84) in the 2-year periods. CONCLUSIONS One estimate of renal function is a strong and independent long-term prognostic factor for 10-12 years following an MI.
Prognostic Value of EMT-inducing Transcription Factors (EMT-TFs) in Metastatic Breast Cancer: A Systematic Review and Meta-analysis
The epithelial-to-mesenchymal transition (EMT) is a vital control point in metastatic breast cancer (MBC). TWIST1, SNAIL1, SLUG, and ZEB1, as key EMT-inducing transcription factors (EMT-TFs), are involved in MBC through different signaling cascades. This updated meta-analysis was conducted to assess the correlation between the expression of EMT-TFs and prognosis in MBC patients. A total of 3,218 MBC patients from fourteen eligible studies were evaluated. The pooled hazard ratios (HRs) for EMT-TFs suggested that high EMT-TF expression was significantly associated with poor prognosis in MBC patients (HR = 1.72; 95% confidence interval (CI) = 1.53-1.93; P = 0.001). In addition, under fixed-effects models, the overexpression of SLUG had the greatest impact on the risk of MBC compared with TWIST1 and SNAIL1. Strikingly, the increased risk of MBC was less associated with ZEB1 expression. Moreover, EMT-TF expression levels significantly increased the risk of MBC in the Asian population (HR = 2.11, 95% CI = 1.70-2.62) without any publication bias (t = 1.70, P = 0.11). These findings suggest that the overexpression of TWIST1, SNAIL1, and especially SLUG potentially plays a key role in MBC treatment planning as well as in the improvement of follow-up plans for Asian MBC patients.
Scalable Hardware Trojan Diagnosis
Hardware Trojans (HTs) pose a significant threat to modern and upcoming integrated circuits (ICs). Due to the diversity of HTs and intrinsic process variation (PV) in IC design, detecting and locating HTs is challenging. Several approaches have been proposed to address the problem, but they are either incapable of detecting various types of HTs or unable to handle very large circuits. We have developed a scalable HT detection and diagnosis approach that uses segmentation and gate-level characterization (GLC). We ensure the detection of arbitrary malicious circuitry by measuring the overall leakage current for a set of different input vectors. In order to address the scalability issue, we employ a segmentation method that divides the large circuit into small sub-circuits using input vector selection. We develop a segment selection model in terms of the properties of segments and their effects on GLC accuracy. The model parameters are calibrated using data sampled from the GLC process. Based on the selected segments, we are able to detect and diagnose HTs by tracing gate-level leakage power. We evaluate our approach on several ISCAS85/ISCAS89/ITC99 benchmarks. The simulation results show that our approach is capable of detecting and diagnosing HTs accurately on large circuits.
An Attentional Neural Conversation Model with Improved Specificity
In this paper we propose a neural conversation model for conducting dialogues. We demonstrate the use of this model to generate help desk responses, where users are asking questions about PC applications. Our model is distinguished by two characteristics. First, it models intention across turns with a recurrent network, and incorporates an attention model that is conditioned on the representation of intention. Secondly, it avoids generating nonspecific responses by incorporating an IDF term in the objective function. The model is evaluated both as a pure generation model in which a help-desk response is generated from scratch, and as a retrieval model with performance measured using recall rates of the correct response. Experimental results indicate that the model outperforms previously proposed neural conversation architectures, and that using specificity in the objective function significantly improves performances for both generation and retrieval.
Soft video parsing by label distribution learning
In this paper, we tackle the problem of segmenting out a sequence of actions from videos. The videos contain background and actions, which are usually composed of ordered sub-actions. We refer to the sub-actions and the background as semantic units. Considering the possible overlap between two adjacent semantic units, we propose a bidirectional sliding window method to generate the label distributions for various segments in the video. The label distribution covers a certain number of semantic unit labels, representing the degree to which each label describes the video segment. The mapping from a video segment to its label distribution is then learned by a Label Distribution Learning (LDL) algorithm. Based on the LDL model, a soft video parsing method with segmental regular grammars is proposed to construct a tree structure for the video, in which each leaf stands for a video clip of background or sub-action. The proposed method shows promising results on the THUMOS’14, MSR-II and UCF101 datasets, and its computational complexity is much lower than that of the compared state-of-the-art video parsing method.
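One plausible way to realize such label distributions (a sketch of the general idea, not necessarily the paper's exact bidirectional windowing) is to assign each semantic-unit label a degree proportional to how much of the segment that unit covers:

```python
def label_distribution(seg_start, seg_end, units):
    """units: list of (start, end, label) ground-truth semantic units.

    Returns {label: degree}, degrees summing to 1, where each degree is
    the fraction of the segment overlapped by that unit. Segments that
    straddle a boundary get soft membership in both adjacent labels.
    """
    dist = {}
    seg_len = seg_end - seg_start
    for start, end, label in units:
        overlap = max(0.0, min(seg_end, end) - max(seg_start, start))
        if overlap > 0:
            dist[label] = dist.get(label, 0.0) + overlap / seg_len
    return dist

# Example: a segment straddling the boundary between background and a sub-action.
print(label_distribution(8.0, 12.0, [(0, 10, "background"), (10, 20, "jump")]))
# -> {'background': 0.5, 'jump': 0.5}
```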
Word Embedding Composition for Data Imbalances in Sentiment and Emotion Classification
Text classification often faces the problem of imbalanced training data. This is true in sentiment analysis and particularly prominent in emotion classification, where multiple emotion categories are very likely to produce naturally skewed training data. Different sampling methods have been proposed to improve classification performance by reducing the imbalance ratio between training classes. However, data sparseness and the small-disjunct problem remain obstacles to generating new samples for minority classes when the data are skewed and limited. Methods that produce meaningful samples for smaller classes, rather than simple duplication, are essential to overcoming this problem. In this paper, we present an oversampling method based on word embedding compositionality which produces meaningful balanced training data. We first use a large corpus to train a continuous skip-gram model to form a word embedding model maintaining the syntactic and semantic integrity of the word features. Then, a compositional algorithm based on recursive neural tensor networks is used to construct sentence vectors based on the word embedding model. Finally, we use the SMOTE algorithm as an oversampling method to generate samples for the minority classes and produce a fully balanced training set. Evaluation results on two quite different tasks show that the feature composition method and the oversampling method are both important in obtaining improved classification results. Our method effectively addresses the data imbalance issue and consequently achieves improved results for both sentiment and emotion classification.
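The interpolation step at the core of SMOTE can be sketched directly on the composed sentence vectors: each synthetic minority sample lies on the line between a real sample and one of its nearest minority-class neighbors. A minimal Python sketch (the function name and parameters are illustrative assumptions):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def smote_sentence_vectors(X_min, n_new, k=5, seed=0):
    """Generate n_new synthetic minority-class sentence vectors.

    X_min: (n, d) composed sentence embeddings of the minority class
    (requires n >= k + 1). Each synthetic sample lies on the line between
    a real sample and one of its k nearest minority neighbors (classic SMOTE).
    """
    rng = np.random.default_rng(seed)
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X_min)
    _, idx = nn.kneighbors(X_min)  # idx[:, 0] is the point itself

    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        j = idx[i, rng.integers(1, k + 1)]  # a random true neighbor
        lam = rng.random()                  # interpolation coefficient
        synthetic.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.stack(synthetic)
```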
PLC Forensics Based on Control Program Logic Change Detection
A Supervisory Control and Data Acquisition (SCADA) system is an automated industrial control system built from multiple Programmable Logic Controllers (PLCs). A PLC is a special form of microprocessor-based controller with a proprietary operating system. Due to the unique architecture of PLCs, traditional digital forensic tools are difficult to apply. In this paper, we propose a program called Control Program Logic Change Detector (CPLCD), which works with a set of Detection Rules (DRs) to detect and record undesired incidents that interfere with the normal operation of a PLC. To demonstrate the feasibility of our solution, we set up two experiments detecting two common PLC attacks. Moreover, we illustrate how CPLCD and the network analyzer Wireshark can work together to perform a digital forensic investigation on a PLC.
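The paper's Detection Rules are not spelled out in the abstract; one loose sketch of a logic-change detector in their spirit is to hash snapshots of the control-program blocks and flag any divergence from a trusted baseline (all names and the ladder-logic bytes below are hypothetical):

```python
import hashlib
import time

def snapshot(blocks):
    """Hash each control-program block (name -> bytes)."""
    return {name: hashlib.sha256(code).hexdigest() for name, code in blocks.items()}

def detect_logic_changes(baseline, current):
    """Hypothetical detection rule: flag any block whose hash differs
    from the trusted baseline snapshot, recording a timestamped incident."""
    incidents = []
    for name, digest in current.items():
        if baseline.get(name) != digest:
            incidents.append({"time": time.time(), "block": name,
                              "incident": "control program logic change"})
    return incidents

# Usage: baseline from a known-good upload, current from the live PLC.
good = {"main_routine": b"LD X0\nOUT Y0\n"}
tampered = {"main_routine": b"LD X0\nOUT Y1\n"}  # attacker changed output coil
print(detect_logic_changes(snapshot(good), snapshot(tampered)))
```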
Using Educational Data Mining Methods to Study the Impact of Virtual Classroom in E-Learning
In the past few years, Iranian universities have adopted e-learning tools and technologies to extend and improve their educational services. After a few years of conducting e-learning programs, a debate arose among the executives and managers of e-learning institutes concerning which activities have the most influence on the learning progress of online students. This research investigates the impact of a number of e-learning activities on students’ learning development. The results show that participation in virtual classroom sessions has the most substantial impact on students’ final grades. This paper presents the process of applying data mining methods to the web usage records of students’ activities in a virtual learning environment. The main idea is to rank the learning activities by their importance so that students’ performance can be improved by focusing on the most important ones.
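Ranking learning activities by their influence on final grades can be approximated with any feature-importance method; a sketch using a random forest over hypothetical activity features and synthetic grades (not the study's data):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)

# Hypothetical web-usage features per student (session counts).
features = ["virtual_class_sessions", "forum_posts", "content_views", "quiz_attempts"]
X = rng.poisson(lam=[12, 5, 30, 8], size=(200, 4)).astype(float)
# Synthetic final grades in which class attendance matters most.
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + 0.2 * X[:, 2] + rng.normal(0, 5, 200)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
for name, importance in sorted(zip(features, model.feature_importances_),
                               key=lambda pair: -pair[1]):
    print(f"{name}: {importance:.3f}")  # ranked activity importance
```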
Intelligent control scheme for twin rotor MIMO system
This paper presents a control problem involving an experimental propeller setup called the twin rotor multi-input multi-output system (TRMS). The control objective is to make the beam of the TRMS move quickly and accurately to desired attitudes, in both the pitch angle and the azimuth angle, under decoupling between the two axes. Designing a suitable controller is difficult because of the cross-coupling between the two axes and the nonlinear dynamics. To treat the vertical and horizontal planes separately, the TRMS is decoupled into the main-rotor and tail-rotor subsystems. An intelligent control scheme utilizing a hybrid PID controller is applied to this problem. Simulation results show that the new approach to the TRMS control problem improves tracking performance and reduces control energy.
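For reference, the PID component of such a hybrid scheme follows the textbook discrete form; a minimal sketch with one controller per decoupled axis (gains and setpoints are illustrative, not the paper's tuned values):

```python
class PID:
    """Textbook discrete PID; gains are illustrative, not the paper's tuning."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# One controller per decoupled axis (pitch from the main rotor, azimuth from the tail rotor).
pitch_pid = PID(kp=5.0, ki=0.8, kd=1.2, dt=0.01)
yaw_pid = PID(kp=3.0, ki=0.5, kd=0.9, dt=0.01)
u_pitch = pitch_pid.update(setpoint=0.4, measurement=0.1)  # rad
```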
Cross-modal adaptation for RGB-D detection
In this paper we propose a technique to adapt convolutional neural network (CNN) based object detectors trained on RGB images to effectively leverage depth images at test time to boost detection performance. Given labeled depth images for a handful of categories, we adapt an RGB object detector for a new category such that it can use depth images in addition to RGB images at test time to produce more accurate detections. Our approach is built upon the observation that lower layers of a CNN are largely task- and category-agnostic but domain-specific, while higher layers are largely task- and category-specific but domain-agnostic. We operationalize this observation by proposing a mid-level fusion of RGB and depth CNNs. Experimental evaluation on the challenging NYUD2 dataset shows that our proposed adaptation technique yields an average 21% relative improvement in detection performance over an RGB-only baseline, even when no depth training data is available for the particular category evaluated. We believe our proposed technique will extend advances made in computer vision to RGB-D data, leading to improvements in performance with little additional annotation effort.
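The mid-level fusion described can be sketched as separate low-level towers per modality whose mid-layer feature maps are concatenated and fed to shared higher layers. A simplified PyTorch sketch under that reading (layer sizes and the 3-channel depth encoding are assumptions, not the authors' exact architecture):

```python
import torch
import torch.nn as nn

class MidLevelFusion(nn.Module):
    """Separate low-level towers per modality; fused, shared high-level head."""
    def __init__(self, n_classes=20):
        super().__init__()
        def tower():  # low layers: domain-specific, category-agnostic
            return nn.Sequential(
                nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            )
        self.rgb_tower = tower()
        self.depth_tower = tower()  # depth encoded as 3 channels (e.g., HHA)
        self.head = nn.Sequential(  # high layers: category-specific, domain-agnostic
            nn.Conv2d(128, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, n_classes),
        )

    def forward(self, rgb, depth):
        fused = torch.cat([self.rgb_tower(rgb), self.depth_tower(depth)], dim=1)
        return self.head(fused)

# Usage with dummy tensors.
model = MidLevelFusion()
scores = model(torch.randn(2, 3, 64, 64), torch.randn(2, 3, 64, 64))
```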
Dropout Improves Recurrent Neural Networks for Handwriting Recognition
Recurrent neural networks (RNNs) with Long Short-Term Memory cells currently hold the best known results in unconstrained handwriting recognition. We show that their performance can be greatly improved using dropout, a recently proposed regularization method for deep architectures. While previous work showed that dropout gives superior performance in the context of convolutional networks, it had never been applied to RNNs. In our approach, dropout is used carefully in the network so that it does not affect the recurrent connections; hence the power of RNNs in modeling sequences is preserved. Extensive experiments on a broad range of handwritten databases confirm the effectiveness of dropout on deep architectures even when the network mainly consists of recurrent and shared connections.
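The key constraint, applying dropout only to non-recurrent connections, can be sketched by stacking LSTM layers and dropping activations only between layers, never inside the recurrence. A minimal PyTorch sketch (dimensions illustrative):

```python
import torch
import torch.nn as nn

class DropoutBetweenLayers(nn.Module):
    """Stacked LSTMs with dropout applied only to the feed-forward
    connections between layers, never to the recurrent connections,
    so the network's sequence memory is left intact."""
    def __init__(self, in_dim, hidden, n_layers=3, p=0.5, n_classes=80):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.LSTM(in_dim if i == 0 else hidden, hidden, batch_first=True)
            for i in range(n_layers)
        )
        self.drop = nn.Dropout(p)
        self.out = nn.Linear(hidden, n_classes)

    def forward(self, x):
        for lstm in self.layers:
            x, _ = lstm(x)
            x = self.drop(x)  # vertical (non-recurrent) connections only
        return self.out(x)

model = DropoutBetweenLayers(in_dim=48, hidden=128)
logits = model(torch.randn(4, 100, 48))  # (batch, time, features)
```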
Modeling and Simulation of DC Microgrids for Electric Vehicle Charging Stations
This paper focuses on the evaluation of theoretical and numerical aspects related to an original DC microgrid power architecture for efficient charging of plug-in electric vehicles (PEVs). The proposed DC microgrid is based on photovoltaic array (PVA) generation, electrochemical storage, and grid connection; it is assumed that PEVs have direct access to their DC charger input. As opposed to conventional power architecture designs, the PVA is coupled directly to the DC link without a static converter, which implies no DC voltage stabilization, increases energy efficiency, and reduces control complexity. Based on a real-time rule-based algorithm, the proposed power management allows self-consumption according to PVA power production and storage constraints, with the public grid seen only as back-up. The first phase of modeling aims to evaluate the main energy flows within the proposed DC microgrid architecture and to identify the control structure and the power management strategies. For this, an original model is obtained by applying the Energetic Macroscopic Representation formalism, which allows deducing the control design using the Maximum Control Structure. The second phase of simulation is based on the numerical characterization of the DC microgrid components and the energy management strategies, which consider the power source requirements, charging times of different PEVs, electrochemical storage ageing, and grid power limitations for injection mode. The simulation results show the validity of the model and the feasibility of the proposed DC microgrid power architecture, which presents good performance in terms of total efficiency and simplified control.
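The rule-based self-consumption policy described (PV first, storage within state-of-charge limits, grid as back-up) can be sketched as a simple per-step dispatch rule; all thresholds below are hypothetical, not the paper's calibrated values:

```python
def dispatch(pv_power, ev_demand, soc, soc_min=0.2, soc_max=0.9, p_batt_max=10.0):
    """One step of a rule-based DC-microgrid dispatch (kW; SOC in [0, 1]).

    Sign conventions: batt > 0 charges storage, grid > 0 imports from grid.
    Priority: PV covers the EV chargers; surplus charges storage within SOC
    limits; deficits are met from storage; the grid acts only as back-up.
    """
    balance = pv_power - ev_demand  # > 0: surplus, < 0: deficit
    if balance > 0 and soc < soc_max:
        batt = min(balance, p_batt_max)      # charge storage with surplus
    elif balance < 0 and soc > soc_min:
        batt = max(balance, -p_batt_max)     # discharge storage to cover deficit
    else:
        batt = 0.0
    grid = batt - balance                    # remainder imported (or injected if < 0)
    return batt, grid

print(dispatch(pv_power=12.0, ev_demand=7.0, soc=0.5))  # -> (5.0, 0.0)
```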
Bulk Current Injection in Twisted Wire Pairs With Not Perfectly Balanced Terminations
In this paper, common-mode (CM) and differential-mode (DM) voltages induced by bulk current injection at the terminations of a differential interconnection are predicted and correlated to the degree of unbalance of the terminal units. This is done by extending a previously derived lumped-parameter representation of the injection probe and by combining modal analysis with multiconductor transmission line theory. Under nonrestrictive assumptions on termination unbalance, it is shown that CM and DM voltages can be readily predicted by two equivalent circuits. The circuit for CM prediction is independent of termination unbalance and is directly excited by the injection probe. The circuit for DM prediction does not involve any of the probe model parameters and is driven by two voltage sources related to the induced CM voltages through the common-mode rejection ratio of each termination. Model accuracy was verified by measurements carried out on a test bench set up according to the BCI standards.
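For reference, the modal quantities involved follow the standard definitions; a sketch of the conventional relations (not the paper's full derivation):

```latex
V_{\mathrm{CM}} = \frac{V_1 + V_2}{2}, \qquad
V_{\mathrm{DM}} = V_1 - V_2,
```

and the coupling between the two equivalent circuits described above corresponds to driving the DM network, at each termination, with a source on the order of $V_{\mathrm{DM}} \approx V_{\mathrm{CM}} / \mathrm{CMRR}$.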
Smart Cities at Risk!: Privacy and Security Borderlines from Social Networking in Cities
As smart city infrastructures mature, data becomes a valuable asset that can radically improve city services and tools. The registration, acquisition, and utilization of data, which will be transformed into smart services, are becoming more necessary than ever. Online social networks, with their enormous momentum, are one of the main sources of urban data, offering heterogeneous real-time data at minimal cost. However, various types of attacks often target them, putting users' privacy at risk and affecting their online trust. The purpose of this article is to investigate how risks on online social networks affect smart cities and to study the differences between privacy and security threats with regard to the smart-people and smart-living dimensions.
Deep convolutional neural network based large-scale oil palm tree detection for high-resolution remote sensing images
This paper proposes a deep convolutional neural network (DCNN) based framework for large-scale oil palm tree detection using high-resolution remote sensing images in Malaysia. Unlike previous palm tree or tree crown detection studies, the palm trees in our study area are very crowded and their crowns often overlap. Moreover, there are various land cover types in our study area, e.g., impervious surfaces, bare land, and other vegetation. The main steps of our proposed method include large-scale and multi-class sample collection, AlexNet-based DCNN training and optimization, sliding window-based label prediction, and post-processing. Compared with the manually interpreted ground truth, our proposed method achieves detection accuracies of 92%–97% in our study area, substantially higher than the accuracies obtained with the two other detection methods used in this paper.
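The sliding window-based label prediction stage can be sketched as scanning the image with a fixed stride, classifying each patch with the trained network, and recording the centers of windows labeled as palm; the window size and stride below are illustrative, and `classify` stands in for the trained DCNN:

```python
def sliding_window_predict(image, classify, win=64, stride=32):
    """Slide a window over a large image and collect per-window detections.

    image: (H, W, 3) array; classify: a callable wrapping the trained CNN,
    mapping a window to a class label (e.g., 'palm' vs. background).
    Returns the centers of windows predicted as palm crowns.
    """
    detections = []
    h, w = image.shape[:2]
    for y in range(0, h - win + 1, stride):
        for x in range(0, w - win + 1, stride):
            patch = image[y:y + win, x:x + win]
            if classify(patch) == "palm":
                detections.append((x + win // 2, y + win // 2))  # crown center
    return detections  # post-processing (e.g., merging nearby hits) follows
```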
Relational coordination promotes quality of chronic care delivery in Dutch disease-management programs.
BACKGROUND Previous studies have shown that relational coordination is positively associated with the delivery of hospital care, acute care, emergency care, trauma care, and nursing home care. The effect of relational coordination in primary care settings, such as disease-management programs, remains unknown. PURPOSE This study examined relational coordination between general practitioners and other professionals in disease-management programs and assessed the impact of relational coordination on the delivery of chronic illness care. METHODOLOGY Professionals (n = 188; response rate = 57%) in 19 disease-management programs located throughout the Netherlands completed surveys that assessed relational coordination and chronic care delivery. We used a cross-sectional study design. FINDINGS Our study demonstrated that the delivery of chronic illness care was positively related to relational coordination. We found positive relationships with community linkages (r = .210, p < .01), self-management support (r = .217, p < .01), decision support (r = .190, p < .01), delivery system design (r = .278, p < .001), and clinical information systems (r = .193, p < .01). Organization of the health delivery system was not significantly related to relational coordination. The regression analyses showed that even after controlling for all background variables, relational coordination still significantly affected chronic care delivery (β = .212, p ≤ .01). As expected, our findings showed a lower degree of relational coordination among general practitioners than between general practitioners and other core disease-management team members: practice nurses (M = 2.69 vs. 3.73; p < .001), dieticians (M = 2.69 vs. 3.07; p < .01), physical therapists (M = 2.69 vs. 3.06; p < .01), medical specialists (M = 2.69 vs. 3.16; p < .01), and nurse practitioners (M = 2.69 vs. 3.19; p < .001). PRACTICE IMPLICATIONS The enhancement of relational coordination among core disease-management professionals with different disciplines is expected to improve chronic illness care delivery.
Assessment of cationic dye biosorption characteristics of untreated and non-conventional biomass: Pyracantha coccinea berries.
This work reports on the assessment of the methylene blue dye biosorption properties of Pyracantha coccinea berries under different experimental conditions. Equilibrium and kinetic studies were carried out to determine the biosorption capacity and rate constants. The highest biosorption yield was observed at about pH 6.0, while the biosorption capacity of the biomass decreased with decreasing initial pH values. Batch equilibrium data obtained at different temperatures (15, 25, 35 and 45 °C) were modeled by the Freundlich, Langmuir and Dubinin-Radushkevich (D-R) isotherms. The Langmuir isotherm model fitted the equilibrium data better than the other isotherm models at all studied temperatures, indicating a monolayer dye biosorption process. The highest monolayer biosorption capacity was found to be 127.50 mg/g dry biomass at 45 °C. Kinetic studies indicate that the biosorption process followed the pseudo-second-order model rather than the pseudo-first-order model. The ΔG°, ΔH° and ΔS° parameters of biosorption show that the process is spontaneous and endothermic in nature. The biosorbent-dye interaction mechanisms were investigated using a combination of Fourier transform infrared spectroscopy and scanning electron microscopy. The biosorption procedure was applied to simulated wastewater containing several pollutants. The results indicate that the suggested inexpensive and readily available biomaterial has good potential for the biosorptive removal of basic dye.
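Fitting the Langmuir isotherm named above is a standard nonlinear regression of qe = qmax * KL * Ce / (1 + KL * Ce); a minimal sketch with scipy and hypothetical equilibrium data (not the paper's measurements):

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(ce, qmax, kl):
    """Langmuir isotherm: qe = qmax * KL * Ce / (1 + KL * Ce)."""
    return qmax * kl * ce / (1 + kl * ce)

# Hypothetical equilibrium data (Ce in mg/L, qe in mg/g), not the paper's.
ce = np.array([5.0, 10.0, 25.0, 50.0, 100.0, 200.0])
qe = np.array([30.0, 52.0, 85.0, 105.0, 118.0, 124.0])

(qmax, kl), _ = curve_fit(langmuir, ce, qe, p0=(120.0, 0.05))
print(f"qmax = {qmax:.1f} mg/g, KL = {kl:.4f} L/mg")
```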
The Human Papillomavirus (HPV) in Human Pathology: Description, Pathogenesis, Oncogenic Role, Epidemiology and Detection Techniques
Persistent infection by human papillomavirus (HPV) is considered to be the main causative agent of cervical cancer and other anogenital cancers. Of the more than 30 genotypes capable of infecting the anogenital tract, it is estimated that, worldwide, HPV 16 and 18 cause 70 percent of cervical cancers. So far, more than 100 types and subtypes of HPV have been wholly or partially sequenced. Approximately 40 types have been isolated from lesions of the lower genital tract, and between 15 and 20, according to different studies, have been detected in carcinomas. According to their oncogenic risk, they are classified as low-risk HPV (LR-HPV) and high-risk HPV (HR-HPV). We must take into account that certain viral types may appear in cancerous lesions as a result of a co-infection and not be the causative etiologic agents of tumour transformation. As is logical, epidemiological studies attribute important population variations to the prevalence and cause/effect of different viral types; however, there is no doubt about the high prevalence or involvement of types 16 and 18 in high-grade pathologies and carcinomas in our population. The detection of HPV DNA through molecular biology techniques, regardless of the method used, is based on the specificity of the complementarity among nucleic acids. A DNA sequence has the ability to hybridise with other DNAs or RNAs so specifically that, at a certain temperature, hybrids form only if 100 percent of the bases are complementary. The way of detecting these hybrids, the composition of the DNA probes, and the presence or absence of signal amplification mark the difference between the various detection techniques. The assessment of viral load, integration, and other molecular parameters is shaping up as an excellent complementary diagnostic tool in daily clinical practice.