Columns: title (string, 8–300 characters), abstract (string, 0–10k characters)
Subcallosal cingulate deep brain stimulation for treatment-resistant depression: a multisite, randomised, sham-controlled trial.
BACKGROUND Deep brain stimulation (DBS) of the subcallosal cingulate white matter has shown promise as an intervention for patients with chronic, unremitting depression. To test the safety and efficacy of DBS for treatment-resistant depression, a prospective, randomised, sham-controlled trial was conducted. METHODS Participants with treatment-resistant depression were implanted with a DBS system targeting bilateral subcallosal cingulate white matter and randomised to 6 months of active or sham DBS, followed by 6 months of open-label subcallosal cingulate DBS. Randomisation was computer generated with a block size of three at each site before the site started the study. The primary outcome was frequency of response (defined as a 40% or greater reduction in depression severity from baseline) averaged over months 4-6 of the double-blind phase. A futility analysis was performed when approximately half of the proposed sample received DBS implantation and completed the double-blind phase. At the conclusion of the 12-month study, a subset of patients were followed up for up to 24 months. The study is registered at ClinicalTrials.gov, number NCT00617162. FINDINGS Before the futility analysis, 90 participants were randomly assigned to active (n=60) or sham (n=30) stimulation between April 10, 2008, and Nov 21, 2012. Both groups showed improvement, but there was no statistically significant difference in response during the double-blind, sham-controlled phase (12 [20%] patients in the stimulation group vs five [17%] patients in the control group). 28 patients experienced 40 serious adverse events; eight of these (in seven patients) were deemed to be related to the study device or surgery. INTERPRETATION This study confirmed the safety and feasibility of subcallosal cingulate DBS as a treatment for treatment-resistant depression but did not show statistically significant antidepressant efficacy in a 6-month double-blind, sham-controlled trial. Future studies are needed to investigate factors such as clinical features or electrode placement that might improve efficacy. FUNDING Abbott (previously St Jude Medical).
Development of Kinect based teleoperation of Nao robot
In this paper, an online tracking system is developed to control the arm and head of a Nao robot using a Kinect sensor. The main goal of this work is to enable the robot to follow the motion of a human user in real time. This objective has been achieved using an RGB-D camera (Kinect v2) and a Nao robot, a humanoid robot with 5 degrees of freedom (DOF) in each arm. The joint motions of the operator's head and arm captured by the Kinect camera are mapped into the robot's workspace mathematically via forward and inverse kinematics, and transmitted in real time over a UDP connection between the Kinect sensor and the robot. Experimental results demonstrate the satisfactory performance of the proposed approach.
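To make the retargeting pipeline concrete, here is a minimal sketch (not the authors' implementation) that estimates one joint angle from a hypothetical Kinect skeleton frame and streams it over UDP; the joint names, frame format, IP address and port are assumptions, and a real system would add the remaining joints plus the full forward/inverse kinematics mapping to the Nao workspace.

```python
import socket
import json
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b (radians) formed by 3-D points a-b-c, e.g. shoulder-elbow-wrist."""
    v1, v2 = np.asarray(a) - np.asarray(b), np.asarray(c) - np.asarray(b)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9)
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

# Hypothetical Kinect skeleton frame: joint name -> (x, y, z) in metres.
frame = {"ShoulderRight": (0.20, 0.40, 1.9),
         "ElbowRight":    (0.35, 0.25, 1.9),
         "WristRight":    (0.50, 0.30, 1.8)}

elbow_roll = joint_angle(frame["ShoulderRight"], frame["ElbowRight"], frame["WristRight"])

# Stream the retargeted joint command to a robot-side controller over UDP
# (endpoint and message format are illustrative assumptions).
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(json.dumps({"RElbowRoll": elbow_roll}).encode(), ("192.168.1.12", 9999))
```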
Emotion regulation and culture: are the social consequences of emotion suppression culture-specific?
Emotional suppression has been associated with generally negative social consequences (Butler et al., 2003; Gross & John, 2003). A cultural perspective suggests, however, that these consequences may be moderated by cultural values. We tested this hypothesis in a two-part study, and found that, for Americans holding Western-European values, habitual suppression was associated with self-protective goals and negative emotion. In addition, experimentally elicited suppression resulted in reduced interpersonal responsiveness during face-to-face interaction, along with negative partner-perceptions and hostile behavior. These deleterious effects were reduced when individuals with more Asian values suppressed, and these reductions were mediated by cultural differences in the responsiveness of the suppressors. These findings suggest that many of suppression's negative social impacts may be moderated by cultural values.
Building a corpus of "real" texts for deception detection
Text-based deception detection is currently emerging as a vital multidisciplinary field due to its indisputable theoretical and practical value (police, security, and customs, including predatory communications such as Internet scams). A very important issue associated with deception detection is designing valid text corpora. Most research has been carried out using texts produced in laboratory settings. There is a lack of "real" deceptive texts written when the stakes for deception are high, as they are obviously difficult to collect and label. In addition, studies in text-based deception detection have mostly been performed for Romance and Germanic languages; there are few studies dealing with deception detection in Slavic languages. This paper gives an overview of available text corpora used for studying text-based deception detection and describes how the first corpus of "real" deceptive texts for Slavic languages was collected and labeled. We expect this corpus, once finished, to be a handy tool for developing and testing new deception detection techniques and for carrying out related cross-cultural studies.
Randomized controlled trial of pulsating cupping (pneumatic pulsation therapy) for chronic neck pain.
BACKGROUND Pneumatic pulsation therapy may combine the effects of cupping therapy and massage. This study investigated the effect of pneumatic pulsation therapy on chronic neck pain compared to standard medical care. METHODS 50 patients (79.15% female; 46.17 ± 12.21 years) with chronic nonspecific neck pain were randomized to treatment group (TG; n = 25) or control group (CG; n = 25). The TG received 5 pneumatic pulsation treatments over a period of 2 weeks utilizing a mechanical device. Treatment was applied as a combination of moving and stationary pulsating cupping. Main outcome measure was pain intensity in pain diaries (numerical rating scale). Secondary outcome measures included functional disability (NDI), quality of life (SF-36), and pain at motion. Sensory thresholds, including pressure pain threshold, were measured at pain-related sites. RESULTS After the intervention, significant group differences occurred regarding pain intensity (baseline: 4.12 ± 1.45 in TG and 4.20 ± 1.57 in CG; post-intervention: 2.72 ± 1.62 in TG and 4.44 ± 1.96 in CG; analysis of covariance: p = 0.001), NDI (baseline: 25.92 ± 8.23 and 29.83; post-intervention: 20.44 ± 10.17 and 28.83; p = 0.025), and physical quality of life (baseline: 43.85 ± 7.65 and 41.66 ± 7.09; post-intervention: 47.60 ± 7.93 and 40.49 ± 8.03; p = 0.002). Further significant group differences were found for pain at motion (p = 0.004) and pressure pain threshold (p = 0.002). No serious adverse events were reported. CONCLUSION Pneumatic pulsation therapy appears to be a safe and effective method to relieve pain and to improve function and quality of life in patients with chronic neck pain.
Decision tree rule-based feature selection for large-scale imbalanced data
A class imbalance problem often appears in many real-world applications, e.g., fault diagnosis, text categorization, and fraud detection. When dealing with a large-scale imbalanced dataset, feature selection becomes a great challenge. To confront it, this work proposes a feature selection approach based on a decision tree rule. The effectiveness of the proposed approach is verified by classifying a large-scale dataset from Santander Bank. The results show that our approach achieves a higher Area Under the Curve (AUC) in less computational time. We also compare it with filter-based feature selection approaches, i.e., Chi-Square and F-statistic; our approach outperforms them but requires slightly more computation.
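Since the abstract does not spell out the decision tree rule itself, the sketch below only illustrates the general idea under stated assumptions: fit a class-weighted tree on imbalanced (synthetic) data, keep the features with non-zero impurity-based importance, and compare AUC before and after selection.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic imbalanced data: only features 3 and 17 carry signal.
rng = np.random.default_rng(0)
X = rng.normal(size=(20000, 200))
y = (X[:, 3] + 0.5 * X[:, 17] + rng.normal(scale=2.0, size=20000) > 2.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

# Class-weighted tree; keep only features the tree actually uses.
tree = DecisionTreeClassifier(max_depth=6, class_weight="balanced", random_state=0).fit(X_tr, y_tr)
selected = np.flatnonzero(tree.feature_importances_ > 0)

# Check that AUC is preserved (or improved) with the reduced feature set.
clf = LogisticRegression(max_iter=1000, class_weight="balanced")
auc_all = roc_auc_score(y_te, clf.fit(X_tr, y_tr).predict_proba(X_te)[:, 1])
auc_sel = roc_auc_score(y_te, clf.fit(X_tr[:, selected], y_tr).predict_proba(X_te[:, selected])[:, 1])
print(len(selected), auc_all, auc_sel)
```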
JBits: Java based interface for reconfigurable computing
The JBits™ software is a set of Java™ classes which provide an Application Programming Interface (API) to access the Xilinx FPGA bitstream. The interface operates either on bitstreams generated by Xilinx design tools or on bitstreams read back from actual hardware. This permits all configurable resources in the FPGA, such as look-up tables, routing, and flip-flops, to be individually configured under software control.
The efficacy of a volunteer-administered cognitive stimulation program in long-term care homes.
BACKGROUND Cognitive impairment (CI) that arises in some older adults limits independence and decreases quality of life. Cognitive stimulation programs delivered by professional therapists have been shown to help maintain cognitive abilities, but the costs of such programming are prohibitive. The present study explored the feasibility and efficacy of using long-term care homes' volunteers to administer a cognitive stimulation program to residents. METHODS Thirty-six resident participants and 16 volunteers were alternately assigned to one of two parallel groups: a control group (CG) or stimulation group (SG). For eight weeks, three times each week, CG participants met for standard "friendly visits" (casual conversation between a resident and volunteer) and SG participants met to work through a variety of exercises to stimulate residents' reasoning, attention, and memory abilities. Resident participants were pre- and post-tested using the Wechsler Abbreviated Scale of Intelligence-Second Edition, the Test of Memory and Learning-Senior Edition, a modified Letter Sorting test (LS), the Clock Drawing Test (CDT), and the Action Word Verbal Fluency Test. RESULTS Two-way analyses of covariance (ANCOVA) controlling for dementia diagnosis indicated statistically greater improvements in the stimulation participants than in the control participants in Immediate Verbal Memory, p = 0.011; Non-Verbal Memory, p = 0.012; Learning, p = 0.016; and Verbal Fluency, p = 0.024. CONCLUSIONS The feasibility and efficacy of a volunteer-administered cognitive stimulation program were demonstrated. Longitudinal studies with larger sample sizes are recommended in order to continue investigating the breadth and depth of volunteer roles in the maintenance of the cognitive abilities of older adults.
Distracted driver behaviors and distracting conditions among adolescent drivers: findings from a naturalistic driving study.
PURPOSE The proliferation of new communication technologies and capabilities has prompted concern about driving safety. This concern is particularly acute for inexperienced adolescent drivers. In addition to being early adopters of technology, many adolescents have not achieved the degree of automaticity in driving that characterizes experienced adults. Consequently, distractions may be more problematic in this group. Yet little is known about the nature or prevalence of distracted driving behaviors or distracting conditions among adolescent drivers. METHOD Vehicles of 52 high-school age drivers (N=38 beginners and N=14 more experienced) were equipped for 6 months with unobtrusive event-triggered data recorders that obtain 20-second clips of video, audio, and vehicle kinematic information when triggered. A low recording trigger threshold was set to obtain a sample of essentially random driving segments along with those indicating rough driving behaviors. RESULTS Electronic device use (6.7%) was the most common single type of distracted behavior, followed by adjusting vehicle controls (6.2%) and grooming (3.8%). Most distracted driver behaviors were less frequent when passengers were present. However, loud conversation and horseplay were quite common in the presence of multiple peer passengers. These conditions were associated with looking away from the road, the occurrence of serious events, and, to a lesser extent, rough driving (high g-force events). CONCLUSIONS Common assumptions about adolescent driver distraction are only partially borne out by in-vehicle measurement. The association of passengers with distraction appears more complex than previously realized. The relationship between distractions and serious events differed from the association with rough driving.
An inventory model for deteriorating items with stock-dependent consumption rate and shortages under inflation and time discounting
This paper derives an inventory model for deteriorating items with stock-dependent consumption rate and shortages under inflation and time discounting over a finite planning horizon. We show that the total cost function is convex. With the convexity, a simple solution algorithm is presented to determine the optimal order quantity and the optimal interval of the total cost function. The results are discussed with a numerical example and particular cases of the model are discussed in brief. A sensitivity analysis of the optimal solution with respect to the parameters of the system is carried out.
Improvement of intestinal permeability with alanyl-glutamine in HIV patients: a randomized, double blinded, placebo-controlled clinical trial.
CONTEXT Glutamine is the main source of energy of the enterocyte, and diarrhea and weight loss are frequent in HIV-infected patients. OBJECTIVE To determine the effect of alanyl-glutamine supplementation on intestinal permeability and absorption in these patients. METHODS Randomized, double-blinded, placebo-controlled study using isonitrogenous doses of alanyl-glutamine (24 g/day) and placebo (glycine, 25 g/day) during 10 days. Before and after this nutritional supplementation, lactulose and mannitol urinary excretion were determined by high-performance liquid chromatography. RESULTS Forty-six patients with HIV/AIDS, 36 of whom were male, aged 37.28 ± 3 years (mean ± standard error), were enrolled. Twenty-two and 24 subjects were treated with alanyl-glutamine and with glycine, respectively. In the nine patients in the study protocol who reported diarrhea in the 14 days preceding the beginning of the study, mannitol urinary excretion was significantly lower than in patients who did not report this symptom [median (range): 10.51 (3.01-19.75) vs. 15.37 (3.93-46.73); P = 0.0281] and the lactulose/mannitol ratio was significantly higher [median (range): 0.04 (0.00-2.89) vs. 0.02 (0.00-0.19); P = 0.0317]. There was also a significant increase in mannitol urinary excretion in the group treated with alanyl-glutamine [median (range): 14.38 (8.25-23.98) before vs 21.24 (6.27-32.99) after treatment; n = 14, P = 0.0382]. CONCLUSION Our results suggest that intestinal integrity and absorption are more intensely affected in patients with HIV/AIDS who have recently had diarrhea. Additionally, nutritional supplementation with alanyl-glutamine was associated with an improvement in intestinal absorption.
Brushless alternator in automotive applications
This paper deals with reliability problems of common generator types operating under harsh conditions and shows possible design changes that should increase machine reliability. The contribution is dedicated to the study of brushless alternators for the automotive industry. Problems with the use of common alternator types are described, together with the main benefits and disadvantages of several types of brushless alternators.
Predicting evolutionary patterns of mammalian teeth from development
One motivation in the study of development is the discovery of mechanisms that may guide evolutionary change. Here we report how development governs relative size and number of cheek teeth, or molars, in the mouse. We constructed an inhibitory cascade model by experimentally uncovering the activator–inhibitor logic of sequential tooth development. The inhibitory cascade acts as a ratchet that determines molar size differences along the jaw, one effect being that the second molar always makes up one-third of total molar area. By using a macroevolutionary test, we demonstrate the success of the model in predicting dentition patterns found among murine rodent species with various diets, thereby providing an example of ecologically driven evolution along a developmentally favoured trajectory. In general, our work demonstrates how to construct and test developmental rules with evolutionary predictability in natural systems.
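A short note on the arithmetic behind the one-third observation: assuming the inhibitory cascade makes molar areas change linearly along the jaw (the ratchet described above), the middle tooth necessarily accounts for one third of the total.

```latex
% Let s_1, s_2, s_3 be the areas of the first, second and third molars.
% A linear (arithmetic) progression along the jaw means s_1 + s_3 = 2 s_2, hence
\[
  s_2 = \tfrac{1}{3}\,(s_1 + s_2 + s_3),
\]
% i.e. the second molar is one third of the total molar area irrespective of the
% activator-inhibitor balance that sets the slope of the progression.
```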
Potential Use of Bacillus coagulans in the Food Industry
Probiotic microorganisms are generally considered to beneficially affect host health when used in adequate amounts. Although generally used in dairy products, they are also widely used in various commercial food products such as fermented meats, cereals, baby foods, fruit juices, and ice creams. Among lactic acid bacteria, Lactobacillus and Bifidobacterium are the most commonly used bacteria in probiotic foods, but they are not resistant to heat treatment. Probiotic food diversity is expected to be greater with the use of probiotics that are resistant to heat treatment and gastrointestinal system conditions. Bacillus coagulans (B. coagulans) has recently attracted the attention of researchers and food manufacturers, as it exhibits characteristics of both the Bacillus and Lactobacillus genera. B. coagulans is a spore-forming bacterium that retains its probiotic activity while being resistant to high temperatures. In addition, a large number of studies have been carried out on the low-cost microbial production by B. coagulans of industrially valuable products, such as lactic acid and various enzymes, which have been used in food production. In this review, the importance of B. coagulans in the food industry is discussed. Moreover, some studies on B. coagulans products and the use of B. coagulans as a probiotic in food products are summarized.
Facial Expression Recognition Using Active Appearance Models
A framework for automatic facial expression recognition combining Active Appearance Models (AAM) and Linear Discriminant Analysis (LDA) is proposed. Seven different expressions of several subjects, representing the neutral face and the facial emotions of happiness, sadness, surprise, anger, fear and disgust, were analysed. The proposed solution starts by describing the human face with an AAM model and projecting the appearance parameters into a Fisherspace using LDA to emphasize the different expression categories. Finally, classification is performed based on the Mahalanobis distance.
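A hedged sketch of the classification stage described above, with random placeholders standing in for AAM appearance parameters; the feature dimension, sample counts and pooled-covariance choice are illustrative assumptions rather than the paper's exact setup.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Placeholder "AAM appearance parameters": 7 expression classes, 30 samples each, 40 parameters.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(210, 40))
y_train = np.repeat(np.arange(7), 30)

# Project into a Fisherspace with LDA, then classify by Mahalanobis distance to class means,
# using the pooled covariance of the projected training data.
lda = LinearDiscriminantAnalysis(n_components=6).fit(X_train, y_train)
Z = lda.transform(X_train)
means = np.array([Z[y_train == c].mean(axis=0) for c in range(7)])
cov_inv = np.linalg.pinv(np.cov(Z, rowvar=False))

def classify(x):
    z = lda.transform(x.reshape(1, -1)).ravel()
    d2 = [(z - m) @ cov_inv @ (z - m) for m in means]   # squared Mahalanobis distances
    return int(np.argmin(d2))

print(classify(rng.normal(size=40)))                    # predicted expression index
```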
Capturing Deep Correlations with 2-Way Nets
We present a novel bi-directional mapping deep neural network architecture for the task of matching vectors from two data sources. Our approach employs tied neural network channels to project the two views into a common, maximally correlated space using the Euclidean loss. To achieve maximally correlated projections we built an encoder-decoder framework composed of two parallel networks and incorporated batch-normalization layers and dropout adapted to the model at hand. We show state-of-the-art results on a number of computer vision tasks including MNIST image matching and sentence-image matching on the Flickr8k and Flickr30k datasets.
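The following is a much-simplified sketch of projecting two views into a common space with a Euclidean matching loss, batch normalization and dropout; the layer sizes, optimizer settings and the use of plain parallel MLPs (rather than the paper's tied encoder-decoder channels) are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class ViewEncoder(nn.Module):
    """One channel: projects a single view into the shared space (dimensions are illustrative)."""
    def __init__(self, in_dim, hid_dim=256, out_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hid_dim), nn.BatchNorm1d(hid_dim), nn.ReLU(), nn.Dropout(0.2),
            nn.Linear(hid_dim, out_dim))
    def forward(self, x):
        return self.net(x)

enc_x, enc_y = ViewEncoder(784), ViewEncoder(300)          # e.g. an image view and a text view
opt = torch.optim.Adam(list(enc_x.parameters()) + list(enc_y.parameters()), lr=1e-3)

x, y = torch.randn(32, 784), torch.randn(32, 300)          # a batch of paired samples (toy data)
zx, zy = enc_x(x), enc_y(y)
loss = ((zx - zy) ** 2).sum(dim=1).mean()                  # Euclidean matching loss in the common space
opt.zero_grad(); loss.backward(); opt.step()
```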
The Nature of Phonological Processing and Its Causal Role in the Acquisition of Reading Skills
Three bodies of research that have developed in relative isolation center on each of three kinds of phonological processing: phonological awareness, awareness of the sound structure of language; phonological recoding in lexical access, recoding written symbols into a sound-based representational system to get from the written word to its lexical referent; and phonetic recoding in working memory, recoding written symbols into a sound-based representational system to maintain them efficiently in working memory. In this review we integrate these bodies of research and address the interdependent issues of the nature of phonological abilities and their causal roles in the acquisition of reading skills. Phonological ability seems to be general across tasks that purport to measure the three kinds of phonological processing, and this generality apparently is independent of general cognitive ability. However, the generality of phonological ability is not complete, and there is an empirical basis for distinguishing phonological awareness and phonetic recoding in working memory. Our review supports a causal role for phonological awareness in learning to read, and suggests the possibility of similar causal roles for phonological recoding in lexical access and phonetic recoding in working memory. Most researchers have neglected the probable causal role of learning to read in the development of phonological skills. It is no longer enough to ask whether phonological skills play a causal role in the acquisition of reading skills. The question now is which aspects of phonological processing (e.g., awareness, recoding in lexical access, recoding in working memory) are causally related to which aspects of reading (e.g., word recognition, word analysis, sentence comprehension), at which point in their codevelopment, and what are the directions of these causal relations?
Differentiation of Hepatocellular Carcinoma from Other Hepatic Malignancies in Patients at Risk: Diagnostic Performance of the Liver Imaging Reporting and Data System Version 2014.
Purpose To evaluate the diagnostic performance and interrater reliability of the Liver Imaging Reporting and Data System (LI-RADS) version 2014 in differentiating hepatocellular carcinoma (HCC) from non-HCC malignancy in a population of patients at risk for HCC. Materials and Methods This retrospective HIPAA-compliant institutional review board-approved study was exempt from informed consent. A total of 178 pathology-proven malignant liver masses were identified in 178 patients at risk for HCC but without established extrahepatic malignancy from August 2012 through August 2015. Two readers blinded to pathology findings and clinical follow-up data independently evaluated a liver protocol magnetic resonance or computed tomography study for each lesion and assigned LI-RADS categories, scoring all major and most ancillary features. Statistical analyses included the independent samples t test, χ2 test, Fisher exact test, and Cohen κ. Results This study included 136 HCCs and 42 non-HCC malignancies. Specificity and positive predictive value of an HCC imaging diagnosis (LR-5 or LR-5V) were 69.0% and 90.5%, respectively, for reader 1 (R1) and 88.3% and 95.5%, respectively, for reader 2 (R2). Tumor in vein was a common finding in patients with non-HCC malignancies (R1, 10 of 42 [23.8%]; R2, five of 42 [11.9%]). Exclusion of the LR-5V pathway improved specificity and positive predictive value for HCC to 83.3% and 92.9%, respectively, for R1 (six fewer false-positive findings) and 92.3% and 96.4%, respectively, for R2 (one fewer false-positive finding). Among masses with arterial phase hyperenhancement, the rim pattern was more common among non-HCC malignancies than among HCCs for both readers (R1: 24 of 36 [66.7%] vs 13 of 124 [10.5%], P < .001; R2: 27 of 35 [77.1%] vs 21 of 123 [17.1%], P < .001) (κ = 0.76). Exclusion of rim arterial phase hyperenhancement as a means of satisfying LR-5 criteria also improved specificity and positive predictive value for HCC (R1, two fewer false-positive findings). Conclusion Modification of the algorithmic role of tumor in vein and rim arterial phase hyperenhancement improves the diagnostic performance of LI-RADS version 2014 in differentiating HCC from non-HCC malignancy.
Peak Load Curtailment in a Smart Grid Via Fuzzy System Approach
Among the many significant smart grid initiatives and challenges considered by utilities and within the research community are those associated with energy management and conservation, in particular the management of energy demand during peak load periods. In this paper, a novel method for peak load curtailment using a fuzzy system approach is presented. The proposed method is based on the application of fuzzy logic principles for peak load curtailment in a smart grid environment. The inputs to the system are the utility peak load data consisting of many energy demand scenarios, and the outputs are the demand response power reductions required for load curtailment during the peak load periods. The proposed method considers different peak load profiles and power consumption sources for multiple city regions. Furthermore, it is adaptable for use in many scenarios, such as those encompassing many input sources of power consumption with diverse input parameters of control (i.e., temperature offsets, duty cycle control, etc.) within numerous city regions. Thus, it can be applied to multiple output variables of control.
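A minimal sketch of the fuzzy mapping from regional peak load to a curtailment amount; the membership functions, rule outputs and weighted-average defuzzification below are illustrative assumptions, not the paper's actual rule base or input set (temperature offsets, duty-cycle control, etc.).

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

def curtailment_kw(load_kw, peak_kw=900.0):
    """Map current regional load to a demand-response reduction via two illustrative rules."""
    u = load_kw / peak_kw                                    # normalised load level
    mu_high, mu_critical = tri(u, 0.6, 0.8, 1.0), tri(u, 0.8, 1.0, 1.2)
    rules = np.array([mu_high, mu_critical])                 # rule firing strengths
    outputs = np.array([50.0, 150.0])                        # assumed curtailment per rule (kW)
    return float(np.dot(rules, outputs) / (rules.sum() + 1e-9))   # weighted-average defuzzification

print(curtailment_kw(850.0))   # suggested reduction for one region during a peak period
```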
Climate change. Is weather event attribution necessary for adaptation funding?
International funds created largely for funding climate adaptation programs and projects in developing countries were first legally established through the seventh session of the Conference of the Parties (COP-7) to the United Nations Framework Convention on Climate Change (FCCC) held in 2001 at Marrakesh. In 2009, at COP-15 in Copenhagen, delegates “took note” of a pledge from developed countries to commit U.S. $30 billion for the period 2010–2012, ramping up to $100 billion per annum by 2020, to support a mixture of climate adaptation and mitigation activities in developing countries. International adaptation finance has therefore been, and continues to be, a significant political issue for the FCCC and for international institutions, such as the World Bank, the Global Environment Facility, and regional development banks (1). Yet governance arrangements and allocation principles for these climate adaptation funds remain both underdeveloped and politically contested (2, 3). A Green Climate Fund for disbursing such funds was established at COP-16 in Cancún, and a Transitional Committee is currently developing operational documents for the fund to be adopted at COP-17 in Durban, South Africa, later this year. In this Policy Forum, we challenge claims made by proponents of the new science of weather event attribution that the capability of calculating the odds that specific weather extremes are caused by humans could assist in the allocation of international climate adaptation funds. By claiming such a new guiding principle, for example, in the context of the emerging loss and damage agenda (4), these proponents may unwittingly be providing convenient cover for those who would rather continue debating adaptation allocation principles than securing and investing new funds (see the photo). Instead, we argue that adaptation funding should continue to be distributed according to development needs and adaptive capacity, as negotiated by the FCCC over many years and as is currently practiced.
PlatEMO: A MATLAB Platform for Evolutionary Multi-Objective Optimization [Educational Forum]
Over the last three decades, a large number of evolutionary algorithms have been developed for solving multi-objective optimization problems. However, there is a lack of an up-to-date and comprehensive software platform for researchers to properly benchmark existing algorithms and for practitioners to apply selected algorithms to solve their real-world problems. The demand for such a common tool becomes even more urgent when the source code of many proposed algorithms has not been made publicly available. To address these issues, we have developed a MATLAB platform for evolutionary multi-objective optimization in this paper, called PlatEMO, which includes more than 50 multi-objective evolutionary algorithms and more than 100 multi-objective test problems, along with several widely used performance indicators. With a user-friendly graphical user interface, PlatEMO enables users to easily compare several evolutionary algorithms at one time and collect statistical results in Excel or LaTeX files. More importantly, PlatEMO is completely open source, such that users are able to develop new algorithms on the basis of it. This paper introduces the main features of PlatEMO and illustrates how to use it for performing comparative experiments, embedding new algorithms, creating new test problems, and developing performance indicators. Source code of PlatEMO is now available at: http://bimk.ahu.edu.cn/index.php?s=/Index/Software/index.html.
Mortality experience of haematite mine workers in China.
The mortality risk of iron ore (haematite) miners between 1970 and 1982 was investigated in a retrospective cohort study of workers from two mines, Longyan and Taochong, in China. The cohort was limited to men and consisted of 5406 underground miners and 1038 unexposed surface workers. Among the 490 underground miners who died, 205 (42%) died of silicosis and silicotuberculosis and 98 (20%) of cancer, including 29 cases (5.9%) of lung cancer. The study found an excess risk of non-malignant respiratory disease and of lung cancer among haematite miners. The standardised mortality ratio for lung cancer compared with nationwide male population rates was significantly raised (SMR = 3.7), especially for those miners who were first employed underground before mechanical ventilation and wet drilling were introduced (SMR = 4.8); with jobs involving heavy exposure to dust, radon, and radon daughters (SMR = 4.2); with a history of silicosis (SMR = 5.3); and with silicotuberculosis (SMR = 6.6). No excess risk of lung cancer was observed in unexposed workers (SMR = 1.2). Among current smokers, the risk of lung cancer increased with the level of exposure to dust. The mortality from all cancer, stomach, liver, and oesophageal cancer was not raised among underground miners. An excess risk of lung cancer among underground mine workers which could not be attributed solely to tobacco use was associated with working conditions underground, especially with exposure to dust and radon gas and with the presence of non-malignant respiratory disease. Because of an overlap of exposures to dust and radon daughters, the independent effects of these factors could not be evaluated.
Developmentally inspired drug prevention: middle school outcomes in a school-based randomized prevention trial.
Prior investigations have linked behavioral competencies in primary school to a reduced risk of later drug involvement. In this randomized prevention trial, we sought to quantify the potential early impact of two developmentally inspired universal preventive interventions on the risk of early-onset alcohol, inhalant, tobacco, and illegal drug use through early adolescence. Participants were recruited as they entered first grade within nine schools of an urban public school system. Approximately, 80% of the sample was followed from first to eighth grades. Two theory-based preventive interventions, (1) a family-school partnership (FSP) intervention and (2) a classroom-centered (CC) intervention, were developed to improve early risk behaviors in primary school. Generalized estimating equations (GEE) multivariate response profile regressions were used to estimate the relative profiles of drug involvement for intervention youths versus controls, i.e. youth in the standard educational setting. Relative to control youths, intervention youths were less likely to use tobacco, with modestly stronger evidence of protection associated with the CC intervention (RR=0.5; P=0.008) as compared to protection associated with the FSP intervention (RR=0.6; P=0.042). Intervention status was not associated with risk of starting alcohol, inhalants, or marijuana use, but assignment to the CC intervention was associated with reduced risk of starting to use other illegal drugs by early adolescence, i.e. heroin, crack, and cocaine powder (RR=0.32, P=0.042). This study adds new evidence on intervention-associated reduced risk of starting illegal drug use. In the context of 'gateway' models, the null evidence on marijuana is intriguing and merits attention in future investigations.
Correction of Short Nose Deformity Using a Septal Extension Graft Combined with a Derotation Graft
In patients having a short nose with a short septal length and/or severe columellar retraction, a septal extension graft is a good solution, as it allows the dome to move caudally and pushes down the columellar base. Fixing the medial crura of the alar cartilages to a septal extension graft leads to an uncomfortably rigid nasal tip and columella, and results in unnatural facial animation. Further, because of the relatively small and weak septal cartilage in the East Asian population, undercorrection of a short nose is not uncommon. To overcome these shortcomings, we have used the septal extension graft combined with a derotation graft. Among 113 patients who underwent the combined procedure, 82 patients had a short nose deformity alone; the remaining 31 patients had a short nose with columellar retraction. Thirty-two patients complained of nasal tip stiffness caused by a septal extension graft from previous operations. In addition to the septal extension graft, a derotation graft was used for bridging the gap between the alar cartilages and the septal extension graft for tip lengthening. Satisfactory results were obtained in 102 (90%) patients. Eleven (10%) patients required revision surgery. This combination method is a good surgical option for patients who have a short nose with small septal cartilages and do not have sufficient cartilage for tip lengthening by using a septal extension graft alone. It can also overcome the postoperative nasal tip rigidity of a septal extension graft.
Active learning for class imbalance problem
The class imbalance problem has been known to hinder the learning performance of classification algorithms. Various real-world classification tasks such as text categorization suffer from this phenomenon. We demonstrate that active learning is capable of solving the problem.
Textons, Contours and Regions: Cue Integration in Image Segmentation
This paper makes two contributions. It provides (1) an operational definition of textons, the putative elementary units of texture perception, and (2) an algorithm for partitioning the image into disjoint regions of coherent brightness and texture, where boundaries of regions are defined by peaks in contour orientation energy and differences in texton densities across the contour. Julesz introduced the term texton, analogous to a phoneme in speech recognition, but did not provide an operational definition for gray-level images. Here we re-invent textons as frequently co-occurring combinations of oriented linear filter outputs. These can be learned using a K-means approach. By mapping each pixel to its nearest texton, the image can be analyzed into texton channels, each of which is a point set where discrete techniques such as Voronoi diagrams become applicable. Local histograms of texton frequencies can be used with a χ2 test for significant differences to find texture boundaries. Natural images contain both textured and untextured regions, so we combine this cue with that of the presence of peaks of contour energy derived from outputs of odd- and even-symmetric oriented Gaussian derivative filters. Each of these cues has a domain of applicability, so to facilitate cue combination we introduce a gating operator based on a statistical test for isotropy of Delaunay neighbors. Having obtained a local measure of how likely two nearby pixels are to belong to the same region, we use the spectral graph theoretic framework of normalized cuts to find partitions of the image into regions of coherent texture and brightness. Experimental results on a wide range of images are shown.
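A compact sketch of the texton pipeline under simplifying assumptions: a small Gaussian-derivative filter bank stands in for the paper's oriented filter bank, K-means assigns texton labels, and a chi-square distance compares texton histograms of two windows (the cluster count and window split are arbitrary choices for illustration).

```python
import numpy as np
from scipy import ndimage
from sklearn.cluster import KMeans

def filter_bank_responses(img, sigmas=(1, 2, 4)):
    """Stack a few derivative-of-Gaussian responses per pixel (a stand-in for the paper's filter bank)."""
    feats = []
    for s in sigmas:
        feats.append(ndimage.gaussian_filter(img, s, order=(0, 1)))   # horizontal derivative
        feats.append(ndimage.gaussian_filter(img, s, order=(1, 0)))   # vertical derivative
        feats.append(ndimage.gaussian_laplace(img, s))                # centre-surround response
    return np.stack(feats, axis=-1).reshape(-1, len(feats))

img = np.random.rand(64, 64)                                   # placeholder image
resp = filter_bank_responses(img)
textons = KMeans(n_clusters=16, n_init=10).fit_predict(resp)   # texton label per pixel

def chi2(h1, h2):
    """Chi-square distance between two texton histograms, used to detect texture boundaries."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + 1e-9))

labels = textons.reshape(64, 64)
h_left = np.bincount(labels[:, :32].ravel(), minlength=16) / (64 * 32)
h_right = np.bincount(labels[:, 32:].ravel(), minlength=16) / (64 * 32)
print(chi2(h_left, h_right))
```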
A MINIATURIZED PRINTED DIPOLE ANTENNA WITH V-SHAPED GROUND FOR 2.45 GHZ RFID
In this paper, a miniaturized printed dipole antenna with a V-shaped ground is proposed for radio frequency identification (RFID) readers operating at the frequency of 2.45 GHz. The principles of the microstrip balun and the printed dipole are analyzed and design considerations are formulated. Through extending and shaping the ground to reduce the coupling between the balun and the dipole, the antenna’s impedance bandwidth is broadened and the antenna’s radiation pattern is improved. 3D finite-difference time-domain (FDTD) electromagnetic simulations are carried out to evaluate the antenna’s performance. The effects of the extending angle and the position of the ground are investigated to obtain the optimized parameters. The antenna was fabricated and measured in a microwave anechoic chamber. The results show that the proposed antenna achieves a broader impedance bandwidth, a higher forward radiation gain and stronger suppression of backward radiation compared with the one without such a ground.
Detecting group review spam
It is well-known that many online reviews are not written by genuine users of products, but by spammers who write fake reviews to promote or demote some target products. Although some existing works have been done to detect fake reviews and individual spammers, to our knowledge, no work has been done on detecting spammer groups. This paper focuses on this task and proposes an effective technique to detect such groups.
Does the human capital of board directors add value to firms? Evidence from an Asian market
This paper investigates the effect of the human capital of directors on the financial performance of Vietnamese-listed companies. The dynamic system generalised method of moments (system GMM) estimator is used to examine a panel data-set consisting of 315 firm-year observations over a four-year period from 2008 to 2011. In line with resource dependence theory, we report that the human capital of directors appears to have a positive influence upon a firm’s financial performance. To the best of our knowledge, this study is the first work on the topic of human capital of directors and firm performance for publicly listed companies in Vietnam. This study, by applying a dynamic longitudinal modelling approach, extends the nascent literature on board human capital as well as more generally to the corporate governance literature by providing robust empirical evidence showing that the general human capital of board directors may add value to firms. Our finding, therefore, supports the efforts made by Vietnamese policy-makers in setting up qualification standards as well as skills diversity for the boards.
Research and Practices on Open Innovation: Perspectives on SMEs
Innovation has become a recognized driver of a country's economic prosperity through the sustained growth of its entrepreneurships. Moreover, the recently coined term open innovation is increasingly taking a lead in enterprise management in terms of sustained profitability. The foci of researchers and practitioners revolve around innovation methods, processes, and strategies. This chapter seeks to find out, through a longitudinal study, which open innovation research and practices are being carried out around the development of small and medium enterprises (SMEs). In this context, the study investigates research being carried out by leading researchers and research houses across the globe, and at the same time it investigates open innovation practices being carried out for the development of SMEs. Before its conclusion, the chapter attempts to develop a framework for future research practices.
Multimodal intervention to improve osteoporosis care in home health settings: results from a cluster randomized trial
We conducted a cluster randomized trial testing the effectiveness of an intervention to increase the use of osteoporosis medications in high-risk patients receiving home health care. The trial did not find a significant difference in medication use in the intervention arm. This study aims to test an evidence implementation intervention to improve the quality of care in the home health care setting for patients at high risk for fractures. We conducted a cluster randomized trial of a multimodal intervention targeted at home care for high-risk patients (prior fracture or physician-diagnosed osteoporosis) receiving care in a statewide home health agency in Alabama. Offices throughout the state were randomized to receive the intervention or to usual care. The primary outcome was the proportion of high-risk home health patients treated with osteoporosis medications. A t test of difference in proportions was conducted between intervention and control arms and constituted the primary analysis. Secondary analyses included logistic regression estimating the effect of individual patients being treated in an intervention arm office on the likelihood of a patient receiving osteoporosis medications. A follow-on analysis examined the effect of an automated alert built into the electronic medical record that prompted the home health care nurses to deploy the intervention for high-risk patients using a pre–post design. There were 11 offices randomized to each of the treatment and control arms; these offices treated 337 and 330 eligible patients, respectively. Among the offices in the intervention arm, the average proportion of eligible patients receiving osteoporosis medications post-intervention was 19.1 %, compared with 15.7 % in the usual care arm (difference in proportions 3.4 %, 95 % CI, −2.6 to 9.5 %). The overall rates of osteoporosis medication use increased from 14.8 % prior to activation of the automated alert to 17.6 % afterward, a nonsignificant difference. The home health intervention did not result in a significant improvement in use of osteoporosis medications in high-risk patients.
AAS 04-297 UNIFORMLY DISTRIBUTED FLOWER CONSTELLATION DESIGN STUDY FOR GLOBAL NAVIGATION SYSTEM
The recently developed Flower Constellations have the satellites following the same relative trajectory with respect to an Earth-rotating reference frame. This allows the designer to obtain an approximately uniform relative trajectory over the region of interest (which can be global) and to distribute the satellites uniformly in time along the relative trajectory. Based on this idea, a new constellation of 30 satellites is here proposed, designed, and compared with the existing GPS and GLONASS constellations and the proposed European Galileo constellation. The proposed solution presents better characteristics in terms of attitude and position errors.
Spectrum management in cognitive radio ad hoc networks
The problem of spectrum scarcity and inefficiency in spectrum usage will be addressed by the newly emerging cognitive radio paradigm that allows radios to opportunistically transmit in the vacant portions of the spectrum already assigned to licensed users. For this, the ability for spectrum sensing, spectrum sharing, choosing the best spectrum among the available options, and dynamically adapting transmission parameters based on the activity of the licensed spectrum owners must be integrated within cognitive radio users. Specifically in cognitive radio ad hoc networks, distributed multihop architecture, node mobility, and spatio-temporal variance in spectrum availability are some of the key distinguishing factors. In this article the important features of CRAHNs are presented, along with the design approaches and research challenges that must be addressed. Spectrum management in CRAHNs comprises spectrum sensing, sharing, decision, and mobility. In this article each of these functions is described in detail from the viewpoint of multihop, infrastructure-less networks requiring cooperation among users.
The nature and determinants of neuropsychological functioning in late-life depression.
CONTEXT Cognitive impairment in late-life depression (LLD) is highly prevalent, disabling, poorly understood, and likely related to long-term outcome. OBJECTIVES To determine the characteristics and determinants of neuropsychological functioning in LLD. DESIGN Cross-sectional study of groups of LLD patients and control subjects. SETTING Outpatient, university-based depression research clinic. PARTICIPANTS One hundred patients without dementia 60 years and older who met DSM-IV criteria for current episode of unipolar major depression (nonpsychotic) and 40 nondepressed, age- and education-equated control subjects. MAIN OUTCOME MEASURES A comprehensive neuropsychological battery. RESULTS Relative to control subjects, LLD patients performed poorer in all cognitive domains. More than half exhibited significant impairment (performance below the 10th percentile of the control group). Information processing speed and visuospatial and executive abilities were the most broadly and frequently impaired. The neuropsychological impairments were mediated almost entirely by slowed information processing (β = .45-.80). Education (β = .32) and ventricular atrophy (β = .28) made additional modest contributions to variance in measures of language ability. Medical and vascular disease burden, apolipoprotein E genotype, and serum anticholinergicity did not contribute to variance in any cognitive domain. CONCLUSIONS Late-life depression is characterized by slowed information processing, which affects all realms of cognition. This supports the concept that frontostriatal dysfunction plays a key role in LLD. The putative role of some risk factors was validated (eg, advanced age, low education, depression severity), whereas others were not (eg, medical burden, age at onset of first depressive episode). Further studies of neuropsychological functioning in remitted LLD patients are needed to parse episode-related and persistent factors and to relate them to underlying neural dysfunction.
Do nondiabetic patients undergoing coronary artery bypass grafting surgery require intraoperative management of hyperglycemia?
OBJECTIVE To study the effect of blood glucose (BG) control with insulin in preventing hyperglycemia during and after coronary artery bypass grafting (CABG) surgery in nondiabetic patients. METHODS In a randomized clinical trial at a teaching heart hospital, 120 nondiabetic patients who underwent elective CABG surgery were enrolled to study whether control of hyperglycemia is needed in such surgery. The patients were randomly divided into study (n=60) and control (n=60) groups. In the study group, insulin was infused to maintain the BG level between 110 mg/dL and 126 mg/dL (a modified insulin therapy protocol); in the control group, the patients were exempted from this protocol. Insulin therapy was limited to the intraoperative period. BG levels during surgery and up to 48 hours after surgery and early postoperative complications were compared between the study and control groups. RESULTS One hundred seventeen patients completed the study (59 patients in the study group and 58 in the control group). Peak intraoperative BG level in the study group was 126.4±17.9 mg/dL and in the control group was 137.3±17.6 mg/dL (p=0.024). The frequencies of severe hyperglycemia (BG≥180 mg/dL) were 6 of 59 (10.1%) in the study group and 19 of 58 (32.7%) in the control group during operation (p=0.002). Peak postoperative BG level in the study group was 194.8±41.2 mg/dL and was 199.8±43.2 mg/dL in the control group (p=0.571). There was no hypoglycemic event in either group. The frequencies of early postoperative complications were 10 of 59 (16.9%) in the study group and 19 of 58 (32.7%) in the control group (p=0.047). CONCLUSIONS Hyperglycemia (BG≥126 mg/dL) is common in nondiabetic patients undergoing CABG surgery. A modified insulin therapy to maintain the BG level between 110 mg/dL and 126 mg/dL may be acceptable for avoiding hypoglycemia and keeping intraoperative BG levels in an acceptable range in nondiabetics.
Community challenges in biomedical text mining over 10 years: success, failure and the future
One effective way to improve the state of the art is through competitions. Following the success of the Critical Assessment of protein Structure Prediction (CASP) in bioinformatics research, a number of challenge evaluations have been organized by the text-mining research community to assess and advance natural language processing (NLP) research for biomedicine. In this article, we review the different community challenge evaluations held from 2002 to 2014 and their respective tasks. Furthermore, we examine these challenge tasks through their targeted problems in NLP research and biomedical applications, respectively. Next, we describe the general workflow of organizing a Biomedical NLP (BioNLP) challenge and involved stakeholders (task organizers, task data producers, task participants and end users). Finally, we summarize the impact and contributions by taking into account different BioNLP challenges as a whole, followed by a discussion of their limitations and difficulties. We conclude with future trends in BioNLP challenge evaluations.
Optimal combination forecasts for hierarchical time series
In many applications, there are multiple time series that are hierarchically organized and can be aggregated at several different levels in groups based on products, geography or some other features. We call these "hierarchical time series". They are commonly forecast using either a "bottom-up" or a "top-down" method. In this paper we propose a new approach to hierarchical forecasting which provides optimal forecasts that are better than forecasts produced by either a top-down or a bottom-up approach. Our method is based on independently forecasting all series at all levels of the hierarchy and then using a regression model to optimally combine and reconcile these forecasts. The resulting revised forecasts add up appropriately across the hierarchy, are unbiased and have minimum variance amongst all combination forecasts under some simple assumptions. We show in a simulation study that our method performs well compared to the top-down approach and the bottom-up method. We demonstrate our proposed method by forecasting Australian tourism demand where the data are disaggregated by purpose of travel and geographical region.
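A minimal numerical sketch of the combination step for a two-level hierarchy, assuming the identity-covariance (ordinary least squares) special case of the regression-based reconciliation; the forecast numbers are made up.

```python
import numpy as np

# Two-level hierarchy: total = A + B.  S maps bottom-level series to all series.
S = np.array([[1, 1],    # total
              [1, 0],    # series A
              [0, 1]])   # series B

# Independent base forecasts for every series in the hierarchy (made-up values);
# note they do not add up: 100 != 58 + 45.
y_hat = np.array([100.0, 58.0, 45.0])

# OLS combination/reconciliation: project the base forecasts onto the space of
# forecasts that aggregate consistently, y_tilde = S (S'S)^{-1} S' y_hat.
P = S @ np.linalg.inv(S.T @ S) @ S.T
y_tilde = P @ y_hat
print(y_tilde)                                 # reconciled forecasts, now internally consistent
print(y_tilde[1] + y_tilde[2], y_tilde[0])     # bottom-level forecasts add up to the total
```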
Effects of escitalopram on plasma concentrations of aripiprazole and its active metabolite, dehydroaripiprazole, in Japanese patients.
INTRODUCTION The effects of escitalopram (10 mg/d) coadministration on plasma concentrations of aripiprazole and its active metabolite, dehydroaripiprazole, were studied in 13 Japanese psychiatric patients and compared with those of paroxetine (10 mg/d) coadministration. METHODS The patients had received 6-24 mg/d of aripiprazole for at least 2 weeks. Patients were randomly allocated to one of 2 treatment sequences: paroxetine-escitalopram (n=6) or escitalopram-paroxetine (n=7). Each sequence consisted of two 2-week phases. Plasma concentrations of aripiprazole and dehydroaripiprazole were measured using liquid chromatography with mass spectrometric detection. RESULTS Plasma concentrations of aripiprazole and the sum of aripiprazole and dehydroaripiprazole during paroxetine coadministration were 1.7-fold (95% confidence intervals [CI], 1.3-2.1, p<0.001) and 1.5-fold (95% CI 1.2-1.9, p<0.01) higher than those values before the coadministration. These values were not influenced by escitalopram coadministration (1.3-fold, 95% CI 1.1-1.5 and 1.3-fold, 95% CI 1.0-1.5). Plasma dehydroaripiprazole concentrations remained constant during the study. CONCLUSION The present study suggests that low doses of escitalopram can be safely coadministered with aripiprazole, at least from a pharmacokinetic point of view.
The Romance of Learning from Disagreement. The Effect of Cohesiveness and Disagreement on Knowledge Sharing Behavior and Individual Performance Within Teams
PURPOSE: The purpose of this study was to explore the effects of disagreement and cohesiveness on knowledge sharing in teams, and on the performance of individual team members. DESIGN/METHODOLOGY/APPROACH: Data were obtained from a survey among 1,354 employees working in 126 teams in 17 organizations. FINDINGS: The results show that cohesiveness has a positive effect on the exchange of advice between team members and on openness for sharing opinions, whereas disagreement has a negative effect on openness for sharing opinions. Furthermore, the exchange of advice in a team has a positive effect on the performance of individual team members and acts as a mediator between cohesiveness and individual performance. IMPLICATIONS: Managers who want to stimulate knowledge sharing processes and performance within work teams may be advised to take measures to prevent disagreement between team members and to enhance team cohesiveness. ORIGINALITY/VALUE: Although some gurus in organizational learning claim that disagreement has a positive effect on group processes such as knowledge sharing and team learning, this study does not support this claim.
Convolutional Nonlinear Neighbourhood Components Analysis for Time Series Classification
During the last decade, tremendous effort has been devoted to research on time series classification. Indeed, many previous works suggested that simple nearest-neighbor classification is effective and difficult to beat. However, we usually need to determine the distance metric (e.g., Euclidean distance and Dynamic Time Warping) for different domains, and current evidence shows that there is no distance metric that is best for all time series data. Thus, the choice of distance metric has to be made empirically, which is time-consuming and not always effective. To automatically determine the distance metric, in this paper, we investigate distance metric learning and propose a novel Convolutional Nonlinear Neighbourhood Components Analysis model for time series classification. Specifically, our model performs supervised learning to project original time series into a transformed space. When classifying, a nearest neighbor classifier is then applied in this transformed space. Finally, comprehensive experimental results demonstrate that our model can improve the classification accuracy to some extent, which indicates that it can learn a good distance metric.
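The sketch below shows a neighbourhood-components-analysis style objective over a learned projection; to keep it short, a linear map replaces the paper's convolutional nonlinear feature extractor, and the data, dimensions and training loop are toy assumptions.

```python
import torch
import torch.nn as nn

class Projection(nn.Module):
    """Learned map into the transformed space (a linear stand-in for the convolutional extractor)."""
    def __init__(self, in_dim=128, out_dim=16):
        super().__init__()
        self.f = nn.Linear(in_dim, out_dim)
    def forward(self, x):
        return self.f(x)

def nca_loss(z, labels):
    """Maximise the probability that each series picks a same-class neighbour under a
    softmax over negative squared distances in the transformed space."""
    d2 = torch.cdist(z, z) ** 2
    logits = -d2 + torch.eye(len(z)).mul(-1e9)        # a point may not choose itself
    p = torch.softmax(logits, dim=1)
    same = (labels[:, None] == labels[None, :]).float()
    return -(p * same).sum(dim=1).clamp_min(1e-9).log().mean()

x = torch.randn(64, 128)                              # 64 toy series of 128 time steps
y = torch.randint(0, 3, (64,))
model = Projection()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(100):
    loss = nca_loss(model(x), y)
    opt.zero_grad(); loss.backward(); opt.step()
```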
Plasticity of Cancer Cell Invasion-Mechanisms and Implications for Therapy.
Cancer cell migration is a plastic and adaptive process integrating cytoskeletal dynamics, cell-extracellular matrix and cell-cell adhesion, as well as tissue remodeling. In response to molecular and physical microenvironmental cues during metastatic dissemination, cancer cells exploit a versatile repertoire of invasion and dissemination strategies, including collective and single-cell migration programs. This diversity generates molecular and physical heterogeneity of migration mechanisms and metastatic routes, and provides a basis for adaptation in response to microenvironmental and therapeutic challenge. We here summarize how cytoskeletal dynamics, protease systems, cell-matrix and cell-cell adhesion pathways control cancer cell invasion programs, and how reciprocal interaction of tumor cells with the microenvironment contributes to plasticity of invasion and dissemination strategies. We discuss the potential and future implications of predicted "antimigration" therapies that target cytoskeletal dynamics, adhesion, and protease systems to interfere with metastatic dissemination, and the options for integrating antimigration therapy into the spectrum of targeted molecular therapies.
Japanese Sentiment Classification using a Tree-Structured Long Short-Term Memory with Attention
Previous approaches to training syntax-based sentiment classification models required phrase-level annotated corpora, which are not readily available in many languages other than English. Thus, we propose the use of a tree-structured Long Short-Term Memory with an attention mechanism that pays attention to each subtree of the parse tree. Experimental results indicate that our model achieves state-of-the-art performance in a Japanese sentiment classification task.
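A hedged sketch of the attention step only: hidden states of the parse-tree subtrees are assumed to be given (the tree-structured LSTM itself is not implemented here), and the model scores each subtree, forms an attention-weighted summary, and classifies it.

```python
import torch
import torch.nn as nn

class SubtreeAttentionClassifier(nn.Module):
    """Attention over subtree representations followed by a sentiment classifier."""
    def __init__(self, hid=128, n_classes=2):
        super().__init__()
        self.score = nn.Linear(hid, 1)
        self.out = nn.Linear(hid, n_classes)
    def forward(self, node_states):                   # node_states: (num_subtrees, hid)
        w = torch.softmax(self.score(node_states).squeeze(-1), dim=0)   # attention over subtrees
        summary = (w.unsqueeze(-1) * node_states).sum(dim=0)            # weighted summary vector
        return self.out(summary)

states = torch.randn(17, 128)                         # e.g. 17 subtree states from one parsed sentence
logits = SubtreeAttentionClassifier()(states)
print(torch.softmax(logits, dim=-1))                  # predicted sentiment distribution
```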
Subordinating Timor: Central authority and the origins of communal identities in East Timor
In 2006, a mere seven years after the overwhelming vote in opposition to Indonesia's final offer of 'broad autonomy' and only four years after the restoration of independence, communal violence erupted in Dili, the capital of East Timor. This violence was framed in terms of tensions between westerners, known as kaladi, and easterners, known as firaku. This essay seeks to answer two basic puzzles. First, what are the origins of these communal labels? Second, why did these terms resonate so profoundly within East Timorese society so soon after independence? Tracing the history of these terms, this essay argues that across more than three centuries these communal labels have emerged during crucial struggles to exert central authority. In doing so, this essay highlights the relationship between regional identities and the social ecology of food.
Evaluating topic models for digital libraries
Topic models could have a huge impact on improving the ways users find and discover content in digital libraries and search interfaces, through their ability to automatically learn and apply subject tags to every item in a collection and to dynamically create virtual collections on the fly. However, much remains to be done to tap this potential and to empirically evaluate the true value of a given topic model to humans. In this work, we sketch out some sub-tasks that we suggest pave the way towards this goal, and present methods for assessing the coherence and interpretability of topics learned by topic models. Our large-scale user study includes over 70 human subjects evaluating and scoring almost 500 topics learned from collections from a wide range of genres and domains. We show how a scoring model -- based on the pointwise mutual information of word pairs, using Wikipedia, Google and MEDLINE as external data sources -- performs well at predicting human scores. This automated scoring of topics is an important first step towards integrating topic modeling into digital libraries.
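A minimal sketch of the kind of PMI-based topic scoring the abstract describes, assuming hypothetical count tables (`doc_freq`, `co_doc_freq`) precomputed from an external corpus such as Wikipedia; this is an illustration, not the authors' exact scoring model.

```python
import itertools, math

def topic_pmi_score(top_words, doc_freq, co_doc_freq, n_docs, eps=1e-12):
    """Average pairwise PMI of a topic's top words, computed from document
    co-occurrence counts in an external corpus.  doc_freq[w] and
    co_doc_freq[(w1, w2)] are assumed, hypothetical count tables."""
    scores = []
    for w1, w2 in itertools.combinations(top_words, 2):
        p1 = doc_freq.get(w1, 0) / n_docs
        p2 = doc_freq.get(w2, 0) / n_docs
        p12 = co_doc_freq.get(tuple(sorted((w1, w2))), 0) / n_docs
        scores.append(math.log((p12 + eps) / (p1 * p2 + eps)))
    return sum(scores) / len(scores)

# Toy usage with made-up counts for a small "space" topic.
df = {"space": 120, "nasa": 80, "orbit": 60}
cdf = {("nasa", "space"): 50, ("orbit", "space"): 40, ("nasa", "orbit"): 30}
print(topic_pmi_score(["space", "nasa", "orbit"], df, cdf, n_docs=1000))
```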
Contemporary Cryptography
Probabilistic movement modeling for intention inference in human-robot interaction
Intention inference can be an essential step toward efficient human-robot interaction. For this purpose, we propose the Intention-Driven Dynamics Model (IDDM) to probabilistically model the generative process of movements that are directed by the intention. The IDDM allows the intention to be inferred from observed movements using Bayes' theorem. The IDDM simultaneously finds a latent state representation of noisy and high-dimensional observations, and models the intention-driven dynamics in the latent states. As most robotics applications are subject to real-time constraints, we develop an efficient online algorithm that allows for real-time intention inference. Two human-robot interaction scenarios, i.e., target prediction for robot table tennis and action recognition for interactive humanoid robots, are used to evaluate the performance of our inference algorithm. In both intention inference tasks, the proposed algorithm achieves substantial improvements over support vector machines and Gaussian processes.
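A minimal sketch of the Bayes-rule step the abstract refers to, assuming the per-intention movement log-likelihoods (which in the paper come from the intention-driven dynamics model) are already available as inputs; the names here are hypothetical.

```python
import numpy as np

def infer_intention(log_lik_per_intention, prior):
    """Posterior over discrete intentions g given an observed movement:
    p(g | obs) is proportional to p(obs | g) * p(g).  The movement
    likelihoods would come from the latent dynamics model; here they are
    simply assumed inputs."""
    log_post = np.asarray(log_lik_per_intention) + np.log(prior)
    log_post -= log_post.max()                 # shift for numerical stability
    post = np.exp(log_post)
    return post / post.sum()

# Toy usage: three candidate targets with a uniform prior.
print(infer_intention([-10.2, -9.1, -12.5], prior=np.ones(3) / 3))
```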
The validity of theology as an academic discipline
An analysis of relevant aspects of the history of science shows that theology's loss of credibility in an increasingly science-oriented age can be attributed to unresolved disputes from the past over metaphysical, epistemological and methodological issues. In Chapters 1 and 2, an attempt is made to show that the basic disagreements between science and theology can be traced to the ongoing quest for the principles of knowing shared by all disciplines. In Chapters 3 to 6, an attempt is made to identify these principles. Firstly, the processes and principles by which science acquires its knowledge and deems it to be objective are examined. Secondly it is argued that these same processes and principles are not the special property of science but are used by the humanities as well. Thirdly, it is contended that these principles are "empirico-critical." They enable us to bridge the gap between thought and reality and gain access to knowledge of the external world. A more comprehensive model for knowing is proposed. Chapters 7 to 9 examine whether it is possible to apply empirico-critical principles to theology. From a study of relevant aspects of Austin Farrer's thought, it is argued (i) that the processes of knowing in theology are the same as those in the sciences and the humanities, (ii) that, though theology's procedures and techniques are necessarily different from, say, the sciences because of its subject matter, these are capable of adhering to the same principles of objectivity, and (iii) that, in principle, theological decision-making is possible, even in the most controversial debates. The conclusion is that since the same processes and principles of trustworthy knowing in the sciences and humanities are fully applicable to theology, theology's viability as a source of trustworthy knowing should no longer be held in doubt.
Path-guided artificial potential fields with stochastic reachable sets for motion planning in highly dynamic environments
Highly dynamic environments pose a particular challenge for motion planning due to the need for constant evaluation or validation of plans. However, due to the wide range of applications, an algorithm to safely plan in the presence of moving obstacles is required. In this paper, we propose a novel technique that provides computationally efficient planning solutions in environments with static obstacles and several dynamic obstacles with stochastic motions. Path-Guided APF-SR works by first applying a sampling-based technique to identify a valid, collision-free path in the presence of static obstacles. Then, an artificial potential field planning method is used to safely navigate through the moving obstacles using the path as an attractive intermediate goal bias. In order to improve the safety of the artificial potential field, repulsive potential fields around moving obstacles are calculated with stochastic reachable sets, a method previously shown to significantly improve planning success in highly dynamic environments. We show that Path-Guided APF-SR outperforms other methods that have high planning success in environments with 300 stochastically moving obstacles. Furthermore, planning is achievable in environments in which previously developed methods have failed.
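As a hedged illustration of the artificial-potential-field step described above (not the authors' implementation), the sketch below takes one gradient step toward the next waypoint of a precomputed collision-free path while being repelled by nearby obstacles; the stochastic-reachable-set weighting used in the paper is not modelled here, and all gains are assumed values.

```python
import numpy as np

def apf_step(q, waypoint, obstacles, k_att=1.0, k_rep=100.0, d0=2.0, step=0.05):
    """One step of an artificial potential field: an attractive force pulls
    toward the path waypoint (the intermediate goal bias), and repulsive
    forces push away from obstacles closer than the influence distance d0."""
    force = -k_att * (q - waypoint)                              # attractive term
    for obs in obstacles:
        diff = q - obs
        d = np.linalg.norm(diff)
        if 0 < d < d0:
            force += k_rep * (1.0 / d - 1.0 / d0) / d**3 * diff  # repulsive term
    return q + step * force

# Toy usage: robot at the origin, waypoint at (5, 0), one obstacle in between.
q = np.zeros(2)
for _ in range(200):
    q = apf_step(q, waypoint=np.array([5.0, 0.0]), obstacles=[np.array([2.5, 0.2])])
print(q)
```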
Testing for exceptional bulls and bears: a non-parametric perspective
This paper investigates exceptional phases of stock market cycles. Defined in Pagan and Sossounov (2003) as unusual, they are detected as outliers in the historical distribution. Moreover, this study complements the growing literature on stock market bulls and bears in several respects. First, it extends the description of financial cycles by going beyond the duration feature alone. Second, a new strategy to test for single and multiple outliers is presented. Based on this procedure, the exceptional bulls and bears that have occurred since 1973 are detected. A complementary analysis deals with the specific cross-country patterns of the current sub-prime crisis. Our results are mixed, in the sense that they do not support the idea that the ongoing bear market is exceptional for all the analyzed countries. Moreover, the results indicate that the stock market indices are still far from the thresholds beyond which the current bear phase would become exceptional worldwide.
Chemical named entities recognition: a review on approaches and applications
The rapid increase in the flow rate of published digital information in all disciplines has resulted in a pressing need for techniques that can simplify the use of this information. The chemistry literature is very rich with information about chemical entities. Extracting molecules and their related properties and activities from the scientific literature to "text mine" these extracted data and determine contextual relationships helps research scientists, particularly those in drug development. One of the most important challenges in chemical text mining is the recognition of chemical entities mentioned in the texts. In this review, the authors briefly introduce the fundamental concepts of chemical literature mining, the textual contents of chemical documents, and the methods of naming chemicals in documents. We sketch out dictionary-based, rule-based and machine learning, as well as hybrid chemical named entity recognition approaches with their applied solutions. We end with an outlook on the pros and cons of these approaches and the types of chemical entities extracted.
A NEW MODEL FOR CAVITATION INDUCED PRIMARY BREAK-UP OF DIESEL SPRAYS
In the case of high-pressure diesel injection, the flow conditions inside the injection holes have an important influence on the development of the spray. The existence of cavitation structures is known to contribute to the instantaneous break-up of the liquid when it leaves the nozzle. Today the majority of CFD codes use an Eulerian/Lagrangian description in order to calculate the temporal and spatial distribution of the continuous gas phase and the dispersed liquid. Because of the Lagrangian way of tracking the liquid, the spray calculation starts with big spherical fuel droplets that are subject to secondary, aerodynamically induced break-up. The origin of these drops (primary break-up) is usually not modelled but replaced by assumptions. Today it is well known that this method of treating the primary break-up is not sufficient and that the primary break-up in the near-nozzle region depends mainly on the flow conditions inside the injection holes. In this paper a new model for cavitation- and turbulence-induced primary break-up is presented, which is able to map the influence of the cavitating nozzle flow on spray break-up. Different locations and sizes of both vapour and liquid zones inside the injection holes lead to different spray structures and cone angles near the nozzle. The model includes cavitation bubble dynamics. It describes the transition from the cavitating flow inside the injection hole to the dense spray near the nozzle and provides all necessary starting conditions for the spray simulation, such as spray cone angle, drop sizes, velocities, etc. The Kelvin-Helmholtz model is used to calculate the secondary break-up. The model has been implemented in the 3-D CFD code KIVA-3V and a first validation has been performed. Introduction: In direct-injection diesel engines the fuel atomization process strongly affects combustion and exhaust emissions [1]. Fig. 1 shows details of the disintegration process, which is divided into the primary and the secondary break-up. The primary break-up is the first disintegration of the coherent liquid into big droplets and ligaments near the nozzle. It strongly depends on the flow conditions inside the injection holes, which give the starting conditions for the spray break-up. In the case of high-pressure injection the presence of cavitation makes the disintegration begin already inside the holes: because of the strong acceleration of the fuel at the inlet of the holes, the static pressure decreases considerably. The curvature of the streamlines superimposes an additional radial pressure gradient because of centrifugal forces [2]. At the inlet edge the pressure falls to the vapour pressure, resulting in the formation of cavitation structures along the walls. These cavitation structures extend to the exit, leave the nozzle and collapse outside (Fig. 1). This results in instantaneous break-up and spray divergence. The turbulent and cavitating nozzle flow has been recognized as the most important parameter influencing the primary break-up in the case of high-pressure diesel injection [2,3,4]. The secondary break-up is the further break-up of droplets into smaller ones. Because of the relative velocity between droplet and gas, aerodynamic forces make surface waves grow, which are then split off and generate small droplets. Today quite sophisticated models for the description of the secondary break-up, such as Taylor Analogy, Kelvin-Helmholtz and Rayleigh-Taylor break-up, are implemented in modern CFD codes.
The Lagrangian way of describing the liquid implies the existence of drops. In order to describe the transition from the coherent liquid inside the nozzle to the primary droplets, sub-models have to be used. Usually uniform droplets whose diameter is equal to the nozzle diameter are assumed to leave the nozzle (Fig. 1, blob method). The starting conditions of these blobs have to be adjusted for each calculation in order to get reasonable results. The influence of the cavitating nozzle flow on the drop size distribution, the spray angle, etc. cannot be mapped satisfactorily. Arcoumanis et al. [5] have developed a cavitation-induced atomization model which uses the total area at the exit of the injection hole occupied by cavitation bubbles to calculate the radius of an equivalent bubble having the same area as all bubbles together. The collapse time of this artificial bubble is used as the time scale for the atomization process. The collapse energy which contributes to the production of droplets in the primary break-up zone is not included. Huh and Gosman [6] have published a phenomenological model based on the assumption that cavitation and turbulence inside the nozzle holes can be attributed to turbulent fluctuations in the exit flow being the dominant source of perturbations to the free surface. The analysis reproduces measured spray angles tolerably well; however, the effects of cavitation are represented in a very crude fashion. Nishimura and Assanis [7] have presented a model for primary atomization based on cavitation bubble collapse energy. It tracks bubble dynamics inside the injector and transfers collapse energy to turbulent kinetic energy. The latter induces an additional break-up force that is balanced with aerodynamic and surface tension forces to determine the primary break-up time and the total mass of child droplets. The model gives good results but is not able to map the influence of flow asymmetries inside the holes on the 3-D primary spray. [Figure 1: Cavitation-induced primary break-up (high-pressure diesel injection) and the blob method.] In order to develop an improved primary break-up model for high-pressure diesel injection, detailed experimental and numerical investigations of the nozzle hole flow have been performed [8]. These investigations have shown that during the quasi-stationary injection phase (full needle lift) there is a stationary distribution of cavitation and liquid regions. Thus the flow can be divided into two zones (Fig. 2): zone 1 (liquid, high momentum) and zone 2 (mixture of cavitation bubbles and liquid ligaments, low momentum). The shape, extension and position of the zones are strongly dependent on the nozzle geometry. Fig. 2 shows the two-zone distribution for an axis-symmetric (geometry A) and a non-axis-symmetric single-hole nozzle (geometry B). Details about the exact geometries are published in [8]. In the case of geometry A the exit flow consists of an inner liquid flow surrounded by the cavitation zone. This leads to a symmetric primary spray. In the case of geometry B the cavitation zone is concentrated at the upper wall, resulting in a larger divergence of the upper part of the primary spray. Both geometries are used in this paper to study the behaviour of the new primary break-up model.
[Figure 2: Two-zone structure of the nozzle hole flow.] Primary break-up model: The purpose of the new primary break-up model is to describe the transition from the flow inside the nozzle to the first primary droplets and to provide all starting conditions for the calculation of secondary break-up and spray formation. The input data for the new model, such as the average flow velocity u of the liquid zone, the extension, shape and position of the liquid (zone 1) and cavitation (zone 2) regions, the mass flow of both zones, and the average void fraction α = (ρ - ρl)/(ρv - ρl) of zone 2, are extracted from a CFD calculation of the nozzle hole flow. The indices "v" and "l" indicate vapour and liquid, and ρ is the average density. Fig. 3 shows the structure of the primary break-up model. The model assumes that the primary break-up already begins inside the nozzle and that large cylindrical primary ligaments of length L and diameter D leave the nozzle hole. D is equal to the nozzle diameter and L is equal to the effective diameter of the liquid zone (deff = (4·areazone1/π)^(1/2)). Smaller areas of zone 1 result in shorter primary ligaments and represent an increased part of the primary break-up that has already taken place inside the nozzle. According to the two-zone nozzle hole flow, the primary ligaments also consist of two zones whose distribution is equal to the one at the nozzle exit. The flow velocity u in the axial direction is the average velocity of the liquid zone at the nozzle exit. Because the stochastic parcel method is used to simulate the spray break-up process, all cavitation bubbles inside a primary ligament have the same size, but from ligament to ligament the sizes differ. No detailed experimental data about bubble sizes are available in the literature, and a size distribution has to be assumed. For the investigations described in this paper, bubble sizes are sampled from a Gaussian distribution (mean radius r = 10 μm, std. dev. 10 μm), but only the part of the curve between a minimum radius of 2 μm and a maximum one of Lcav,max/2 is used (Lcav,max: see Fig. 4). From the known average void fraction and size of zone 2, the volume of pure vapour and the number of bubbles inside a primary ligament can be calculated. The calculation of bubble dynamics gives the total collapse energy Ecav and the bubble collapse time tcoll. [Figure 3: Structure of the new two-zone primary break-up model.] The break-up of the primary ligament into secondary droplets is assumed to occur at the time tcoll after leaving the nozzle. Furthermore it is assumed that the bubble collapse is homogeneously distributed in zone 2 and that it results in pressure waves which propagate to the interfaces. At the interface between the gas and zone 2, the collapse energy Ecav2 reinforces the break-up of zone 2. At the interface between zone 2 and zone 1, the energy Ecav1 is absorbed by zone 1 and contributes to the break-up of the liquid. The ratio Ecav2/Ecav1 is therefore calculated as the area of the interface between the gas and zone 2 divided by that between zone 1 and zone 2. The larger the cavitation zone, the bigger the part of the energy that is available for the break-up of zone 2. A concentration of a fixed volume of zone 2, e.g. at the upper wall (Fig. 2, geometry B), also make
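A small, hedged sketch of the bookkeeping the two-zone description implies, assuming zone areas and a void fraction extracted from a nozzle-flow CFD result; all input values and the bubble-radius bounds are illustrative assumptions, not the paper's calibrated model.

```python
import math, random

def primary_ligament_setup(area_zone1, area_zone2, void_fraction, rho_l, rho_v,
                           r_min=2e-6, r_mean=10e-6, r_std=10e-6, r_max=50e-6):
    """Starting conditions for one primary ligament of a two-zone description:
    effective diameter of the liquid core, average mixture density of zone 2,
    and a sampled bubble population whose total vapour volume matches the
    void fraction of zone 2 over the ligament length L = deff.
    r_max stands in for Lcav,max/2, which is not known here."""
    d_eff = math.sqrt(4.0 * area_zone1 / math.pi)        # effective liquid-core diameter
    rho_mix = void_fraction * rho_v + (1.0 - void_fraction) * rho_l
    vapour_volume = void_fraction * area_zone2 * d_eff   # zone-2 vapour volume over length d_eff
    bubbles, vol = [], 0.0
    while vol < vapour_volume:
        r = min(max(random.gauss(r_mean, r_std), r_min), r_max)  # truncated Gaussian radii
        bubbles.append(r)
        vol += 4.0 / 3.0 * math.pi * r**3
    return d_eff, rho_mix, bubbles

# Toy usage with assumed inputs (areas in m^2, densities in kg/m^3).
d_eff, rho_mix, bubbles = primary_ligament_setup(
    area_zone1=2e-8, area_zone2=1e-8, void_fraction=0.4, rho_l=830.0, rho_v=0.1)
print(d_eff, rho_mix, len(bubbles))
```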
Scannerless 3D imaging sensors
This contribution describes the research activity on the development of different smart pixel topologies aimed at three-dimensional (3D) vision applications exploiting the multiple-pulse indirect time-of-flight (TOF) and standard direct TOF techniques. The proposed approaches allow for the realization of scannerless laser ranging systems capable of fast collection of 3D data sets, as required in a growing number of applications such as automotive, security, surveillance and robotic guidance. A single-channel approach, as well as matrix-organized sensors, will be described, facing the demanding constraints of specific applications, such as high-dynamic-range capability and background immunity. Real-time range (3D) and intensity (2D) imaging of non-cooperative targets, also in the presence of strong background illumination, has been successfully performed in the 2 m to 9 m range with a precision better than 5% and an accuracy of about 1%.
Design of Android type Humanoid Robot Albert HUBO
To celebrate the 100th anniversary of the announcement of Albert Einstein's special relativity theory, the KAIST HUBO team and the Hanson Robotics team developed the android-type humanoid robot Albert HUBO, which may be the world's first expressive human face on a walking biped robot. Albert HUBO adopts the techniques of the HUBO design for its body and the techniques of Hanson Robotics for its head. Its height and weight are 137 cm and 57 kg. Albert HUBO has 66 DOFs (31 for head motions and 35 for body motions). The head uses 'Frubber' material for smooth artificial skin, and 28 servo motors for face movements plus 3 servo motors for neck movements are used to generate a full range of facial expressions such as laughter, sadness, anger, surprise, etc. The body is a modification of HUBO (KHR-3), introduced in 2004, to join with Albert HUBO's head, and 35 DC motors are embedded for imitating various human-like body motions.
2D shape morphing via automatic feature matching and hierarchical interpolation
The paper presents a new method to interpolate a pair of 2D shapes that are represented by piecewise linear curves. The method addresses two key problems in the 2D shape morphing process: feature correspondence and path interpolation. First, a robust feature metric is defined to measure the similarity of a pair of 2D shapes in terms of visual appearance, orientation and relative size. Based on the metric, an optimization problem is defined and solved to associate the features on the source shape with the corresponding ones on the target shape. Then, a two-level hierarchical approach is proposed to solve the interpolation trajectory problem for the corresponding features. The algorithm decomposes the input shapes into a pair of corresponding coarse polygons and several pairs of corresponding features. The corresponding coarse polygons are then interpolated in an as-rigid-as-possible, plausible way, while the corresponding features are interpolated using the intrinsic method. Thus interior distortions of the intermediate shapes can be avoided and the feature details on the input shapes are well preserved. Experimental results show that the method can generate smooth, natural and visually pleasing 2D shape morphing effects.
Holographic Pattern Synthesis With Modulated Substrate Integrated Waveguide Line-Source Leaky-Wave Antennas
We present the synthesis of one-dimensional (line-source) leaky-wave antennas (LWAs) in substrate integrated waveguide (SIW) technology with modulated geometry, demonstrating the capability to flexibly tailor the radiated field pattern, both in near- and far-field regimes. The synthesis technique is inspired by holographic concepts, which are related to the existence of modulated leaky waves. A systematic design algorithm to obtain the requested modulation of the SIW width and the distance between posts to synthesize the desired radiation pattern is described. Several design examples operating at 15 GHz are reported and experimentally validated, showing the power and versatility of the proposed holographic SIW technology.
Efficient, High-Quality, GPU-Based Visualization of Voxelized Surface Data with Fine and Complicated Structures
This paper proposes a GPU-based method that can visualize voxelized surface data with fine and complicated features, has high rendering quality at interactive frame rates, and provides low memory consumption. The surface data is compressed using run-length encoding (RLE) for each level of detail (LOD). Then, the loop for the rendering process is performed on the GPU for the position of the viewpoint at each time instant. The scene is raycasted in planes, where each plane is perpendicular to the horizontal plane in the world coordinate system and passes through the viewpoint. For each plane, one ray is cast to rasterize all RLE elements intersecting this plane, starting from the viewpoint and ranging up to the maximum view distance. This rasterization process projects each RLE element passing the occlusion test onto the screen at a LOD that decreases with the distance of the RLE element from the viewpoint. Finally, the smoothing of voxels in screen space and full screen anti-aliasing is performed. To provide lighting calculations without storing the normal vector inside the RLE data structure, our algorithm recovers the normal vectors from the rendered scene’s depth buffer. After the viewpoint changes, the same process is re-executed for the new viewpoint. Experiments using different scenes have shown that the proposed algorithm is faster than the equivalent CPU implementation and other related methods. Our experiments further prove that this method is memory efficient and achieves high quality results.
The chemistry of graphene oxide.
The chemistry of graphene oxide is discussed in this critical review. Particular emphasis is directed toward the synthesis of graphene oxide, as well as its structure. Graphene oxide as a substrate for a variety of chemical transformations, including its reduction to graphene-like materials, is also discussed. This review will be of value to synthetic chemists interested in this emerging field of materials science, as well as those investigating applications of graphene who would find a more thorough treatment of the chemistry of graphene oxide useful in understanding the scope and limitations of current approaches which utilize this material (91 references).
Warmth and Competence as Universal Dimensions of Social Perception: The Stereotype Content Model and the BIAS Map
The stereotype content model (SCM) defines two fundamental dimensions of social perception, warmth and competence, predicted respectively by perceived competition and status. Combinations of warmth and competence generate distinct emotions of admiration, contempt, envy, and pity. From these intergroup emotions and stereotypes, the behavior from intergroup affect and stereotypes (BIAS) map predicts distinct behaviors: active and passive, facilitative and harmful. After defining warmth/communion and competence/agency, the chapter integrates converging work documenting the centrality of these dimensions in interpersonal as well as intergroup perception. Structural origins of warmth and competence perceptions result from competitors judged as not warm, and allies judged as warm; high status confers competence and low status incompetence. Warmth and competence judgments support systematic patterns of cognitive, emotional, and behavioral reactions, including ambivalent prejudices. Past views of prejudice as a univalent antipathy have obscured the unique responses toward groups stereotyped as competent but not warm or warm but not competent. Finally, the chapter addresses unresolved issues and future research directions.
Boosting Variational Inference: an Optimization Perspective
Variational inference is a popular technique to approximate a possibly intractable Bayesian posterior with a more tractable one. Recently, boosting variational inference [20, 4] has been proposed as a new paradigm that approximates the posterior with a mixture of densities by greedily adding components to the mixture. However, as is the case with many other variational inference algorithms, its theoretical properties have not been studied. In the present work, we study the convergence properties of this approach from a modern optimization viewpoint by establishing connections to the classic Frank-Wolfe algorithm. Our analysis yields novel theoretical insights regarding the sufficient conditions for convergence, explicit rates, and algorithmic simplifications. Since much of the focus in previous works on variational inference has been on tractability, our work is an especially important and much-needed attempt to bridge the gap between probabilistic models and their corresponding theoretical properties.
End-to-end attention-based large vocabulary speech recognition
Many state-of-the-art Large Vocabulary Continuous Speech Recognition (LVCSR) Systems are hybrids of neural networks and Hidden Markov Models (HMMs). Recently, more direct end-to-end methods have been investigated, in which neural architectures were trained to model sequences of characters [1,2]. To our knowledge, all these approaches relied on Connectionist Temporal Classification [3] modules. We investigate an alternative method for sequence modelling based on an attention mechanism that allows a Recurrent Neural Network (RNN) to learn alignments between sequences of input frames and output labels. We show how this setup can be applied to LVCSR by integrating the decoding RNN with an n-gram language model and by speeding up its operation by constraining selections made by the attention mechanism and by reducing the source sequence lengths by pooling information over time. Recognition accuracies similar to other HMM-free RNN-based approaches are reported for the Wall Street Journal corpus.
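As a hedged illustration of the attention step described above (not the paper's exact scorer, which is learned), the sketch below computes dot-product alignment weights between a decoder state and the encoder frames and returns the resulting context vector.

```python
import numpy as np

def attention_context(decoder_state, encoder_states):
    """Content-based attention as a minimal sketch: score each encoder frame
    against the current decoder state, softmax the scores into alignment
    weights, and return the weighted context vector fed to the decoder RNN."""
    scores = encoder_states @ decoder_state          # dot-product scoring (an assumption)
    scores -= scores.max()                           # shift for numerical stability
    alpha = np.exp(scores) / np.exp(scores).sum()    # alignment weights over input frames
    context = alpha @ encoder_states                 # weighted sum of encoder states
    return context, alpha

# Toy usage: 50 encoder frames of dimension 8.
enc = np.random.default_rng(0).normal(size=(50, 8))
ctx, alpha = attention_context(enc[10], enc)
print(ctx.shape, alpha.sum())
```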
Randomized controlled trial of albendazole in new onset epilepsy and MRI confirmed solitary cerebral cysticercal lesion: Effect on long-term seizure outcome
No trials to date have focused on long-term seizure outcome in solitary cerebral cysticercal lesion (SCCL), which is believed to produce a relatively benign form of epilepsy. This is a prospective randomized controlled study to evaluate the effect of Albendazole on long-term seizure outcome in patients with MRI-confirmed solitary cerebral cysticercal lesion (SCCL). One hundred and twenty-three patients with new-onset seizures and SCCL on contrast MRI were randomized to treatment with albendazole and followed for up to five years with serial MRI and clinical evaluation. At final analysis 103 patients (M-54, F-49) with a mean age of 18.6+/-10.7 years and follow-up period more than 12 months were included. The mean follow-up duration was 31.4+/-14.8 months (12-64). At one month follow-up more patients receiving albendazole were seizure-free (62% versus 49% for controls). Subsequently there was no significant difference in overall seizure outcome between the two groups. There was no correlation between seizure semiology, albendazole therapy and long-term seizure outcome. Baseline MRI showed active lesions in all; 23% remained active at 12 months with no difference between the albendazole and control groups. Patients whose lesions resolved at 12 months showed better seizure outcome. Reduction in mean cyst area was greater in the albendazole group as compared to the controls and the difference at six months was significant (p<0.05). At three months follow-up perilesional edema also resolved faster in albendazole group (p<0.05). Thus, albendazole did not alter the long-term seizure outcome in patients with SCCL and epilepsy. However, albendazole hastened resolution of SCCL on MRI, but interestingly 23% of lesions were still active 12 months after treatment.
Word Attention for Sequence to Sequence Text Understanding
The attention mechanism has been a key component in Recurrent Neural Network (RNN) based sequence-to-sequence learning frameworks, which have been adopted in many text understanding tasks, such as neural machine translation and abstractive summarization. In these tasks, the attention mechanism models how important each part of the source sentence is for generating a target-side word. To compute such importance scores, the attention mechanism summarizes the source-side information in the encoder RNN hidden states (i.e., ht), and then builds a context vector for a target-side word upon a subsequence representation of the source sentence, since ht actually summarizes the information of the subsequence containing the first t words of the source sentence. In this paper, we show that an additional attention mechanism, called word attention, that builds itself upon word-level representations, significantly enhances the performance of sequence-to-sequence learning. Our word attention can enrich the source-side contextual representation by directly promoting clean word-level information in each step. Furthermore, we propose to use contextual gates to dynamically combine the subsequence-level and word-level contextual information. Experimental results on abstractive summarization and neural machine translation show that word attention significantly improves over strong baselines. In particular, we achieve the state-of-the-art result on the WMT'14 English-French translation task with 12M training data.
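A minimal sketch of the idea described above, under simplifying assumptions: one context vector from attention over encoder hidden states, one from word attention over raw word-level embeddings, combined with a sigmoid contextual gate. The dot-product scoring and the gate parameters `W_g`, `U_g` are assumptions, not the paper's exact parameterization.

```python
import numpy as np

def softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

def gated_context(decoder_state, hidden_states, word_embeddings, W_g, U_g):
    """Combine a subsequence-level context (attention over hidden states) and a
    word-level context (attention over word embeddings) with a contextual gate."""
    c_seq = softmax(hidden_states @ decoder_state) @ hidden_states       # subsequence-level context
    c_word = softmax(word_embeddings @ decoder_state) @ word_embeddings  # word-level context
    gate = 1.0 / (1.0 + np.exp(-(W_g @ c_seq + U_g @ c_word)))           # contextual (sigmoid) gate
    return gate * c_seq + (1.0 - gate) * c_word

# Toy usage: 20 source words, hidden and embedding dimension 8.
rng = np.random.default_rng(0)
h, e, s = rng.normal(size=(20, 8)), rng.normal(size=(20, 8)), rng.normal(size=8)
print(gated_context(s, h, e, rng.normal(size=(8, 8)), rng.normal(size=(8, 8))).shape)
```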
The End of an Architectural Era (It's Time for a Complete Rewrite)
In previous papers [SC05, SBC+07], some of us predicted the end of “one size fits all” as a commercial relational DBMS paradigm. These papers presented reasons and experimental evidence that showed that the major RDBMS vendors can be outperformed by 1-2 orders of magnitude by specialized engines in the data warehouse, stream processing, text, and scientific database markets. Assuming that specialized engines dominate these markets over time, the current relational DBMS code lines will be left with the business data processing (OLTP) market and hybrid markets where more than one kind of capability is required. In this paper we show that current RDBMSs can be beaten by nearly two orders of magnitude in the OLTP market as well. The experimental evidence comes from comparing a new OLTP prototype, H-Store, which we have built at M.I.T., to a popular RDBMS on the standard transactional benchmark, TPC-C. We conclude that the current RDBMS code lines, while attempting to be a “one size fits all” solution, in fact, excel at nothing. Hence, they are 25 year old legacy code lines that should be retired in favor of a collection of “from scratch” specialized engines. The DBMS vendors (and the research community) should start with a clean sheet of paper and design systems for tomorrow’s requirements, not continue to push code lines and architectures designed for yesterday’s needs.
Sigsoftmax: Reanalysis of the Softmax Bottleneck
Softmax is an output activation function for modeling categorical probability distributions in many applications of deep learning. However, a recent study revealed that softmax can be a bottleneck of representational capacity of neural networks in language modeling (the softmax bottleneck). In this paper, we propose an output activation function for breaking the softmax bottleneck without additional parameters. We re-analyze the softmax bottleneck from the perspective of the output set of log-softmax and identify the cause of the softmax bottleneck. On the basis of this analysis, we propose sigsoftmax, which is composed of a multiplication of an exponential function and sigmoid function. Sigsoftmax can break the softmax bottleneck. The experiments on language modeling demonstrate that sigsoftmax and mixture of sigsoftmax outperform softmax and mixture of softmax, respectively.
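As a hedged illustration of the activation the abstract describes, the sketch below contrasts softmax with sigsoftmax, whose unnormalised score for class i is exp(z_i) multiplied by sigmoid(z_i); this follows the formulation stated in the abstract and is not a drop-in reproduction of the paper's full language-modeling setup.

```python
import numpy as np

def softmax(z):
    z = z - z.max()                      # softmax is shift-invariant, so this is safe
    e = np.exp(z)
    return e / e.sum()

def sigsoftmax(z):
    """Sigsoftmax: exp(z_i) * sigmoid(z_i), normalised to sum to one.
    Note that, unlike softmax, it is not invariant to shifting the logits,
    so no stabilising shift is applied here (fine for moderate logits)."""
    g = np.exp(z) * (1.0 / (1.0 + np.exp(-z)))
    return g / g.sum()

logits = np.array([2.0, -1.0, 0.5])
print(softmax(logits), sigsoftmax(logits))
```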
Scrap your boilerplate: a practical design pattern for generic programming
We describe a design pattern for writing programs that traverse data structures built from rich mutually-recursive data types. Such programs often have a great deal of "boilerplate" code that simply walks the structure, hiding a small amount of "real" code that constitutes the reason for the traversal. Our technique allows most of this boilerplate to be written once and for all, or even generated mechanically, leaving the programmer free to concentrate on the important part of the algorithm. These generic programs are much more adaptive when faced with data structure evolution because they contain many fewer lines of type-specific code. Our approach is simple to understand, reasonably efficient, and it handles all the data types found in conventional functional programming languages. It makes essential use of rank-2 polymorphism, an extension found in some implementations of Haskell. Further it relies on a simple type-safe cast operator.
Video games as a multifaceted medium: a review of quantitative social science research on video games and a typology of video game research approaches
• Much quantitative social science research has explored video games’ social impact using widely varied methods and approaches. • As light is sometimes studied as a wave and sometimes as a particle, video game research has used many perspectives. • It is difficult to compare some game research because studies often examine one social dimension of games while ignoring others. • Researchers exploring different video game dimensions are sometimes like the Indian parable of the blind men and the elephant. • A typology of social science research approaches to video games will aid comparison, synthesis, and expansion of research. • This review of video game research approaches identifies four distinct perspectives used in much video game research. • The “video games as stimulus” perspective includes research focused on effects of game content and features on users. • The “video games as avocation” perspective includes research focused on users of video games and their commitment to the medium. • The “video games as skill” perspective includes research focused on video games as a tool for developing skills and abilities. • The “video games as social environment” perspective includes research focused on social interaction between game users.
Sitagliptin/Metformin Versus Insulin Glargine Combined With Metformin in Obese Subjects With Newly Diagnosed Type 2 Diabetes
To compare the therapeutic effects of different regimens in Chinese obese type 2 diabetes mellitus (T2DM) patients. From October 2013 to July 2014, a total of 166 T2DM outpatients who attended the Shanghai Changhai Hospital and the Yijishan Hospital of Wannan Medical College were randomly assigned to an experimental group receiving sitagliptin/metformin combined with a low-caloric diet (n = 115) or a control group receiving insulin glargine combined with metformin (n = 51). Inclusion criteria were body mass index (BMI) ≥ 25 kg/m2 and a diagnosis of T2DM with glycosylated hemoglobin (HbA1c) >9%. Main outcome parameters were fasting plasma glucose, postprandial plasma glucose, BMI, HbA1c, fasting C-peptide, 2-h postprandial C-peptide, triglyceride (TG), total cholesterol (TC), high-density cholesterol (HDL-C), and low-density cholesterol (LDL-C), which were determined by the 75 g steamed-bun meal tolerance test before and 4, 8, 12, and 24 weeks after the treatment started. Treatment costs and quality of life were also assessed. BMI, HbA1c, TG, TC, and LDL were reduced significantly more (P < 0.000), and HbA1c targets were achieved significantly more often, in the experimental group than in the control group (<6.5% in 24 [20.87%] vs 2 [3.92%], P < 0.001; <7% in 65 [56.52%] vs 12 [23.53%], P < 0.001). Quality of life scores increased more in the experimental group than in the control group (P < 0.001). The medication costs for the experimental group were less than for the other regimens. For obese T2DM patients with a glycosylated hemoglobin level >9%, oral sitagliptin/metformin combined with a low-caloric diet effectively and economically maintained glycemic control and significantly improved quality of life.
Large-scale Artificial Neural Network: MapReduce-based Deep Learning
Faced with the continuously increasing scale of data, the original back-propagation neural network based machine learning algorithm presents two non-trivial challenges: the huge amount of data makes it difficult to maintain both efficiency and accuracy, and redundant data aggravates the system workload. This project is mainly focused on solving these issues by combining a deep learning algorithm with a cloud computing platform to deal with large-scale data. A MapReduce-based handwriting character recognizer is designed in this project to verify the efficiency improvement this mechanism achieves when training on practical large-scale data. Careful discussion and experiments illustrate how the deep learning algorithm works to train handwritten digit data, how MapReduce is implemented on the deep learning neural network, and why this combination accelerates computation. Besides performance, scalability and robustness are addressed in this report as well. Our system comes with two demonstration programs that visually illustrate our handwritten digit recognition/encoding application.
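A hedged sketch of the data-parallel pattern the report describes: the map step computes gradients on each data shard and the reduce step averages them before the weight update. A linear model stands in for the back-propagation network, and the shard layout is an assumption, not the authors' exact recognizer.

```python
import numpy as np

def map_gradient(weights, shard):
    """Map step: one worker computes the gradient of a squared-error loss
    over its own data shard (a linear model stands in for the network)."""
    X, y = shard
    err = X @ weights - y
    return X.T @ err / len(y)

def reduce_gradients(grads):
    """Reduce step: average the per-shard gradients before the update."""
    return np.mean(grads, axis=0)

# Toy usage: two shards, a few synchronous "MapReduce" training rounds.
rng = np.random.default_rng(0)
shards = [(rng.normal(size=(100, 5)), rng.normal(size=100)) for _ in range(2)]
w = np.zeros(5)
for _ in range(50):
    w -= 0.1 * reduce_gradients([map_gradient(w, s) for s in shards])
print(w)
```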
Combining Traditional Marketing and Viral Marketing with Amphibious Influence Maximization
In this paper, we propose the amphibious influence maximization (AIM) model that combines traditional marketing via content providers and viral marketing to consumers in social networks in a single framework. In AIM, a set of content providers and consumers form a bipartite network while consumers also form their social network, and influence propagates from the content providers to consumers and among consumers in the social network following the independent cascade model. An advertiser needs to select a subset of seed content providers and a subset of seed consumers, such that the influence from the seed providers passing through the seed consumers could reach a large number of consumers in the social network in expectation. We prove that the AIM problem is NP-hard to approximate to within any constant factor via a reduction from Feige's k-prover proof system for 3-SAT5. We also give evidence that even when the social network graph is trivial (i.e. has no edges), a polynomial time constant factor approximation for AIM is unlikely. However, when we assume that the weighted bi-adjacency matrix that describes the influence of content providers on consumers is of constant rank, a common assumption often used in recommender systems, we provide a polynomial-time algorithm that achieves approximation ratio of (1-1/e-ε)^3 for any (polynomially small) ε > 0. Our algorithmic results still hold for a more general model where cascades in social network follow a general monotone and submodular function.
Mitoxantrone combined with paclitaxel as salvage therapy for platinum-refractory ovarian cancer: laboratory study and clinical pilot trial.
This report describes preclinical and early clinical investigations of the mitoxantrone/paclitaxel combination (NT) for patients with platinum-refractory ovarian cancer. The preclinical activity of NT was studied ex vivo, evaluating native tumor specimens with the ATP tumor chemosensitivity assay. Of 24 tumors tested, 20 (83%) were sensitive to NT, whereas 7 (29%) responded to mitoxantrone and 8 (33%) responded to paclitaxel. In the majority of tumors assayed (19 of 24), potentiating or major independent effects between both agents were found. Subsequently, a clinical pilot trial of NT was initiated for patients with platinum-refractory ovarian cancer. Patients had failed one to four (median, two) prior chemotherapy regimens. In 11 cases, NT was administered every three weeks with 8 mg/m2 mitoxantrone and 180 mg/m2 paclitaxel (NT-I). Seven patients were treated biweekly with 6 mg/m2 mitoxantrone and weekly with 100 mg/m2 paclitaxel (NT-II). During 92 NT courses, myelosuppression with leucopenia, anemia, and thrombocytopenia was the limiting toxicity, occurring more frequently with NT-II. No patient required hospitalization due to any life-threatening complication. Five complete and nine partial remissions were observed with both NT-I and NT-II, accounting for an overall 78% response rate, with a median progression-free survival of 40 weeks. One patient showed early progression during therapy. Currently, three patients (NT-I, two; NT-II, one) have died due to progressive relapsed ovarian cancer, so that the median overall survival is not reached after a median follow-up of 40.5+ weeks. Both schedules were found to be equal in terms of response rate and overall survival. NT is highly active and practical for salvage treatment of ovarian cancer. NT-II may be preferred due to both clinical activity and patients' acceptance. However, NT-I seems to be a less myelotoxic alternative. Both schedules warrant further clinical investigation.
Automatic Text Categorization by Unsupervised Learning
The goal of text categorization is to classify documents into a certain number of predefined categories. Previous works in this area have used a large number of labeled training documents for supervised learning. One problem is that it is difficult to create the labeled training documents. While it is easy to collect unlabeled documents, it is not so easy to manually categorize them for creating training documents. In this paper, we propose an unsupervised learning method to overcome these difficulties. The proposed method divides the documents into sentences, and categorizes each sentence using keyword lists of each category and a sentence similarity measure. It then uses the categorized sentences for refining. The proposed method shows a similar degree of performance compared with traditional supervised learning methods. Therefore, this method can be used in areas where low-cost text categorization is needed. It can also be used for creating training documents.
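A hedged sketch of the sentence-categorization step described above, using a simple word-overlap score as a stand-in for the paper's sentence-similarity measure; the keyword lists here are illustrative assumptions.

```python
def categorize_sentence(sentence, keyword_lists):
    """Assign a sentence to the category whose keyword list it overlaps most.
    keyword_lists maps category -> set of keywords (assumed, illustrative)."""
    words = set(sentence.lower().split())
    scores = {cat: len(words & kws) / (len(kws) or 1)
              for cat, kws in keyword_lists.items()}
    return max(scores, key=scores.get), scores

# Toy usage with two hand-built keyword lists.
keywords = {
    "sports": {"game", "team", "score", "season"},
    "politics": {"election", "minister", "parliament", "vote"},
}
print(categorize_sentence("The team won the final game of the season", keywords))
```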
Attitudes of Malaysian general hospital staff towards patients with mental illness and diabetes
BACKGROUND The context of the study is the increased assessment and treatment of persons with mental illness in general hospital settings by general health staff, as the move away from mental hospitals gathers pace in low- and middle-income countries. The purpose of the study was to examine whether general attitudes of hospital staff towards persons with mental illness, and the extent of mental health training and clinical experience, are associated with different attitudes and behaviours towards a patient with mental illness than towards a patient with a general health problem - diabetes. METHODS General hospital health professionals in Malaysia were randomly allocated one of two vignettes, one describing a patient with mental illness and the other a patient with diabetes, and invited to complete a questionnaire examining attitudes and health care practices in relation to the case. The questionnaires completed by respondents included questions on demographics, training in mental health, exposure in clinical practice to people with mental illness, attitudes and expected health care behaviour towards the patient in the vignette, and a general questionnaire exploring negative attitudes towards people with mental illness. Questionnaires with complete responses were received from 654 study participants. RESULTS Stigmatising attitudes towards persons with mental illness were common. Those responding to the mental illness vignette (N = 356) gave significantly lower ratings on care and support and higher ratings on avoidance and negative stereotype expectations compared with those responding to the diabetes vignette (N = 298). CONCLUSIONS The results support the view that, in the Malaysian setting, patients with mental illness may receive differential care from general hospital staff and that general stigmatising attitudes among professionals may influence their care practices. More direct measurement of clinician behaviours than can be achieved through survey methods is required to support these conclusions.
Growing Story Forest Online from Massive Breaking News
We describe our experience of implementing a news content organization system at Tencent that discovers events from vast streams of breaking news and evolves news story structures in an online fashion. Our real-world system has distinct requirements in contrast to previous studies on topic detection and tracking (TDT) and event timeline or graph generation, in that we 1) need to accurately and quickly extract distinguishable events from massive streams of long text documents that cover diverse topics and contain highly redundant information, and 2) must develop the structures of event stories in an online manner, without repeatedly restructuring previously formed stories, in order to guarantee a consistent user viewing experience. In solving these challenges, we propose Story Forest, a set of online schemes that automatically clusters streaming documents into events, while connecting related events in growing trees to tell evolving stories. We conducted an extensive evaluation, including detailed pilot user experience studies, based on 60 GB of real-world Chinese news data, although our ideas are not language-dependent and can easily be extended to other languages. The results demonstrate the superior capability of Story Forest to accurately identify events and organize news text into a logical structure that is appealing to human readers, compared to multiple existing algorithm frameworks.
Latent social structure in open source projects
Commercial software project managers design project organizational structure carefully, mindful of available skills, division of labour, geographical boundaries, etc. These organizational "cathedrals" are to be contrasted with the "bazaar-like" nature of Open Source Software (OSS) Projects, which have no pre-designed organizational structure. Any structure that exists is dynamic, self-organizing, latent, and usually not explicitly stated. Still, in large, complex, successful OSS projects, we do expect that subcommunities will form spontaneously within the developer teams. Studying these subcommunities and their behavior can shed light on how successful OSS projects self-organize. This phenomenon could well hold important lessons for how commercial software teams might be organized. Building on well-established techniques for detecting community structure in complex networks, we extract and study latent subcommunities from the email social network of several projects: Apache HTTPD, Python, PostgreSQL, Perl, and Apache ANT. We then validate them with software development activity history. Our results show that subcommunities do indeed spontaneously arise within these projects as the projects evolve. These subcommunities manifest most strongly in technical discussions, and are significantly connected with collaboration behaviour.
Towards systems engineering-a personal view of progress
The article is written from the standpoint of an industrial consumer of tools, methods, and theories as they concern the building of software-intensive IT (information technology) systems. A personal view is given of the contribution that advances in software engineering and related disciplines have made or might make to the job of improving the way in which systems might be constructed in a predictable, cost-effective manner that meets the customer's requirements.
GlobalFS: A Strongly Consistent Multi-site File System
This paper introduces GlobalFS, a POSIX-compliant geographically distributed file system. GlobalFS builds on two fundamental building blocks, an atomic multicast group communication abstraction and multiple instances of a single-site data store. We define four execution modes and show how all file system operations can be implemented with these modes while ensuring strong consistency and tolerating failures. We describe the GlobalFS prototype in detail and report on an extensive performance assessment. We have deployed GlobalFS across all EC2 regions and show that the system scales geographically, providing performance comparable to other state-of-the-art distributed file systems for local commands and allowing for strongly consistent operations over the whole system. The code of GlobalFS is available as open source.
Improving Multi-class Text Classification with Naive Bayes
There are numerous text documents available in electronic form. More and more are becoming available every day. Such documents represent a massive amount of information that is easily accessible. Seeking value in this huge collection requires organization; much of the work of organizing documents can be automated through text classification. The accuracy and our understanding of such systems greatly influences their usefulness. In this paper, we seek 1) to advance the understanding of commonly used text classification techniques, and 2) through that understanding, improve the tools that are available for text classification. We begin by clarifying the assumptions made in the derivation of Naive Bayes, noting basic properties and proposing ways for its extension and improvement. Next, we investigate the quality of Naive Bayes parameter estimates and their impact on classification. Our analysis leads to a theorem which gives an explanation for the improvements that can be found in multiclass classification with Naive Bayes using Error-Correcting Output Codes. We use experimental evidence on two commonly-used data sets to exhibit an application of the theorem. Finally, we show fundamental flaws in a commonly-used feature selection algorithm and develop a statistics-based framework for text feature selection. Greater understanding of Naive Bayes and the properties of text allows us to make better use of it in text classification.
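For reference, a minimal multinomial Naive Bayes sketch of the kind of classifier the thesis analyzes (Laplace smoothing, log-space scoring); this is a generic textbook formulation, not the thesis's extended or ECOC-based variants.

```python
import numpy as np

def train_multinomial_nb(X, y, n_classes, alpha=1.0):
    """Multinomial Naive Bayes with Laplace smoothing.
    X is a document-term count matrix, y the class labels."""
    log_prior = np.log(np.bincount(y, minlength=n_classes) / len(y))
    log_lik = np.zeros((n_classes, X.shape[1]))
    for c in range(n_classes):
        counts = X[y == c].sum(axis=0) + alpha
        log_lik[c] = np.log(counts / counts.sum())
    return log_prior, log_lik

def predict_nb(X, log_prior, log_lik):
    # argmax over classes of log p(c) + sum_w count(w) * log p(w | c)
    return (X @ log_lik.T + log_prior).argmax(axis=1)

# Toy usage: 4 documents, 3 vocabulary terms, 2 classes.
X = np.array([[3, 0, 1], [2, 0, 0], [0, 4, 1], [0, 3, 2]])
y = np.array([0, 0, 1, 1])
prior, lik = train_multinomial_nb(X, y, n_classes=2)
print(predict_nb(X, prior, lik))
```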
Randomized phase II study of docetaxel and prednisone with or without OGX-011 in patients with metastatic castration-resistant prostate cancer.
PURPOSE To determine the clinical activity of OGX-011, an antisense inhibitor of clusterin, in combination with docetaxel/prednisone in patients with metastatic castration-resistant prostate cancer. PATIENTS AND METHODS Patients were randomly assigned 1:1 to receive docetaxel/prednisone either with (arm A) or without (arm B) OGX-011 640 mg intravenously weekly. The primary end point was the proportion of patients with a prostate-specific antigen (PSA) decline of ≥ 50% from baseline, with the experimental therapy being considered of interest if the proportion of patients with a PSA decline was more than 60%. Secondary end points were objective response rate, progression-free survival (PFS), overall survival (OS), and changes in serum clusterin. RESULTS Eighty-two patients were accrued, 41 to each arm. OGX-011 adverse effects included rigors and fevers. After cycle 1, median serum clusterin decreased by 26% in arm A and increased by 0.9% in arm B (P < .001). PSA declined by ≥ 50% in 58% of patients in arm A and 54% in arm B. Partial response occurred in 19% and 25% of patients in arms A and B, respectively. Median PFS and OS times were 7.3 months (95% CI, 5.3 to 8.8 months) and 23.8 months (95% CI, 16.2 months to not reached), respectively, in arm A and 6.1 months (95% CI, 3.7 to 8.6 months) and 16.9 months (95% CI, 12.8 to 25.8 months), respectively, in arm B. Baseline factors associated with improved OS on exploratory multivariate analysis were an Eastern Cooperative Oncology Group performance status of 0 (hazard ratio [HR], 0.27; 95% CI, 0.14 to 0.51), presence of bone or lymph node metastases only (HR, 0.45; 95% CI, 0.25 to 0.79), and treatment assignment to OGX-011 (HR, 0.50; 95% CI, 0.29 to 0.87). CONCLUSION Treatment with OGX-011 and docetaxel was well tolerated with evidence of biologic effect and was associated with improved survival. Further evaluation is warranted.
Type shifting in construction grammar: An integrated approach to aspectual coercion
Implicit type shifting, or coercion, appears to indicate a modular grammatical architecture, in which the process of semantic composition may add meanings absent from the syntax in order to ensure that certain operators, e.g., the progressive, receive suitable arguments (Jackendoff 1997; De Swart 1998). I will argue that coercion phenomena actually provide strong support for a sign-based model of grammar, in which rules of morphosyntactic combination can shift the designations of content words with which they combine. On this account, enriched composition is a by-product of the ordinary referring behavior of constructions. Thus, for example, the constraint which requires semantic concord between the syntactic sisters in the string a bottle is also what underlies the coerced interpretation found in a beer. If this concord constraint is stated for a rule of morphosyntactic combination, we capture an important generalization: a single combinatory mechanism, the construction, is responsible for both coerced and compositional meanings. Since both type-selecting constructions (e.g., the French Imparfait) and type-shifting constructions (e.g., English progressive aspect) require semantic concord between syntactic sisters, we account for the fact that constructions of both types perform coercion. Coercion data suggest that aspectual sensitivity is not merely a property of formally differentiated past tenses, as in French and Latin, but a general property of tense constructions, including the English present and past tenses.
Classification on ADHD with Deep Learning
Effective discrimination of attention deficit hyperactivity disorder (ADHD) using imaging and functional biomarkers would have a fundamental influence on public health. Usually, the discrimination is based on the standards of the American Psychiatric Association. In this paper, we modified the structure and parameters of a deep learning method according to the properties of ADHD data, in order to discriminate ADHD on the unique public ADHD-200 dataset. We predicted whether subjects were control, combined, inattentive or hyperactive from their frequency features. The results are greatly improved compared to the performance released by the competition. In addition, imbalance in the datasets influenced the classification results of the deep learning model. As far as we know, this is the first time that a deep learning method has been used for the discrimination of ADHD with fMRI data.
Sensorless PMSM Drive Based on Stator Feedforward Voltage Estimation Improved With MRAS Multiparameter Estimation
In order to reduce the adverse effect of parameter variation in position sensorless speed control of permanent magnet synchronous motor (PMSM) based on stator feedforward voltage estimation (FFVE), multiparameter estimation using a model reference adaptive system is proposed. Since the FFVE scheme relies on motor parameters, the stator resistance and rotor flux linkage are estimated and continuously updated in the FFVE model in a closed-loop fashion, and the sensitivity to multiparameter changes at low speed is eliminated. To improve the dynamics and stability of the overall system and eliminate transient oscillations in speed estimation, a phase-locked loop like speed estimation method is proposed, which is obtained by passing the q-axis proportional integrator (PI) current regulator output through a first-order filter in the FFVE scheme. The proposed control method is similar to V/f control as in induction motors; therefore, starting from zero speed is possible. The experimental tests are implemented with 1-kW PMSM drive controlled by a TMS320F28335 DSP. The proposed sensorless scheme is also compared with the classical sliding mode observer (SMO). Experimental results show that the proposed sensorless scheme exhibits greater stability at lower speed than the classical SMO under parameter detuning. Experimental results and stability analysis demonstrate the feasibility and effectiveness of the proposed sensorless scheme for PMSM under various load and speed conditions.
Multi-objective Moth Flame Optimization
In 2015, Mirjalili proposed a new nature-inspired meta-heuristic, Moth Flame Optimization (MFO). It is inspired by the behaviour of a moth at night, which either flies straight towards the moon or follows a spiral path towards a nearby artificial light source, aiming to reach a brighter destination that is treated as the global solution of an optimization problem. In this paper, the original MFO is suitably modified to handle multi-objective optimization problems, and the resulting algorithm is termed MOMFO. Concepts such as an archive grid, a coordinate-based distance for sorting, and non-dominance of solutions distinguish the proposed approach from the original single-objective MFO. The performance of the proposed MOMFO is demonstrated on six benchmark mathematical function optimization problems, achieving superior accuracy and lower computational time compared to the Non-dominated Sorting Genetic Algorithm-II (NSGA-II) and Multi-objective Particle Swarm Optimization (MOPSO).
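The non-dominance bookkeeping mentioned above can be illustrated with a short sketch of a Pareto archive update (all objectives assumed to be minimized); the archive grid and coordinate-based sorting distance of MOMFO are not reproduced here.

```python
# Minimal sketch of Pareto-dominance filtering for an external archive.
# Only the non-dominance step is shown; MOMFO's archive grid and
# coordinate-based distance sorting are omitted.
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, candidate):
    """Insert (position, objectives) into the archive if it is non-dominated."""
    pos, obj = candidate
    if any(dominates(a_obj, obj) for _, a_obj in archive):
        return archive                                   # candidate is dominated
    archive = [(p, o) for p, o in archive if not dominates(obj, o)]
    archive.append((pos, obj))
    return archive
```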
CT screening for lung cancer.
Recommendations against screening for lung cancer were based on the lack of a reduction in mortality in the screened group as compared with the control group in randomized controlled trials. These results were interpreted as showing that early detection of lung cancer as a result of screening did not decrease the mortality rate compared with detection after the presentation of symptoms in the populations being screened. Evidence, however, shows that earlier-stage intervention leads to substantially higher rates of survival. Screening, therefore, is an effective means to prevent deaths from this otherwise fatal disease. This article discusses the evidence for both CT and chest radiograph screening.
Suicide pact by drowning with bound wrists: a case of medico-legal importance.
Suicide pacts are uncommon and mainly committed by male-female pairs in a consortial relationship. The victims frequently choose methods such as hanging, poisoning, or using a firearm; however, a case of a suicide pact by drowning is rare in the forensic literature. We report a case in which a male and a female, both young adults, in a relationship of adopted "brother of convenience", were found drowned in a river. The victims were bound together at their wrists, which supported our conclusion that this was a suicide pact. The medico-legal importance of wrist binding in drowning cases is also discussed in this article.
Predictors of outcome from computer-based treatment for substance use disorders: Results from a randomized clinical trial.
BACKGROUND Although empirical evidence for the effectiveness of technology-mediated interventions for substance use disorders is rapidly growing, the role of patients' baseline characteristics in predicting the outcomes of technology-based therapy is largely unknown. METHOD Participants were randomly assigned to either standard methadone maintenance treatment or reduced standard treatment combined with the computer-based therapeutic education system (TES). An array of demographic and behavioral characteristics of participants (N=160) was measured at baseline. Opioid abstinence and treatment retention were measured weekly for a 52-week intervention period. A generalized linear model and Cox regression were used to estimate the predictive roles of baseline characteristics in treatment outcomes. RESULTS We found significant predictors of opioid abstinence and treatment retention within and across conditions. Among 21 baseline characteristics of participants, employment status, anxiety, and ambivalent attitudes toward substance use predicted better opioid abstinence in the reduced-standard-plus-TES condition compared to standard treatment. Participants who had used cocaine/crack in the past 30 days at baseline showed lower dropout rates in standard treatment, whereas those who had not used exhibited lower dropout rates in the reduced-standard-plus-TES condition. CONCLUSIONS This study is the first randomized controlled trial to evaluate, over a 12-month period, how various participant characteristics affect outcomes for treatments that do or do not include technology-based therapy. Compared to standard treatment alone, including TES as part of care was preferable for patients who were employed, highly anxious, or ambivalent about substance use, and it did not produce worse outcomes for any subgroup of participants.
An Introduction to Restricted Boltzmann Machines
Restricted Boltzmann machines (RBMs) are probabilistic graphical models that can be interpreted as stochastic neural networks. The increase in computational power and the development of faster learning algorithms have made them applicable to relevant machine learning problems. They have recently attracted much attention after being proposed as building blocks of multi-layer learning systems called deep belief networks. This tutorial introduces RBMs as undirected graphical models. The basic concepts of graphical models are introduced first; however, basic knowledge of statistics is presumed. Different learning algorithms for RBMs are discussed. As most of them are based on Markov chain Monte Carlo (MCMC) methods, an introduction to Markov chains and the required MCMC techniques is provided.
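Since most RBM learning algorithms are MCMC-based, a compact sketch of one contrastive-divergence (CD-1) update for a binary RBM may help; the sizes and learning rate below are arbitrary illustrative choices.

```python
# Sketch of one CD-1 update for a binary RBM (illustrative sizes and learning rate).
import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden, lr = 784, 128, 0.01
W = 0.01 * rng.standard_normal((n_visible, n_hidden))
b_v = np.zeros(n_visible)      # visible biases
b_h = np.zeros(n_hidden)       # hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0):
    """One CD-1 update from a batch of binary visible vectors v0 of shape (batch, n_visible)."""
    global W, b_v, b_h
    ph0 = sigmoid(v0 @ W + b_h)                        # P(h = 1 | v0)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)   # sample hidden states
    pv1 = sigmoid(h0 @ W.T + b_v)                      # reconstruction P(v = 1 | h0)
    ph1 = sigmoid(pv1 @ W + b_h)                       # P(h = 1 | reconstruction)
    batch = v0.shape[0]
    W += lr * (v0.T @ ph0 - pv1.T @ ph1) / batch       # approximate log-likelihood gradient
    b_v += lr * (v0 - pv1).mean(axis=0)
    b_h += lr * (ph0 - ph1).mean(axis=0)
```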
Pulse lead/lag timing detection for adaptive feedback and control based on optical spike-timing-dependent plasticity.
Biological neurons perform information processing using a model called pulse processing, which is both computationally efficient and scalable, adopting the best features of both analog and digital computing. Implementing pulse processing with photonics can result in bandwidths that are billions of times faster than biological neurons and substantially faster than electronics. Neurons have the ability to learn and adapt their processing based on experience through a change in the strength of synaptic connections in response to spiking activity. This mechanism is called spike-timing-dependent plasticity (STDP). Functionally, STDP constitutes a mechanism in which strengths of connections between neurons are based on the timing and order between presynaptic spikes and postsynaptic spikes, essentially forming a pulse lead/lag timing detector that is useful in feedback control and adaptation. Here we report for the first time the demonstration of optical STDP that is useful in pulse lead/lag timing detection and apply it to automatic gain control of a photonic pulse processor.
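The lead/lag behaviour of STDP can be summarized by the standard pair-based update rule sketched below; the amplitudes and time constants are generic illustrative values, not parameters of the optical implementation reported here.

```python
# Pair-based STDP rule: the sign of the pre/post timing difference decides
# whether the connection is strengthened (pre leads) or weakened (post leads),
# which is what makes STDP act as a pulse lead/lag detector.
import math

A_PLUS, A_MINUS = 0.05, 0.06          # potentiation / depression amplitudes (assumed)
TAU_PLUS, TAU_MINUS = 20e-3, 20e-3    # time constants in seconds (assumed)

def stdp_delta_w(t_pre, t_post):
    """Weight change for one pre/post pulse pair based on lead/lag timing."""
    dt = t_post - t_pre
    if dt > 0:      # presynaptic pulse leads: strengthen
        return A_PLUS * math.exp(-dt / TAU_PLUS)
    if dt < 0:      # postsynaptic pulse leads: weaken
        return -A_MINUS * math.exp(dt / TAU_MINUS)
    return 0.0
```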
Deep Generative Model using Unregularized Score for Anomaly Detection with Heterogeneous Complexity
Accurate and automated detection of anomalous samples in a natural image dataset can be accomplished with a probabilistic model for end-to-end modeling of images. Such images have heterogeneous complexity, however, and a probabilistic model overlooks simply shaped objects with small anomalies. This is because the probabilistic model assigns undesirably lower likelihoods to complexly shaped objects that are nevertheless consistent with set standards. To overcome this difficulty, we propose an unregularized score for deep generative models (DGMs), which are generative models leveraging deep neural networks. We found that the regularization terms of the DGMs considerably influence the anomaly score depending on the complexity of the samples. By removing these terms, we obtain an unregularized score, which we evaluated on a toy dataset and real-world manufacturing datasets. Empirical results demonstrate that the unregularized score is robust to the inherent complexity of samples and can be used to better detect anomalies.
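Assuming the DGM is a variational autoencoder, the scoring idea can be sketched as follows: the usual anomaly score (the negative evidence lower bound) is a reconstruction term plus a KL regularizer, and the unregularized score keeps only the reconstruction term. The `encoder` and `decoder` below are hypothetical modules; this is not the paper's exact model.

```python
# Sketch under the assumption of a Gaussian-prior VAE: compare the regularized
# score (reconstruction + KL) with the unregularized score (reconstruction only).
import torch

def anomaly_scores(x, encoder, decoder):
    """Return (regularized, unregularized) per-sample scores; higher means more anomalous."""
    mu, logvar = encoder(x)                                   # q(z | x) parameters
    z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterized sample
    x_hat = decoder(z)
    recon = ((x - x_hat) ** 2).flatten(1).sum(dim=1)          # reconstruction error
    kl = 0.5 * (mu ** 2 + logvar.exp() - 1.0 - logvar).sum(dim=1)
    return recon + kl, recon                                  # with / without the regularizer
```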
Mechanisms of self-incompatibility in flowering plants
Self-incompatibility is a widespread mechanism in flowering plants that prevents inbreeding and promotes outcrossing. The self-incompatibility response is genetically controlled by one or more multi-allelic loci, and relies on a series of complex cellular interactions between the self-incompatible pollen and pistil. Although self-incompatibility functions ultimately to prevent self-fertilization, flowering plants have evolved several unique mechanisms for rejecting the self-incompatible pollen. The self-incompatibility system in the Solanaceae makes use of a multi-allelic RNase in the pistil to block incompatible pollen tube growth. In contrast, the Papaveraceae system appears to have complex cellular responses such as calcium fluxes, actin rearrangements, and programmed cell death occurring in the incompatible pollen tube. Finally, the Brassicaceae system has a receptor kinase signalling pathway activated in the pistil leading to pollen rejection. This review highlights the recent advances made towards understanding the cellular mechanisms involved in these self-incompatibility systems and discusses the striking differences between these systems.
A Survey of Automatic Indexing Techniques for Thai Text Documents
With the rapidly increasing number of Thai text documents available in digital media and websites, it is important to find an efficient text indexing technique to facilitate search and retrieval. An efficient index would speed up the response time and improve the accessibility of the documents. Up to now, not much research in Thai text indexing has been conducted compared to more commonly used languages such as English or other European languages. In Thai text indexing, the extraction of indexing terms is a main issue because they cannot be specified automatically from text documents, owing to the non-segmented nature of Thai text. As a result, there are many challenges in indexing Thai text documents. The majority of Thai text indexing techniques can be divided into two main categories, language-dependent techniques and language-independent techniques, as will be described in this paper.
Deep Abstract Q-Networks
We examine the problem of learning and planning on high-dimensional domains with long horizons and sparse rewards. Recent approaches have shown great successes in many Atari 2600 domains. However, domains with long horizons and sparse rewards, such as Montezuma’s Revenge and Venture, remain challenging for existing methods. Methods using abstraction [5, 13] have been shown to be useful in tackling long-horizon problems. We combine recent techniques of deep reinforcement learning with existing model-based approaches using an expert-provided state abstraction. We construct toy domains that elucidate the problem of long horizons, sparse rewards and high-dimensional inputs, and show that our algorithm significantly outperforms previous methods on these domains. Our abstraction-based approach outperforms Deep Q-Networks [11] on Montezuma’s Revenge and Venture, and exhibits backtracking behavior that is absent from previous methods.
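A rough sketch of the two-level idea follows: an expert-provided abstraction maps raw observations to a small discrete abstract state, and values are learned over abstract transitions, while low-level controllers (e.g., DQNs, omitted here) handle moving between abstract states. The abstraction function and hyperparameters below are hypothetical and domain-specific, not the paper's implementation.

```python
# Sketch: tabular Q-learning over expert-provided abstract states.
# Low-level DQN sub-policies that navigate between abstract states are omitted.
from collections import defaultdict

GAMMA, ALPHA = 0.99, 0.1                 # illustrative hyperparameters
q_abstract = defaultdict(float)          # Q over (abstract_state, abstract_action)

def phi(observation):
    """Expert-provided abstraction, e.g. (room id, inventory) in Montezuma's Revenge."""
    raise NotImplementedError            # domain-specific, supplied by the designer

def update_abstract_q(s_abs, a_abs, reward, s_abs_next, next_actions):
    """One Q-learning step over an abstract transition."""
    best_next = max((q_abstract[(s_abs_next, a)] for a in next_actions), default=0.0)
    td_error = reward + GAMMA * best_next - q_abstract[(s_abs, a_abs)]
    q_abstract[(s_abs, a_abs)] += ALPHA * td_error
```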
Interactions of B-class complex proteins involved in tepal development in Phalaenopsis orchid.
In our previous studies, we identified four DEFICIENS (DEF)-like genes and one GLOBOSA (GLO)-like gene involved in floral organ development in Phalaenopsis equestris. Revealing the DNA binding properties and protein-protein interactions of these floral homeotic MADS-box protein complexes (PeMADS) in orchids is crucial for the elucidation of the unique orchid floral morphogenesis. In this study, the interactome of B-class PeMADS proteins was assayed by the yeast two-hybrid system (Y2H) and glutathione S-transferase (GST) pull-down assays. Furthermore, the DNA binding activities of these proteins were assessed by using electrophoretic mobility shift assay (EMSA). All four DEF-like PeMADS proteins interacted individually with the GLO-like PeMADS6 in Y2H assay, yet with different strengths of interaction. Generally, the PeMADS3/PeMADS4 lineage interacted more strongly with PeMADS6 than the PeMADS2/PeMADS5 lineage did. In addition, independent homodimer formation for both PeMADS4 (DEF-like) and PeMADS6 (GLO-like) was detected. The protein-protein interactions between pairs of PeMADS proteins were further confirmed by using a GST pull-down assay. Furthermore, both the PeMADS4 homodimer and the PeMADS6 homodimer/homomultimer per se were able to bind to the MADS-box protein-binding motif CArG. The heterodimeric complexes PeMADS2-PeMADS6, PeMADS4-PeMADS6 and PeMADS5-PeMADS6 showed CArG binding activity. Taken together, these results suggest that various complexes formed among different combinations of the five B-class PeMADS proteins may increase the complexity of their regulatory functions and thus specify the molecular basis of whorl morphogenesis and combinatorial interactions of floral organ identity genes in orchids.
Development of an evidence-based framework of factors contributing to patient safety incidents in hospital settings: a systematic review
OBJECTIVE The aim of this systematic review was to develop a 'contributory factors framework' from a synthesis of empirical work which summarises factors contributing to patient safety incidents in hospital settings. DESIGN A mixed-methods systematic review of the literature was conducted. DATA SOURCES Electronic databases (Medline, PsycInfo, ISI Web of knowledge, CINAHL and EMBASE), article reference lists, patient safety websites, registered study databases and author contacts. ELIGIBILITY CRITERIA Studies were included that reported data from primary research in secondary care aiming to identify the contributory factors to error or threats to patient safety. RESULTS 1502 potential articles were identified. 95 papers (representing 83 studies) which met the inclusion criteria were included, and 1676 contributory factors extracted. Initial coding of contributory factors by two independent reviewers resulted in 20 domains (eg, team factors, supervision and leadership). Each contributory factor was then coded by two reviewers to one of these 20 domains. The majority of studies identified active failures (errors and violations) as factors contributing to patient safety incidents. Individual factors, communication, and equipment and supplies were the other most frequently reported factors within the existing evidence base. CONCLUSIONS This review has culminated in an empirically based framework of the factors contributing to patient safety incidents. This framework has the potential to be applied across hospital settings to improve the identification and prevention of factors that cause harm to patients.
Grinding surface roughness measurement based on the co-occurrence matrix of speckle pattern texture.
Surface speckle pattern intensity distributions resulting from laser light scattering from a rough surface contain rich information about the geometrical and physical properties of the surface. A surface roughness measurement technique based on the texture analysis of surface speckle pattern texture images is put forward. In this technique, the speckle pattern texture images are taken with a simple setup consisting of a laser and a CCD camera. Our experimental results show that the surface roughness contained in the surface speckle pattern texture images has a good monotonic relationship with the energy feature of their gray-level co-occurrence matrices. After the measurement system is calibrated with a standard surface roughness specimen, the surface roughness of an object surface composed of the same material and machined by the same method as the standard specimen can be evaluated from a single speckle pattern texture image. The robustness of the speckle pattern texture characterization of surface roughness is also discussed. The technique can therefore be used for in-process surface roughness measurement.
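The texture feature in question can be computed directly: the sketch below builds a gray-level co-occurrence matrix (GLCM) for one pixel offset and returns its energy (the sum of squared normalized entries). The offset, number of gray levels, and the calibration against a roughness standard are assumptions for illustration.

```python
# Minimal numpy sketch: GLCM of a grayscale speckle image for one offset,
# and its energy (angular second moment). Offset and quantization are assumed.
import numpy as np

def glcm_energy(image, levels=32, dy=0, dx=1):
    """Energy of the GLCM for a non-negative pixel offset (dy, dx)."""
    img = np.asarray(image, dtype=float)
    q = np.clip((img - img.min()) / (np.ptp(img) + 1e-12) * levels, 0, levels - 1).astype(int)
    ref = q[: q.shape[0] - dy, : q.shape[1] - dx]   # reference pixels
    nbr = q[dy:, dx:]                               # neighbours at the given offset
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (ref.ravel(), nbr.ravel()), 1)  # accumulate co-occurrence counts
    glcm /= glcm.sum()
    return float(np.sum(glcm ** 2))                 # energy feature
```

A calibration curve mapping this energy value to roughness would then be fitted on a standard specimen, as the abstract describes.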
Deep Learning Based MIMO Communications
We introduce a novel physical layer scheme for single-user Multiple-Input Multiple-Output (MIMO) communications based on unsupervised deep learning using an autoencoder. This method extends prior work on the joint optimization of the physical layer representation and the encoding and decoding processes as a single end-to-end task by expanding the transmitter and receiver to the multi-antenna case. We introduce a widely used, domain-appropriate wireless channel impairment model (the Rayleigh fading channel) into the autoencoder optimization problem in order to directly learn a system that optimizes for it. We consider both spatial diversity and spatial multiplexing techniques in our implementation. Our deep learning-based approach demonstrates significant potential for learning schemes that approach and exceed the performance of methods widely used in existing wireless MIMO systems. We discuss how the proposed scheme can be easily adapted for open-loop and closed-loop operation in spatial diversity and multiplexing modes, and extended to use only compact binary channel state information (CSI) as feedback.
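A simplified sketch of such a system is given below for a 2x2 link: the transmitter maps a message to complex symbols on two antennas, a random Rayleigh channel plus additive noise is applied inside the computation graph, and the receiver decodes from the received signal together with perfect channel knowledge. The architecture, dimensions, and SNR are placeholder assumptions, not the paper's exact configuration.

```python
# Sketch of a 2x2 MIMO autoencoder with an in-graph Rayleigh fading channel.
# Network sizes, SNR, and perfect-CSI input to the decoder are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

M, NT, NR = 16, 2, 2          # message set size, transmit / receive antennas
SNR_LIN = 10.0                # assumed linear signal-to-noise ratio

encoder = nn.Sequential(nn.Linear(M, 64), nn.ReLU(), nn.Linear(64, 2 * NT))
decoder = nn.Sequential(nn.Linear(2 * NR + 2 * NR * NT, 64), nn.ReLU(), nn.Linear(64, M))

def forward(messages):
    """messages: (batch,) integer labels in [0, M); returns logits over the M messages."""
    x = encoder(F.one_hot(messages, M).float())
    x = x / x.norm(dim=1, keepdim=True) * (NT ** 0.5)            # average power constraint
    x_c = torch.complex(x[:, :NT], x[:, NT:])                    # NT complex symbols
    H = torch.complex(torch.randn(x.shape[0], NR, NT),
                      torch.randn(x.shape[0], NR, NT)) / (2 ** 0.5)   # Rayleigh, E|h|^2 = 1
    n_std = (1.0 / (2.0 * SNR_LIN)) ** 0.5
    noise = torch.complex(torch.randn(x.shape[0], NR),
                          torch.randn(x.shape[0], NR)) * n_std
    y = torch.einsum('bij,bj->bi', H, x_c) + noise               # received signal
    rx = torch.cat([y.real, y.imag, H.real.flatten(1), H.imag.flatten(1)], dim=1)
    return decoder(rx)

# Training would minimize F.cross_entropy(forward(msgs), msgs) over random message batches.
```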
FINITE FORMULATION OF THE ELECTROMAGNETIC FIELD
The objective of this paper is to present an approach to electromagnetic field simulation based on the systematic use of the global (i.e. integral) quantities. In this approach, the equations of electromagnetism are obtained directly in a finite form starting from experimental laws without resorting to the differential formulation. This finite formulation is the natural extension of the network theory to electromagnetic field and it is suitable for computational electromagnetics.
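As a standard illustration (not taken from the paper) of what working directly with global quantities means, Faraday's law can be written exactly in finite form by relating the electromotive force along a closed line to the magnetic flux through the surface it bounds, with no differential operators acting on the fields:

```latex
\mathcal{E}_{\partial S} \;=\; \oint_{\partial S} \mathbf{E}\cdot d\mathbf{l}
  \;=\; -\,\frac{d\Phi_S}{dt},
\qquad
\Phi_S \;=\; \int_S \mathbf{B}\cdot d\mathbf{S}
```

Here the electromotive force and the magnetic flux are the global quantities associated with the line and the surface, respectively.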
Early Clinical Outcomes After Transcatheter Aortic Valve Replacement Using a Novel Self-Expanding Bioprosthesis in Patients With Severe Aortic Stenosis Who Are Suboptimal for Surgery: Results of the Evolut R U.S. Study.
OBJECTIVES This study sought to evaluate this transcatheter aortic valve (TAV) bioprosthesis in patients who are poorly suited to surgical aortic valve (AV) replacement. BACKGROUND A novel self-expanding TAV bioprosthesis was designed to provide a low-profile delivery system, conformable annular sealing, and the ability to resheath and reposition during deployment. METHODS The Evolut R U.S. study included 241 patients with severe aortic stenosis who were deemed to be at least at high risk for surgery, treated at 23 clinical sites in the United States. Clinical outcomes at 30 days were evaluated using Valve Academic Research Consortium-2 criteria. An independent echocardiography laboratory was used to evaluate hemodynamic outcomes. RESULTS Patients were elderly (83.3 ± 7.2 years of age) and had high surgical risk (Society of Thoracic Surgeons predicted risk of mortality of 7.4 ± 3.4%). The majority of patients (89.5%) were treated by iliofemoral access. Resheathing or recapturing was performed in 22.6% of patients; more than 1 valve was required in 3 patients (1.3%). The 30-day outcomes included all-cause mortality (2.5%), disabling stroke (3.3%), major vascular complications (7.5%), life-threatening or disabling bleeding (7.1%), and new permanent pacemaker (16.4%). AV hemodynamics were markedly improved at 30 days: the mean AV gradient was reduced from 48.2 ± 13.0 mm Hg to 7.8 ± 3.1 mm Hg (p < 0.001) and AV area increased from 0.6 ± 0.2 cm2 to 1.9 ± 0.5 cm2 (p < 0.001). Moderate residual paravalvular leak was identified in 5.3% of patients. CONCLUSIONS We conclude that this novel self-expanding TAV bioprosthesis is safe and effective for the treatment of patients with severe aortic stenosis who are suboptimal candidates for surgery. (Medtronic CoreValve Evolut R U.S. Clinical Study; NCT02207569).
Manifold-based multi-objective policy search with sample reuse
Many real-world applications are characterized by multiple conflicting objectives. In such problems optimality is replaced by Pareto optimality and the goal is to find the Pareto frontier, a set of solutions representing different compromises among the objectives. Despite recent advances in multi-objective optimization, achieving an accurate representation of the Pareto frontier is still an important challenge. Building on recent advances in reinforcement learning and multi-objective policy search, we present two novel manifold-based algorithms to solve multi-objective Markov decision processes. These algorithms combine episodic exploration strategies and importance sampling to efficiently learn a manifold in the policy parameter space such that its image in the objective space accurately approximates the Pareto frontier. We show that episode-based approaches and importance sampling can lead to significantly better results in the context of multi-objective reinforcement learning. Evaluated on three multi-objective problems, our algorithms outperform state-of-the-art methods both in terms of quality of the learned Pareto frontier and sample efficiency.
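One ingredient of the sample-reuse scheme can be sketched as follows: when the search distribution over policy parameters is updated, returns of previously collected episodes are reweighted by importance sampling rather than discarded. The Gaussian parameter distributions below are an assumption for illustration; this is not the full manifold-based algorithm.

```python
# Sketch of episode reuse via self-normalized importance sampling over
# Gaussian search distributions in policy-parameter space (assumed form).
import numpy as np
from scipy.stats import multivariate_normal

def reused_return_estimate(thetas, returns, old_mean, old_cov, new_mean, new_cov):
    """Estimate expected return under the new distribution from old episodes.

    thetas: (n, d) policy parameters sampled under the old distribution;
    returns: (n,) returns observed for those parameters.
    """
    w = (multivariate_normal.pdf(thetas, mean=new_mean, cov=new_cov) /
         multivariate_normal.pdf(thetas, mean=old_mean, cov=old_cov))
    w = w / w.sum()                                  # self-normalized importance weights
    return float(np.sum(w * np.asarray(returns)))
```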
User modeling with personas
User demographic and behavior data obtained from observation of real users provide valuable information for designers. Yet such information can be misinterpreted if presented merely as statistical figures. Personas are fictitious user representations created to embody the behaviors and motivations that a group of real users might express, representing them during the project development process. This article describes the persona as an effective tool for building a descriptive model of users.
Closed-Loop Deep Brain Stimulation Is Superior in Ameliorating Parkinsonism
Continuous high-frequency deep brain stimulation (DBS) is a widely used therapy for advanced Parkinson's disease (PD) management. However, the mechanisms underlying DBS effects remain enigmatic and are the subject of an ongoing debate. Here, we present and test a closed-loop stimulation strategy for PD in the 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP) primate model of PD. Application of pallidal closed-loop stimulation leads to dissociation between changes in basal ganglia (BG) discharge rates and patterns, providing insights into PD pathophysiology. Furthermore, cortico-pallidal closed-loop stimulation has a significantly greater effect on akinesia and on cortical and pallidal discharge patterns than standard open-loop DBS and matched control stimulation paradigms. Thus, closed-loop DBS paradigms, by modulating pathological oscillatory activity rather than the discharge rate of the BG-cortical networks, may afford more effective management of advanced PD. Such strategies have the potential to be effective in additional brain disorders in which a pathological neuronal discharge pattern can be recognized.