Classification for High Resolution Remote Sensing Imagery Using a Fully Convolutional Network
As a variant of Convolutional Neural Networks (CNNs) in deep learning, the Fully Convolutional Network (FCN) model has achieved state-of-the-art performance on natural-image semantic segmentation. In this paper, an accurate classification approach for high-resolution remote sensing imagery based on an improved FCN model is proposed. First, we increase the density of the output class maps by introducing atrous convolution; second, we design a multi-scale network architecture with a skip-layer structure that makes it suitable for multi-resolution image classification. Finally, we further refine the output class map with Conditional Random Field (CRF) post-processing. Our classification model is trained on 70 GF-2 true-color images and tested on 4 other GF-2 images and 3 IKONOS true-color images. We also apply object-oriented classification, patch-based CNN classification, and the FCN-8s approach to the same images for comparison. The experiments show that, compared with the existing approaches, ours achieves a clear improvement in accuracy: the average precision, recall, and Kappa coefficient of our approach are 0.81, 0.78, and 0.83, respectively. The experiments also confirm that our approach is well suited to multi-resolution image classification.
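A minimal sketch of the atrous (dilated) convolution idea in PyTorch, with illustrative layer sizes (this is not the authors' model): the dilated kernel widens the receptive field without downsampling, which is what keeps the output class map dense.

```python
import torch
import torch.nn as nn

x = torch.randn(1, 3, 256, 256)          # a 256x256 true-color tile

# An ordinary 3x3 conv vs. an atrous conv with dilation 2: the atrous version
# enlarges the receptive field (effective kernel 5x5) without extra parameters
# or downsampling, which is how FCN-style models densify their class maps.
conv = nn.Conv2d(3, 64, kernel_size=3, padding=1)
atrous = nn.Conv2d(3, 64, kernel_size=3, padding=2, dilation=2)

print(conv(x).shape)    # torch.Size([1, 64, 256, 256])
print(atrous(x).shape)  # torch.Size([1, 64, 256, 256]) -- same resolution, wider context
```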
Systematic Poisoning Attacks on and Defenses for Machine Learning in Healthcare
Machine learning is being used in a wide range of application domains to discover patterns in large datasets. Increasingly, the results of machine learning drive critical decisions in applications related to healthcare and biomedicine. Such health-related applications are often sensitive, and thus, any security breach would be catastrophic. Naturally, the integrity of the results computed by machine learning is of great importance. Recent research has shown that some machine-learning algorithms can be compromised by augmenting their training datasets with malicious data, leading to a new class of attacks called poisoning attacks. A hindered diagnosis may have life-threatening consequences, and a false diagnosis may not only prompt users to distrust the machine-learning algorithm, and even abandon the entire system, but also cause patient distress. In this paper, we present a systematic, algorithm-independent approach for mounting poisoning attacks across a wide range of machine-learning algorithms and healthcare datasets. The proposed attack procedure generates input data, which, when added to the training set, can either cause the results of machine learning to have targeted errors (e.g., increase the likelihood of classification into a specific class), or simply introduce arbitrary errors (incorrect classification). These attacks may be applied to both fixed and evolving datasets. They can be applied even when only statistics of the training dataset are available or, in some cases, even without access to the training dataset, although at a lower efficacy. We establish the effectiveness of the proposed attacks using a suite of six machine-learning algorithms and five healthcare datasets. Finally, we present countermeasures against the proposed generic attacks that are based on tracking and detecting deviations in various accuracy metrics, and benchmark their effectiveness.
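A toy illustration of the poisoning idea (a generic label-flipping attack, not the paper's algorithm-independent procedure; the dataset, model, and poison-crafting heuristic are assumptions):

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def accuracy_with_poison(frac):
    """Append `frac` poisoned points (near the class-1 mean, wrong label) to the training set."""
    n_poison = int(frac * len(X_tr))
    rng = np.random.default_rng(0)
    # Craft points that look like class 1 but are labeled 0, biasing the boundary.
    mean1 = X_tr[y_tr == 1].mean(axis=0)
    Xp = mean1 + rng.normal(scale=0.05 * np.abs(mean1), size=(n_poison, X_tr.shape[1]))
    yp = np.zeros(n_poison, dtype=int)
    model = LogisticRegression(max_iter=5000)
    model.fit(np.vstack([X_tr, Xp]), np.concatenate([y_tr, yp]))
    return model.score(X_te, y_te)

for frac in (0.0, 0.1, 0.3):
    print(f"poison fraction {frac:.1f}: test accuracy {accuracy_with_poison(frac):.3f}")
```

Tracking the drop in held-out accuracy as the poison fraction grows is exactly the kind of deviation the paper's countermeasures monitor.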
Ku-band sidearm orthomode transducer manufactured by additive layer manufacturing
This paper investigates the use of additive layer manufacturing (ALM) for waveguide components, based on two Ku-band sidearm orthomode transducers (OMTs). The advantages and disadvantages of ALM for RF waveguide components are discussed, and measurement results are compared to those of an identical OMT manufactured by conventional techniques. The paper concludes with an outlook on the potential of advanced manufacturing techniques for RF space applications, as well as ongoing development activities.
Long-term relations among prosocial-media use, empathy, and prosocial behavior.
Despite recent growth of research on the effects of prosocial media, processes underlying these effects are not well understood. Two studies explored theoretically relevant mediators and moderators of the effects of prosocial media on helping. Study 1 examined associations among prosocial- and violent-media use, empathy, and helping in samples from seven countries. Prosocial-media use was positively associated with helping. This effect was mediated by empathy and was similar across cultures. Study 2 explored longitudinal relations among prosocial-video-game use, violent-video-game use, empathy, and helping in a large sample of Singaporean children and adolescents measured three times across 2 years. Path analyses showed significant longitudinal effects of prosocial- and violent-video-game use on prosocial behavior through empathy. Latent-growth-curve modeling for the 2-year period revealed that change in video-game use significantly affected change in helping, and that this relationship was mediated by change in empathy.
Processing the Text of the Holy Quran: a Text Mining Study
The Holy Quran is the reference book for more than 1.6 billion Muslims around the world. Extracting information and knowledge from the Holy Quran is of high benefit both for people specialized in Islamic studies and for non-specialists. This paper initiates a series of research studies that aim to serve the Holy Quran and provide helpful and accurate information and knowledge to all human beings. The planned research studies also aim to lay out a framework to be used by researchers in the field of Arabic natural language processing by providing a "Golden Dataset" along with useful techniques and information that will advance this field further. The aim of this paper is to find an approach for analyzing Arabic text and then providing statistical information which might be helpful for people in this research area. In this paper the Holy Quran text is preprocessed, and then different text mining operations are applied to it to reveal simple facts about the terms of the Holy Quran. The results show a variety of characteristics of the Holy Quran, such as its most important words, its word cloud, and the chapters with high term frequencies. All these results are based on term frequencies calculated using both the Term Frequency (TF) and Term Frequency-Inverse Document Frequency (TF-IDF) methods. Keywords: Holy Quran; Text Mining; Arabic Natural Language Processing
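A minimal sketch of the TF and TF-IDF computations the study relies on, using a toy English corpus as a stand-in for the Arabic text (the vocabulary and "chapters" are invented for illustration):

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

chapters = [
    "praise mercy light guidance",
    "mercy mercy patience light",
    "guidance patience charity light",
]

tf = CountVectorizer().fit(chapters)        # raw term frequencies
tfidf = TfidfVectorizer().fit(chapters)     # frequencies discounted by document frequency

print(dict(zip(tf.get_feature_names_out(),
               tf.transform(chapters).toarray().sum(axis=0))))
print(dict(zip(tfidf.get_feature_names_out(),
               tfidf.transform(chapters).toarray().sum(axis=0).round(2))))
```

A term like "light" scores high under TF but is discounted by TF-IDF because it appears in every chapter, which is why the two rankings of "important words" differ.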
Consumer health information seeking on the Internet: the state of the art.
Increasingly, consumers engage in health information seeking via the Internet. Taking a communication perspective, this review argues why public health professionals should be concerned about the topic, considers potential benefits, synthesizes quality concerns, identifies criteria for evaluating online health information and critiques the literature. More than 70 000 websites disseminate health information; in excess of 50 million people seek health information online, with likely consequences for the health care system. The Internet offers widespread access to health information, and the advantages of interactivity, information tailoring and anonymity. However, access is inequitable and use is hindered further by navigational challenges due to numerous design features (e.g. disorganization, technical language and lack of permanence). Increasingly, critics question the quality of online health information; limited research indicates that much is inaccurate. Meager information-evaluation skills add to consumers' vulnerability, and reinforce the need for quality standards and widespread criteria for evaluating health information. Extant literature can be characterized as speculative, comprised of basic 'how to' presentations, with little empirical research. Future research needs to address the Internet as part of the larger health communication system and take advantage of incorporating extant communication concepts. Not only should research focus on the 'net-gap' and information quality, it also should address the inherently communicative and transactional quality of Internet use. Both interpersonal and mass communication concepts open avenues for investigation and understanding the influence of the Internet on health beliefs and behaviors, health care, medical outcomes, and the health care system.
Matching as Nonparametric Preprocessing for Reducing Model Dependence in Parametric Causal Inference
Albert J. Weatherhead III University Professor, Harvard University, 2009 to the present. David Florence Professor of Government, Harvard University, 2002 to 2009. Professor of Government, Department of Government, Harvard University, 1990 to 2002. John L. Loeb Associate Professor of the Social Sciences, Department of Government, Harvard University, 1989. Associate Professor, Department of Government, Harvard University, 1987 to 1989. Visiting Assistant Professor, Department of Political Science, University of Wisconsin, Madison, Summer 1985. Assistant Professor, Department of Politics, New York University, September, 1984 to 1987.
A System for Multi-step Mobile Manipulation: Architecture, Algorithms, and Experiments
Household manipulation presents a challenge to robots because it requires perceiving a variety of objects, planning multi-step motions, and recovering from failure. This paper presents practical techniques that improve performance in these areas by considering the complete system in the context of this specific domain. We validate these techniques on a table-clearing task that involves loading objects into a tray and transporting it. The results show that these techniques improve success rate and task completion time by incorporating expected real-world performance into the system design.
A 2.1 MHz Crystal Oscillator Time Base with a Current Consumption under 500 nA
A micro-power circuit is encapsulated with a 2.1 MHz ZT-cut quartz in a vacuum package. The oscillator core has two complementary active MOSFETs and amplitude stabilization. New coupling and biasing circuits, together with dynamic frequency dividers, allow a frequency stability of ±2 ppm to be achieved down to 1.8 V with a current consumption under 0.5 µA.
The case study of kindergarten design towards low carbon and green architecture
The meaning of green architecture and low-carbon architecture is discussed. Through a case study of the design strategies used in the Xiaoshicheng kindergarten design, a range of design methods and their applications are developed, taking into account climate characteristics, architectural function, and building technology.
Safety and tolerability of velafermin (CG53135-05) in patients receiving high-dose chemotherapy and autologous peripheral blood stem cell transplant
The objective of this study was to evaluate the safety and tolerability of velafermin in patients at risk of developing severe oral mucositis (OM) from chemotherapy. This study was a single-center, open-label, single-dose escalation, phase I trial in patients undergoing high-dose chemotherapy (HDCT) and autologous peripheral blood stem cell transplant (PBSCT). Velafermin was administered 24 h after stem cell infusion as a single intravenous dose infused over 15 min. Clinical safety variables were assessed and OM status scored daily for 30 days using the World Health Organization (WHO) grading scale. Thirty patients were treated with velafermin at doses of 0.03 (n = 10), 0.1 (n = 10), 0.2 (n = 8), or 0.33 mg/kg (n = 2). Patients were diagnosed with multiple myeloma (n = 16), non-Hodgkin’s lymphoma (n = 12), acute myelogenous leukemia (n = 1), or desmoplastic small round cell tumor (n = 1). Velafermin was well tolerated at doses up to 0.2 mg/kg. There were no drug-related serious adverse events. No patient discontinued because of adverse events; however, two patients administered 0.33 mg/kg developed adverse reactions immediately after infusion of the study drug. No other patients were treated at this dose level. The most frequent (>35% of patients) treatment-emergent adverse events were diarrhea, fatigue, pyrexia, vomiting, and nausea. Most adverse events were mild or moderate and resolved the same day without sequelae. Eight (27%) patients developed WHO grade 3 or 4 OM during the study; seven of these patients received high-dose melphalan as a conditioning regimen. Velafermin was well tolerated by autologous PBSCT patients at doses up to 0.2 mg/kg.
A Solar Cell Stacked Multi-Slot Quad-Band PIFA for GSM, WLAN and WiMAX Networks
This letter presents a novel low-profile quad-band solar PIFA which has the potential to be employed in self-powered low-power GSM 1800, 2.4 GHz band WLAN and 2.3/3.3/5.8 GHz band WiMAX networks. The multi-slot loaded radiating PIFA element, consisting of W-L shaped slots stacked with a polycrystalline silicon (poly-Si) solar cell operating as a parasitic patch element, enables the proposed solar PIFA to operate at the center frequency bands of 1.8, 2.4, 3.4, and 5.8 GHz with measured impedance bandwidths of 16.7%, 9.16%, 7.65%, and 3.45%, respectively. By incorporating a stacked poly-Si solar cell as a parasitic patch element, an adequate solar efficiency of 14.5% can be achieved, generating a dc power output of 44 mW.
The YAGO-NAGA approach to knowledge discovery
This paper gives an overview on the YAGO-NAGA approach to information extraction for building a conveniently searchable, large-scale, highly accurate knowledge base of common facts. YAGO harvests infoboxes and category names of Wikipedia for facts about individual entities, and it reconciles these with the taxonomic backbone of WordNet in order to ensure that all entities have proper classes and the class system is consistent. Currently, the YAGO knowledge base contains about 19 million instances of binary relations for about 1.95 million entities. Based on intensive sampling, its accuracy is estimated to be above 95 percent. The paper presents the architecture of the YAGO extractor toolkit, its distinctive approach to consistency checking, its provisions for maintenance and further growth, and the query engine for YAGO, coined NAGA. It also discusses ongoing work on extensions towards integrating fact candidates extracted from natural-language text sources.
Layered foil as an alternative to litz wire: Multiple methods for equal current sharing among layers
Litz wire is useful for high-frequency power applications but has high cost, limited thermal conductivity, and decreased effectiveness above 1 MHz. Multiple parallel layers of foil have the potential to overcome these limitations, but techniques are needed to enforce current sharing between the foil layers, as is accomplished by twisting in litz wire. Four strategies for this are presented: already-known techniques for interchanging foil layer positions, which are reviewed, and three new approaches: 1) balancing flux linkage by adjusting spacing between layers, 2) using capacitance between layers to provide ballast impedance, and 3) adding miniature balancing transformers to the foil terminations. The methods are analyzed and their scopes of applicability are compared.
Exercise maintenance following pulmonary rehabilitation: effect of distractive stimuli.
STUDY OBJECTIVE To determine if distractive auditory stimuli (DAS) in the form of music would promote adherence to a walking regimen following completion of a pulmonary rehabilitation program (PRP) and, thereby, maintenance of gains achieved during the program. DESIGN Experimental, randomized, two-group design with testing at baseline, 4 weeks, and 8 weeks. SETTING Outpatient. PATIENTS Twenty-four patients (4 men and 20 women) with moderate-to-severe COPD (FEV(1) 41.3 +/- 13% predicted [mean +/- SD]). INTERVENTION Experimental group subjects (n = 12) were instructed to walk at their own pace for 20 to 45 min, two to five times a week, using DAS with a portable audiocassette player. The control group (n = 12) received the same instructions, but no DAS. MEASUREMENTS AND RESULTS Primary outcome measures were perceived dyspnea during activities of daily living (ADL) and 6-min walk (6MW) distance. Secondary outcome measures were anxiety, depressive symptoms, health-related quality of life (QoL), global QoL, and breathlessness and fatigue at completion of the 6MW. In addition, all subjects recorded the distance and time walked using self-report (pedometers and daily logs). There was a significant decrease in perceived dyspnea during ADL (p = 0.0004) and a significant increase in 6MW distance (p = 0.0004) over time in the DAS group compared to the control group. DAS subjects increased 6MW distance 445 +/- 264 feet (mean +/- SD) from baseline to 8 weeks, whereas control subjects decreased 6MW distance by 169 +/- 154 feet. No significant differences were noted for the remaining variables. The cumulative distance walked by the DAS group was 19.1 +/- 16.7 miles compared to 15.4 +/- 8.0 miles for the control group, a 24% difference (p = 0.49). Despite this difference, self-report exercise log data were similar for the two groups. CONCLUSION Subjects who used DAS while walking had improved functional performance and decreased perceptions of dyspnea, whereas control subjects could not maintain post-PRP gains. DAS is a simple, cost-effective strategy that may have the potential to augment the effectiveness of post-PRP maintenance training.
Enhancing stance phase propulsion during level walking by combining fes with a powered exoskeleton for persons with paraplegia
This paper describes the design and implementation of a cooperative controller that combines functional electrical stimulation (FES) with a powered lower limb exoskeleton to provide enhanced hip extension during the stance phase of walking in persons with paraplegia. The controller utilizes two sources of actuation: the electric motors of the powered exoskeleton and the user's own muscles, activated by FES. It consists of a finite-state machine (FSM), a set of proportional-derivative (PD) controllers for the exoskeleton, and a cycle-to-cycle adaptive controller for muscle stimulation. Level-ground walking experiments were conducted with a single subject with complete T10 paraplegia. Results show a 34% reduction in electrical power requirements at the hip joints during the stance phase of the gait cycle with the cooperative controller compared to using electric motors alone.
Use of reporter genes to study the activity of promoters in ovarian granulosa cells.
Use of reporter genes provides a convenient way to study the activity and regulation of promoters and examine the rate and control of gene transcription. Many reporter genes and transfection methods can be efficiently used for this purpose. To investigate gene regulation and signaling pathway interactions during ovarian follicle development, we have examined promoter activities of several key follicle-regulating genes in the mouse ovary. In this chapter, we describe use of luciferase and beta-galactosidase genes as reporters and a cationic liposome mediated cell transfection method for studying regulation of activin subunit- and estrogen receptor alpha (ERalpha)-promoter activities. We have demonstrated that estrogen suppresses activin subunit gene promoter activity while activin increases ERalpha promoter activity and increases functional ER activity, suggesting a reciprocal regulation between activin and estrogen signaling in the ovary. We also discuss more broadly some key considerations in the use of reporter genes and cell-based transfection assays in endocrine research.
gvnn: Neural Network Library for Geometric Computer Vision
We introduce gvnn, a neural network library in Torch aimed towards bridging the gap between classic geometric computer vision and deep learning. Inspired by the recent success of Spatial Transformer Networks, we propose several new layers which are often used as parametric transformations on the data in geometric computer vision. These layers can be inserted within a neural network much in the spirit of the original spatial transformers and allow backpropagation to enable end-to-end learning of a network involving any domain knowledge in geometric computer vision. This opens up applications in learning invariance to 3D geometric transformation for place recognition, end-to-end visual odometry, depth estimation and unsupervised learning through warping with a parametric transformation for image reconstruction error.
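gvnn itself is written for Torch/Lua; the following PyTorch sketch shows the same spatial-transformer-style pattern of a differentiable geometric layer: an affine warp whose parameters receive gradients from an image-reconstruction loss (shapes and values are illustrative, not gvnn's API):

```python
import torch
import torch.nn.functional as F

img = torch.randn(1, 1, 64, 64)
theta = torch.tensor([[[1.0, 0.0, 0.1],
                       [0.0, 1.0, 0.0]]], requires_grad=True)  # learnable 2x3 affine

# Build a sampling grid from the transformation and warp the image through it.
grid = F.affine_grid(theta, img.size(), align_corners=False)
warped = F.grid_sample(img, grid, align_corners=False)

loss = (warped - img).pow(2).mean()    # photometric reconstruction error
loss.backward()
print(theta.grad)                      # gradients flow back to the transformation
```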
Developing Smart Car Parking System Using Wireless Sensor Networks
Car parking is a serious problem and one of the major contributors to traffic congestion in urban areas. This challenge is a result of the sharp increase in the number of automobiles on the roads. This paper presents the development of a smart parking system using wireless sensor networks. The system can monitor the state of every parking lot by deploying a magnetic sensor node on each lot, and it also identifies improper parking using an infrared sensor to check whether vehicles are properly parked. The system uses Xbee radio to transmit information to the base station, which performs the necessary processing, analysis, and interpretation to turn the received data into a usable and meaningful format for end users. The results obtained after qualitative testing of the developed prototype show that server concurrency utilization averages 12 users per minute. It was also observed that the system achieves an average response time of 1.414 seconds. This implies that the system is robust enough to handle a large number of users and is also fast enough in its responses, giving accurate information about the parking lot. With the system in place, traffic-related hazards, fuel wastage, and other related hazards could be reduced.
Willingness to Donate Organs Among People Living With HIV.
BACKGROUND With passage of the HIV Organ Policy Equity (HOPE) Act, people living with HIV (PLWH) can donate organs to PLWH awaiting transplant. Understanding knowledge and attitudes regarding organ donation among PLWH in the United States is critical to implementing the HOPE Act. METHODS PLWH were surveyed regarding their knowledge, attitudes, and beliefs about organ donation and transplantation at an urban academic HIV clinic in Baltimore, MD, between August 2016 and October 2016. Responses were compared using Fisher exact and χ² tests. RESULTS Among 114 survey respondents, median age was 55 years, 47.8% were female, and 91.2% were African American. Most were willing to be deceased donors (79.8%) or living donors (62.3%). Most (80.7%) were aware of the US organ shortage; however, only 24.6% knew about the HOPE Act, and only 21.1% were registered donors. Respondents who trusted the medical system or thought their organs would function adequately in recipients were more likely to be willing to be deceased donors (P < 0.001). Respondents who were concerned about surgery, worse health postdonation, or need for changes in HIV treatment because of donation were less likely to be willing to be living donors (P < 0.05 for all). Most believed that PLWH should be permitted to donate (90.4%) and that using HIV+ donor organs for transplant would reduce discrimination against PLWH (72.8%). CONCLUSIONS Many of the PLWH surveyed expressed willingness to be organ donors. However, knowledge about the HOPE Act and donor registration was low, highlighting a need to increase outreach.
A Brief Introduction into Machine Learning
The Machine Learning field evolved from the broad field of Artificial Intelligence, which aims to mimic intelligent abilities of humans by machines. In the field of Machine Learning one considers the important question of how to make machines able to "learn". Learning in this context is understood as inductive inference, where one observes examples that represent incomplete information about some "statistical phenomenon". In unsupervised learning one typically tries to uncover hidden regularities (e.g., clusters) or to detect anomalies in the data (for instance, some unusual machine function or a network intrusion). In supervised learning, there is a label associated with each example. It is supposed to be the answer to a question about the example. If the label is discrete, then the task is called a classification problem; otherwise, for real-valued labels, we speak of a regression problem. Based on these examples (including the labels), one is particularly interested in predicting the answer for other cases before they are explicitly observed. Hence, learning is not only a question of remembering but also of generalization to unseen cases.
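A toy illustration of the distinction drawn above, with discrete labels giving a classification problem and real-valued labels a regression problem (the datasets and models are illustrative choices, not part of the original text):

```python
from sklearn.datasets import load_iris, load_diabetes
from sklearn.linear_model import LogisticRegression, LinearRegression

Xc, yc = load_iris(return_X_y=True)       # discrete labels -> classification
Xr, yr = load_diabetes(return_X_y=True)   # real-valued labels -> regression

clf = LogisticRegression(max_iter=1000).fit(Xc, yc)
reg = LinearRegression().fit(Xr, yr)

# Generalization: predict answers for cases before they are explicitly observed.
print(clf.predict(Xc[:2]))
print(reg.predict(Xr[:2]))
```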
Cooccurrence of Multiple Sclerosis and Idiopathic Basal Ganglia Calcification
Multiple sclerosis (MS) is a chronic inflammatory demyelinating and neurodegenerative disease of the central nervous system that affects both white and gray matter. Idiopathic calcification of the basal ganglia is a rare neurodegenerative disorder of unknown cause that is characterized by sporadic or familial brain calcification. The concurrence of multiple sclerosis (MS) and idiopathic basal ganglia calcification (Fahr's disease) is a very rare event. In this study, we describe a cooccurrence of idiopathic basal ganglia calcification with multiple sclerosis. The association between this disease and MS is unclear and may well be coincidental.
Novel FPGA based Haar classifier face detection algorithm acceleration
We present here a novel approach that uses an FPGA to accelerate the Haar-classifier based face detection algorithm. With a highly pipelined microarchitecture and the abundant parallel arithmetic units in the FPGA, we have achieved real-time face detection performance with a very high detection rate and low false positives. Moreover, our approach is flexible toward the resources available on the FPGA chip. This work also provides an understanding of using FPGAs for implementing non-systolic vision algorithm acceleration. Our implementation is realized on a HiTech Global PCIe card that contains a Xilinx XC5VLX110T FPGA chip.
Component-based software engineering - new challenges in software development
The primary role of component-based software engineering is to address the development of systems as an assembly of parts (components), the development of parts as reusable entities, and the maintenance and upgrading of systems by customising and replacing such parts. This requires established methodologies and tool support covering the entire component and system lifecycle, including technological, organisational, marketing, legal, and other aspects. The traditional disciplines from software engineering need new methodologies to support component-based development.
Ontology of core data mining entities
In this article, we present OntoDM-core, an ontology of core data mining entities. OntoDM-core defines the most essential data mining entities in a three-layered ontological structure comprising a specification, an implementation and an application layer. It provides a representational framework for the description of mining structured data, and in addition provides taxonomies of datasets, data mining tasks, generalizations, data mining algorithms and constraints, based on the type of data. OntoDM-core is designed to support a wide range of applications/use cases, such as semantic annotation of data mining algorithms, datasets and results; annotation of QSAR studies in the context of drug discovery investigations; and disambiguation of terms in text mining. The ontology has been thoroughly assessed following the practices in ontology engineering, is fully interoperable with many domain resources and is easy to extend. OntoDM-core is available at http://www.ontodm.com.
An Efficient Finger-Knuckle-Print Based Recognition System Fusing SIFT and SURF Matching Scores
This paper presents a novel combination of local-local information for an efficient finger-knuckle-print (FKP) based recognition system which is robust to scale and rotation. The non-uniform brightness of the FKP due to its relatively curved surface is corrected and the texture is enhanced. The local features of the enhanced FKP are extracted using the scale invariant feature transform (SIFT) and the speeded up robust features (SURF). Corresponding features of the enrolled and the query FKPs are matched using the nearest-neighbour-ratio method, and the derived SIFT and SURF matching scores are then fused using a weighted sum rule. The proposed system is evaluated using the PolyU FKP database of 7920 images in both identification and verification modes. It is observed that the system performs with a CRR of 100% and an EER of 0.215%. Further, it is evaluated against various scales and rotations of the query image and is found to be robust for query images downscaled up to 60% and for any orientation of the query image.
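A sketch of the matching-and-fusion step described above, shown with SIFT only (SURF is patented and available only in opencv-contrib builds); the file names, ratio threshold, and fusion weight are illustrative assumptions, not the paper's tuned values:

```python
import cv2

# Replace with real enrolled/query FKP images; names are hypothetical.
enrolled = cv2.imread("enrolled_fkp.png", cv2.IMREAD_GRAYSCALE)
query = cv2.imread("query_fkp.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
k1, d1 = sift.detectAndCompute(enrolled, None)
k2, d2 = sift.detectAndCompute(query, None)

# Nearest-neighbour-ratio test: keep matches clearly better than the runner-up.
matcher = cv2.BFMatcher()
good = [m for m, n in matcher.knnMatch(d1, d2, k=2)
        if m.distance < 0.75 * n.distance]

sift_score = len(good) / max(len(k1), 1)
surf_score = 0.0    # placeholder: computed the same way from SURF features
w = 0.6             # illustrative fusion weight
fused = w * sift_score + (1 - w) * surf_score    # weighted sum rule
print(f"fused matching score: {fused:.3f}")
```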
Ray tracing: graphics for the masses
Although three-dimensional computer graphics have been around for several decades, there has been a surge of general interest towards the field in the last couple of years. Just a quick glance at the latest blockbuster movies is enough to see the public's fascination with the new generation of graphics. As exciting as graphics are, however, there is a definite barrier which prevents most people from learning about them. For one thing, there is a lot of math and theory involved. Beyond that, just getting a window to display even simple 2D graphics can often be a daunting task. In this article, we will talk about a powerful yet simple 3D graphics method known as ray tracing, which can be understood and implemented without dealing with much math or the intricacies of windowing systems. The only math we assume is a basic knowledge of vectors, dot products, and cross products. We will skip most of the theory behind ray tracing, focusing instead on a general overview of the technique from an implementation-oriented perspective. Full C++ source code for a simple, hardware-independent ray tracer is available online, to show how the principles described in this paper are applied.
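In that implementation-oriented spirit, here is a minimal ray-sphere intersection test, the core primitive of a ray tracer, using only the vector math (dot products) the article assumes; the scene values are arbitrary:

```python
import numpy as np

def intersect_sphere(origin, direction, center, radius):
    """Return the distance t to the nearest hit, or None if the ray misses.
    `direction` is assumed to be unit-length, so the quadratic's a = 1."""
    oc = origin - center
    b = 2.0 * np.dot(direction, oc)
    c = np.dot(oc, oc) - radius**2
    disc = b * b - 4.0 * c
    if disc < 0:
        return None                      # ray misses the sphere
    t = (-b - np.sqrt(disc)) / 2.0       # nearer of the two roots
    return t if t > 0 else None          # hits behind the origin don't count

ray_o = np.array([0.0, 0.0, 0.0])
ray_d = np.array([0.0, 0.0, 1.0])        # unit direction along +z
print(intersect_sphere(ray_o, ray_d,
                       center=np.array([0.0, 0.0, 5.0]), radius=1.0))  # 4.0
```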
SIFT-based local spectrogram image descriptor: a novel feature for robust music identification
Music identification via audio fingerprinting has been an active research field in recent years. In the real-world environment, music queries are often deformed by various interferences, which typically include signal distortions and time-frequency misalignments caused by time stretching, pitch shifting, etc. Therefore, robustness plays a crucial role in music identification techniques. In this paper, we propose to use scale invariant feature transform (SIFT) local descriptors computed from a spectrogram image as sub-fingerprints for music identification. Experiments show that these sub-fingerprints exhibit strong robustness against severe time stretching and pitch shifting simultaneously. In addition, a locality sensitive hashing (LSH)-based nearest sub-fingerprint retrieval method and a matching determination mechanism are applied for robust sub-fingerprint matching, which makes the identification efficient and precise. Finally, as an auxiliary function, we demonstrate that, by comparing the time-frequency locations of corresponding SIFT keypoints, the degree of time stretching and pitch shifting that a music query has undergone can be accurately estimated.
Learning Compact Hash Codes for Multimodal Representations Using Orthogonal Deep Structure
As large-scale multimodal data are ubiquitous in many real-world applications, learning multimodal representations for efficient retrieval is a fundamental problem. Most existing methods adopt shallow structures to perform multimodal representation learning. Due to the limited learning ability of shallow structures, they fail to capture the correlation of multiple modalities. Recently, multimodal deep learning was proposed and has proven its superiority in representing multimodal data due to its high nonlinearity. However, in order to learn compact and accurate representations, how to reduce the redundant information lying in the multimodal representations and incorporate the different complexities of different modalities in the deep models is still an open problem. To address this problem, in this paper we propose a hashing-based orthogonal deep model to learn accurate and compact multimodal representations. The method can better capture the intra-modality and inter-modality correlations to learn accurate representations. Meanwhile, in order to make the representations compact, the hashing-based model generates compact hash codes, and the proposed orthogonal structure reduces the redundant information lying in the codes by imposing an orthogonal regularizer on the weighting matrices. We also theoretically prove that, in this case, the learned codes are guaranteed to be approximately orthogonal. Moreover, considering the different characteristics of different modalities, effective representations can be attained with different numbers of layers for different modalities. Comprehensive experiments on three real-world datasets demonstrate a substantial gain of our method on retrieval tasks compared with existing algorithms.
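A minimal sketch of an orthogonal regularizer on a weighting matrix, in the generic Frobenius form ||WᵀW − I||²; the paper's exact formulation may differ, and the layer size here is illustrative:

```python
import torch

def orthogonal_penalty(W):
    """Frobenius-norm penalty pushing the columns of W toward orthogonality,
    which discourages redundant (correlated) code dimensions."""
    gram = W.t() @ W
    eye = torch.eye(W.shape[1], device=W.device)
    return ((gram - eye) ** 2).sum()

W = torch.randn(512, 64, requires_grad=True)   # illustrative layer size
loss = orthogonal_penalty(W)                   # added to the task loss with some weight
loss.backward()
print(loss.item(), W.grad.shape)
```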
Recurrent Attentional Reinforcement Learning for Multi-Label Image Recognition
Recognizing multiple labels of images is a fundamental but challenging task in computer vision, and remarkable progress has been attained by localizing semantic-aware image regions and predicting their labels with deep convolutional neural networks. However, the step of localizing hypothesis regions (region proposals) in these existing multi-label image recognition pipelines usually incurs redundant computation, e.g., generating hundreds of meaningless proposals with nondiscriminative information and extracting their features, and the spatial contextual dependencies among the localized regions are often ignored or over-simplified. To resolve these issues, this paper proposes a recurrent attention reinforcement learning framework to iteratively discover a sequence of attentional and informative regions that are related to different semantic objects, and to further predict label scores conditioned on these regions. Besides, our method explicitly models long-term dependencies among these attentional regions, which helps to capture semantic label co-occurrence and thus facilitates multi-label recognition. Extensive experiments and comparisons on two large-scale benchmarks (i.e., PASCAL VOC and MS-COCO) show that our model outperforms existing state-of-the-art methods in both accuracy and efficiency, while also explicitly associating image-level semantic labels with specific object regions.
A pilot study evaluating ceftriaxone and penicillin G as treatment agents for neurosyphilis in human immunodeficiency virus-infected individuals.
To compare intravenous (iv) ceftriaxone and penicillin G as therapy for neurosyphilis, blood and CSF were collected before and 14-26 weeks after therapy from 30 subjects infected with human immunodeficiency virus (HIV)-1 who had (1) rapid plasma reagin (RPR) test titers ≥1:16, (2) reactive serum treponemal tests, and (3) either reactive CSF-Venereal Disease Research Laboratory (VDRL) tests or CSF abnormalities: (a) CSF WBC values ≥20/µL or (b) CSF protein values ≥50 mg/dL. At baseline, more ceftriaxone recipients had skin symptoms and signs (6 [43%] of 14 vs. 1 [6%] of 16; P=.03), and more penicillin recipients had a history of neurosyphilis (7 [44%] of 16 vs. 1 [7%] of 14; P=.04). There was no difference in the proportion of subjects in each group whose CSF measures improved. Significantly more ceftriaxone recipients had a decline in serum RPR titers (8 [80%] of 10 vs. 2 [13%] of 15; P=.003), even after baseline RPR titer, skin symptoms and signs, and prior neurosyphilis were controlled for. Differences between the 2 groups limit comparisons between them. However, iv ceftriaxone may be an alternative to penicillin for treatment of HIV-infected patients with neurosyphilis and concomitant early syphilis.
Predicting Users' Personality from Instagram Pictures: Using Visual and/or Content Features?
Instagram is a popular social networking application that allows users to express themselves through the uploaded content and the different filters they can apply. In this study we look at personality prediction from Instagram picture features. We explore two different kinds of features that can be extracted from pictures: 1) visual features (e.g., hue, valence, saturation), and 2) content features (i.e., the content of the pictures). To collect data, we conducted an online survey where we asked participants to fill in a personality questionnaire and grant us access to their Instagram account through the Instagram API. We gathered 54,962 pictures from 193 Instagram users. Our results show that both visual and content features can be used to predict personality, and that the two perform, in general, equally well. Combining the two, however, does not increase predictive power; seemingly, the combination adds little value beyond what each feature set provides independently.
Active damping of drive train oscillations for an electrically driven vehicle
A problem encountered with electrically driven vehicles is resonance in the drive train caused by elasticity and gear play. Its disadvantageous effects are noticeable vibrations and high mechanical stresses due to torque oscillations. The oscillations can be damped using a control structure consisting of a nonlinear observer, which estimates the torque in the gear, and a controller, which computes a damping torque signal that is added to the driver's demand. The control algorithm was implemented in the existing motor control unit without any additional hardware cost. The controller was successfully tested in a test vehicle. The resonances can essentially be eliminated, and the controller copes satisfactorily with the backlash problem.
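A toy two-inertia simulation of the damping idea, with the damping torque proportional to the torsion rate (which the real system obtains from the nonlinear observer); all parameters are illustrative, not from the paper:

```python
J1, J2, k, dt = 0.05, 0.10, 50.0, 1e-3   # motor inertia, load inertia, shaft stiffness, step
Kd = 0.5                                  # damping gain (set to 0.0 to see undamped oscillation)

w1 = w2 = twist = 0.0                     # motor speed, load speed, shaft twist
for _ in range(5000):
    demand = 10.0                         # driver's torque demand
    shaft = k * twist                     # torque carried by the elastic shaft
    T = demand - Kd * (w1 - w2)           # damping term added to the demand
    w1 += dt * (T - shaft) / J1
    w2 += dt * shaft / J2
    twist += dt * (w1 - w2)

# With Kd > 0 the torque oscillation decays and the shaft torque settles near
# J2/(J1+J2) * demand, i.e. about 6.7 Nm for these toy values.
print(f"shaft torque after 5 s: {k * twist:.2f} Nm")
```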
An X‐ray method for determination of crystallinity as a function of depth from a polymer surface
Manufactured articles of semicrystalline polymers usually have a variation in texture (degree of crystallinity and crystal orientation) as a function of the depth from the external surface. This is often due to the fact that the crystallization process near the surfaces occurs under different conditions and at different rates relative to the bulk of the material. In this report we present an X-ray diffraction technique to elucidate the changes in the degree of crystallinity as a function of depth from the surface. Changes in the diffractometer configuration (reflection or transmission, slit width, etc.), the linear material absorption coefficient (using different wavelengths) and the diffraction angle have different effects on the scattering from layers at different depths. It is thus possible to define the material heterogeneity as a function of depth. This method is demonstrated using a film of vinylidene fluoride–tetrafluoroethylene (5 mol%) copolymer that has been crystallized asymmetrically by quenching on one side. The increase in crystallinity from a negligible value on the quenched side to the bulk value is presented.
DETECTING LASER SPOT IN SHOOTING SIMULATOR USING AN EMBEDDED CAMERA
This paper presents the application of an embedded camera system for detecting the laser spot in a shooting simulator. The proposed shooting simulator uses a specific target box, where the circular pattern target is mounted. The embedded camera is installed inside the box to capture the circular pattern target and the laser spot image. To localize the circular pattern automatically, two colored solid circles are painted on the target. This technique allows simple and fast color tracking to localize the circular pattern. The CMUcam4 is employed as the embedded camera; it is able to localize the target and detect the laser spot in real time at 30 fps. From the experimental results, the errors in calculating the shooting score and detecting the laser spot are 3.82% and 0.68%, respectively. Further, the proposed system provides a more accurate real-valued scoring system compared to the conventional integer-valued one.
The diagnosis and treatment of deltoid ligament lesions in supination–external rotation ankle fractures: a review
The supination-external rotation or Weber B type fracture exists as a stable and an unstable type. The unstable type has a medial malleolus fracture or deltoid ligament lesion in addition to a fibular fracture. The consensus is that the unstable type is best treated by open reduction and internal fixation. The diagnostic process for a medial ligament lesion has been well investigated, but there is no consensus as to the best method of assessment. The number of deltoid ruptures as a result of an external rotation mechanism is higher than previously believed. Deriving the injury mechanism could provide information about the likely ligamentous lesion in several fracture patterns. The use of the Lauge-Hansen classification system in the assessment of the initial X-ray images can be helpful in predicting the involvement of the deltoid ligament, but its reliability in terms of sensitivity and specificity is unknown. Clinical examination, stress radiography, magnetic resonance imaging, arthroscopy, and ultrasonography have been used to investigate medial collateral integrity in cases of ankle fractures. None of these has been shown to possess the combination of being cost-effective, reliable and easy to use; currently gravity stress radiography is favoured and, in cases of doubt, arthroscopy could be of value. There is disagreement as to the benefit of repair by suture of the deltoid ligament in cases of an acute rupture in combination with a lateral malleolar fracture. No evidence was found for suturing, but exploration is thought to be beneficial in case of interposition of medial structures.
Passenger Deletions Generate Therapeutic Vulnerabilities in Cancer
Inactivation of tumour-suppressor genes by homozygous deletion is a prototypic event in the cancer genome, yet such deletions often encompass neighbouring genes. We propose that homozygous deletions in such passenger genes can expose cancer-specific therapeutic vulnerabilities when the collaterally deleted gene is a member of a functionally redundant family of genes carrying out an essential function. The glycolytic gene enolase 1 (ENO1) in the 1p36 locus is deleted in glioblastoma (GBM), which is tolerated by the expression of ENO2. Here we show that short-hairpin-RNA-mediated silencing of ENO2 selectively inhibits growth, survival and the tumorigenic potential of ENO1-deleted GBM cells, and that the enolase inhibitor phosphonoacetohydroxamate is selectively toxic to ENO1-deleted GBM cells relative to ENO1-intact GBM cells or normal astrocytes. The principle of collateral vulnerability should be applicable to other passenger-deleted genes encoding functionally redundant essential activities and provide an effective treatment strategy for cancers containing such genomic events.
Semi-supervised model personalization for improved detection of learner's emotional engagement
Affective states play a crucial role in learning. Existing Intelligent Tutoring Systems (ITSs) fail to track the affective states of learners accurately. Without an accurate detection of such states, ITSs are limited in providing a truly personalized learning experience. In our longitudinal research, we have been working towards developing an empathic autonomous 'tutor' closely monitoring students in real-time using multiple sources of data to understand their affective states corresponding to emotional engagement. We focus on detecting learning-related states (i.e., 'Satisfied', 'Bored', and 'Confused'). We have collected 210 hours of data through authentic classroom pilots of 17 sessions. We collected information from two modalities: (1) appearance, which is collected from the camera, and (2) context-performance, which is derived from the content platform. The learning content of the content platform consists of two section types: (1) instructional, where students watch instructional videos, and (2) assessment, where students solve exercise questions. Since there are individual differences in expressing affective states, the detection of emotional engagement needs to be customized for each individual. In this paper, we propose a hierarchical semi-supervised model adaptation method to achieve highly accurate emotional engagement detectors. In the initial calibration phase, a personalized context-performance classifier is obtained. In the online usage phase, the appearance classifier is automatically personalized using the labels generated by the context-performance model. The experimental results show that personalization improves the performance of our generic emotional engagement detectors. The proposed semi-supervised hierarchical personalization method results in 89.23% and 75.20% F1 measures for the instructional and assessment sections, respectively.
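A minimal sketch of the two-phase personalization idea, where a calibrated context-performance model pseudo-labels the unlabeled appearance stream; the data, features, and models are illustrative stand-ins, not the authors' hierarchical method:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Calibration phase: fit a personalized context-performance classifier.
ctx_X = rng.normal(size=(200, 5))
ctx_y = (ctx_X[:, 0] > 0).astype(int)             # toy engagement labels
ctx_model = LogisticRegression(max_iter=1000).fit(ctx_X, ctx_y)

# Online phase: context-performance predictions become pseudo-labels for the
# unlabeled appearance features recorded at the same moments.
ctx_stream = rng.normal(size=(500, 5))
app_stream = rng.normal(size=(500, 20))
pseudo = ctx_model.predict(ctx_stream)
app_model = LogisticRegression(max_iter=1000).fit(app_stream, pseudo)
print(app_model.predict(app_stream[:5]))
```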
Design and Analysis of a Robust, Low-cost, Highly Articulated manipulator enabled by jamming of granular media
Hyper-redundant manipulators can be fragile, expensive, and limited in their flexibility due to the distributed and bulky actuators that are typically used to achieve the precision and degrees of freedom (DOFs) required. Here, a manipulator is proposed that is robust, high-force, low-cost, and highly articulated without employing traditional actuators mounted at the manipulator joints. Rather, local tunable stiffness is coupled with off-board spooler motors and tension cables to achieve complex manipulator configurations. Tunable stiffness is achieved by reversible jamming of granular media, which, by applying a vacuum to enclosed grains, causes the grains to transition between solid-like and liquid-like states. Experimental studies were conducted to identify grains with high strength-to-weight performance. A prototype of the manipulator is presented with performance analysis, with emphasis on speed, strength, and articulation. This novel design for a manipulator, and the use of jamming for robotic applications in general, could greatly benefit applications such as human-safe robotics and systems in which robots need to exhibit high flexibility to conform to their environments.
Alterations in Hypothalamus-Pituitary-Adrenal/Thyroid Axes and Gonadotropin-Releasing Hormone in the Patients with Primary Insomnia: A Clinical Research
The hypothalamus-pituitary-target gland axis is thought to be linked with insomnia, yet there has been a lack of further systematic studies to prove this. This study included 30 patients with primary insomnia (PI), 30 patients with depression-comorbid insomnia (DCI), and 30 healthy controls for exploring the alterations in the hypothalamus-pituitary-adrenal/thyroid axes' hormones and gonadotropin-releasing hormone (GnRH). The Pittsburgh Sleep Quality Index was used to evaluate sleep quality in all subjects. The serum concentrations of corticotrophin-releasing hormone (CRH), thyrotrophin-releasing hormone (TRH), GnRH, adrenocorticotropic hormone (ACTH), thyroid stimulating hormone (TSH), cortisol, total triiodothyronine (TT3), and total thyroxine (TT4) in the morning (between 0730 h and 0800 h) were detected. Compared to the controls, all hormonal levels were elevated in the insomniacs, except ACTH and TSH in the PI group. Compared to the DCI patients, the PI patients had higher levels of CRH, cortisol, TT3, and TT4 but lower levels of TRH, GnRH, and ACTH. Spearman's correlation analysis indicated that CRH, TRH, GnRH, TSH, cortisol, TT4, and TT3 were positively correlated with the severity of insomnia. The linear regression analysis showed that only CRH, GnRH, cortisol, and TT3 were affected by the PSQI scores among all subjects, and only CRH was included in the regression model by the "stepwise" method in the insomnia patients. Our results indicated that PI patients may have over-activity of the hypothalamus-pituitary-adrenal/thyroid axes and an elevated level of GnRH in the morning.
We know what @you #tag: does the dual role affect hashtag adoption?
Researchers and social observers have both believed that hashtags, as a new type of organizational objects of information, play a dual role in online microblogging communities (e.g., Twitter). On one hand, a hashtag serves as a bookmark of content, which links tweets with similar topics; on the other hand, a hashtag serves as the symbol of a community membership, which bridges a virtual community of users. Are the real users aware of this dual role of hashtags? Is the dual role affecting their behavior of adopting a hashtag? Is hashtag adoption predictable? We take the initiative to investigate and quantify the effects of the dual role on hashtag adoption. We propose comprehensive measures to quantify the major factors of how a user selects content tags as well as joins communities. Experiments using large scale Twitter datasets prove the effectiveness of the dual role, where both the content measures and the community measures significantly correlate to hashtag adoption on Twitter. With these measures as features, a machine learning model can effectively predict the future adoption of hashtags that a user has never used before.
Dynamic Optimization of Neural Network Structures Using Probabilistic Modeling
Deep neural networks (DNNs) are powerful machine learning models and have succeeded in various artificial intelligence tasks. Although various architectures and modules for DNNs have been proposed, selecting and designing the appropriate network structure for a target problem is a challenging task. In this paper, we propose a method to simultaneously optimize the network structure and weight parameters during neural network training. We consider a probability distribution that generates network structures, and optimize the parameters of the distribution instead of directly optimizing the network structure. The proposed method can be applied to various network structure optimization problems under the same framework. We apply it to several structure optimization problems, such as selection of layers, selection of unit types, and selection of connections, using the MNIST, CIFAR-10, and CIFAR-100 datasets. The experimental results show that the proposed method can find appropriate and competitive network structures.
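A toy version of the idea: maintain a Bernoulli distribution over binary structural choices and move it toward better-scoring samples, in the style of a cross-entropy method (the paper's actual estimator and objective may differ; the "fitness" here is a stand-in for validation performance):

```python
import numpy as np

rng = np.random.default_rng(0)
target = np.array([1, 0, 1, 1, 0, 0, 1, 0])   # pretend-optimal structure (toy objective)
p = np.full(8, 0.5)                            # distribution parameters over 8 choices

for _ in range(200):
    samples = (rng.random((16, 8)) < p).astype(int)    # sample 16 candidate structures
    scores = -(samples != target).sum(axis=1)          # higher = better structure
    elite = samples[np.argsort(scores)[-4:]]           # keep the best 4 samples
    p = 0.9 * p + 0.1 * elite.mean(axis=0)             # update the distribution toward them

print(p.round(2))   # probabilities concentrate on the well-performing structure
```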
Shortest-path feasibility algorithms: An experimental evaluation
This is an experimental study of algorithms for the shortest-path feasibility problem: Given a directed weighted graph, find a negative cycle or present a short proof that none exists. We study previously known and new algorithms. Our testbed is more extensive than those previously used, including both static and incremental problems, as well as worst-case instances. We show that, while no single algorithm dominates, a small subset (including new algorithms) has very robust performance in practice. Our work advances the state of the art in the area.
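For reference, the textbook baseline for this problem is Bellman-Ford relaxation from a virtual source, followed by one extra pass: a negative cycle exists exactly when some edge can still relax (this is the standard algorithm, not one of the paper's tuned variants):

```python
def has_negative_cycle(n, edges):
    """edges: list of (u, v, w) for a directed graph on vertices 0..n-1."""
    dist = [0] * n                      # a virtual source reaches every vertex at cost 0
    for _ in range(n - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # If any edge can still relax, a negative cycle exists; otherwise `dist`
    # is a short certificate (feasible potential) that none does.
    return any(dist[u] + w < dist[v] for u, v, w in edges)

print(has_negative_cycle(3, [(0, 1, 1), (1, 2, -2), (2, 0, -1)]))  # True  (cycle weight -2)
print(has_negative_cycle(3, [(0, 1, 1), (1, 2, -2), (2, 0, 2)]))   # False (cycle weight +1)
```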
Using OS Design Patterns to Provide Reliability and Security as-a-Service for VM-based Clouds
This paper extends the concepts behind cloud services to offer hypervisor-based reliability and security monitors for cloud virtual machines. Cloud VMs can be heterogeneous and as such guest OS parameters needed for monitoring can vary across different VMs and must be obtained in some way. Past work involves running code inside the VM, which is unacceptable for a cloud environment. We solve this problem by recognizing that there are common OS design patterns that can be used to infer monitoring parameters from the guest OS. We extract information about the cloud user's guest OS with the user's existing VM image and knowledge of OS design patterns as the only inputs to analysis. To demonstrate the range of monitoring functionality possible with this technique, we implemented four sample monitors: a guest OS process tracer, an OS hang detector, a return-to-user attack detector, and a process-based keylogger detector.
High Level Information Fusion developments, issues, and grand challenges: Fusion 2010 panel discussion
The goal of the High-Level Information Fusion (HLIF) Panel Discussion is to present contemporary HLIF advances and developments to determine unsolved grand challenges and issues. The discussion will address the issues between low-level (signal processing and object state estimation and characterization) and high-level information fusion (control, situational understanding, and relationships to the environment). Specific areas of interest include modeling (situations, environments), representations (semantic, knowledge, and complex), systems design (scenario-based, user-based, distributed-agent) and evaluation (measures of effectiveness and empirical case studies). The goal is to address the contemporary operational and strategic issues in information fusion system design.
Hierarchical Multimodal LSTM for Dense Visual-Semantic Embedding
We address the problem of dense visual-semantic embedding that maps not only full sentences and whole images but also phrases within sentences and salient regions within images into a multimodal embedding space. Such dense embeddings, when applied to the task of image captioning, enable us to produce several region-oriented and detailed phrases rather than just an overview sentence to describe an image. Specifically, we present a hierarchical structured recurrent neural network (RNN), namely Hierarchical Multimodal LSTM (HM-LSTM). Compared with chain structured RNN, our proposed model exploits the hierarchical relations between sentences and phrases, and between whole images and image regions, to jointly establish their representations. Without the need of any supervised labels, our proposed model automatically learns the fine-grained correspondences between phrases and image regions towards the dense embedding. Extensive experiments on several datasets validate the efficacy of our method, which compares favorably with the state-of-the-art methods.
Dynamic Traffic Diversion in SDN: testbed vs Mininet
In this paper, we first propose a simple Dynamic Traffic Diversion (DTD) algorithm for Software Defined Networks (SDN). After implementing the algorithm inside the controller, we compare the results obtained under two different test environments: 1) a testbed using real Cisco equipment and 2) a network emulation using Mininet. The results carry two key messages. First, we can clearly see that dynamically diverting important traffic onto a backup path prevents packet loss and reduces jitter. Second, the two test environments provide relatively similar results; the small differences could be explained by the early field trial image that was used on the Cisco equipment and by the many setting parameters that are available in both environments.
Universal HIV testing in London tuberculosis clinics: a cluster randomised controlled trial.
We assessed whether implementation of a combination of interventions in London tuberculosis clinics raised the levels of HIV test offers, acceptance and coverage. A stepped-wedge cluster randomised controlled trial was conducted across 24 clinics. Interventions were training of clinical staff and provision of tailor-made information resources with or without a change in clinic policy from selective to universal HIV testing. The primary outcome was HIV test acceptance amongst those offered a test, before and after the intervention; the secondary outcome was an offer of HIV testing. Additionally, the number and proportion of HIV tests among all clinic attendees (coverage) was assessed. 1,315 patients were seen in 24 clinics. The offer and coverage of testing rose significantly in clinics without (p = 0.002 and p = 0.004, respectively) and with an existing policy of universal testing (p = 0.02 and p = 0.04, respectively). However, the level of HIV test acceptance did not increase in 18 clinics without routine universal testing (p = 0.76) or the six clinics with existing universal testing (p = 0.40). The intervention significantly increased the number of HIV tests offered and proportion of participants tested, although acceptance did not change significantly. However, the magnitude of increase is modest due to the high baseline coverage.
The analgesic properties of scalp infiltrations with ropivacaine after intracranial tumoral resection.
BACKGROUND The issue of postoperative pain after neurosurgery is controversial. It has been reported as mild to moderate, and its treatment may be inadequate. Infiltration of the surgical site with local anesthetics has provided transient benefit after craniotomy, but its effect on chronic pain has not been evaluated. Accordingly, we designed the present study to test the hypothesis that ropivacaine infiltration of the scalp reduces acute and persistent postoperative pain after intracranial tumor resection. METHODS This was a prospective, single-blinded study. Inclusion criteria were intracranial tumor resection, age ≥18 and ≤80 yr, and ability to understand and use a visual analog scale (VAS). Exclusion criteria were history of craniotomy, chronic drug abuse, and neurologic disorders. All eligible patients were randomly included in Group I (infiltration) or Group C (control). Postoperative analgesia was IV acetaminophen combined with nalbuphine. At the end of the surgery, Group I received an infiltration of the surgical site with 20 mL of ropivacaine 0.75%. Acute pain was evaluated hourly by VAS during the first 24 h. The analgesic effect of ropivacaine was evaluated based on total consumption of nalbuphine and VAS scores. The incidence of persistent pain and neuropathic pain was assessed at the 2-mo postoperative evaluation. We used the Student's t-test to compare total nalbuphine consumption, repeated measures analysis of variance with post hoc Bonferroni t-test for VAS scores, and the Fisher's exact test for chronic and neuropathic pain. RESULTS Fifty-two patients were enrolled, 25 in Group I and 27 in Group C. Demographic and intraoperative data were similar between groups. Group I showed a nonsignificant trend toward reduced nalbuphine consumption during the first postoperative day, 11.2 +/- 9.2 mg vs 16.6 +/- 11.0 mg for Group C (mean +/- SD, P = 0.054). VAS scores were significantly higher in Group C. Two months after surgery, persistent pain was significantly lower in Group I, 2/24 (8%) vs 14/25 (56%), P = 0.0003. One patient (4.1%) in Group I versus six (25%) patients in Group C (P = 0.04) experienced neuropathic pain. CONCLUSIONS Because pain is moderate after intracranial tumor resection, there is limited interest in scalp infiltrations with ropivacaine in the acute postoperative period. Nevertheless, these infiltrations may be relevant for the rehabilitation of neurosurgical patients and their quality of life by limiting the development of persistent pain, and particularly neuropathic pain.
Effectiveness of Housing First with Intensive Case Management in an Ethnically Diverse Sample of Homeless Adults with Mental Illness: A Randomized Controlled Trial
UNLABELLED Housing First (HF) is being widely disseminated in efforts to end homelessness among homeless adults with psychiatric disabilities. This study evaluates the effectiveness of HF with Intensive Case Management (ICM) among ethnically diverse homeless adults in an urban setting. 378 participants were randomized to HF with ICM or treatment-as-usual (TAU) in Toronto (Canada), and followed for 24 months. Measures of effectiveness included housing stability, physical (EQ5D-VAS) and mental (CSI, GAIN-SS) health, social functioning (MCAS), quality of life (QoLI20), and health service use. Two-thirds of the sample (63%) was from racialized groups and half (50%) were born outside Canada. Over the 24 months of follow-up, HF participants spent a significantly greater percentage of time in stable residences compared to TAU participants (75.1%, 95% CI 70.5 to 79.7, vs. 39.3%, 95% CI 34.3 to 44.2, respectively). Similarly, community functioning (MCAS) improved significantly from baseline in HF compared to TAU participants (change in mean difference = +1.67, 95% CI 0.04 to 3.30). There was a significant reduction in the number of days spent experiencing alcohol problems among the HF compared to TAU participants at 24 months (ratio of rate ratios = 0.47, 95% CI 0.22 to 0.99) relative to baseline, a reduction of 53%. Although the number of emergency department visits and days in hospital over 24 months did not differ significantly between HF and TAU participants, fewer HF participants compared to TAU participants had 1 or more hospitalizations during this period (70.4% vs. 81.1%, respectively; P = 0.044). Compared to non-racialized HF participants, racialized HF participants saw an increase in the amount of money spent on alcohol (change in mean difference = $112.90, 95% CI 5.84 to 219.96) and a reduction in physical community integration (ratio of rate ratios = 0.67, 95% CI 0.47 to 0.96) from baseline to 24 months. Secondary analyses found a significant reduction in the number of days experiencing problems due to alcohol use among foreign-born (vs. Canadian-born) HF participants at 24 months (ratio of rate ratios = 0.19, 95% CI 0.04 to 0.88), relative to baseline. Compared to usual care, HF with ICM can improve housing stability and community functioning and reduce the days of alcohol-related problems in an ethnically diverse sample of homeless adults with mental illness within 2 years. TRIAL REGISTRATION Controlled-Trials.com ISRCTN42520374.
Management of functional nonretentive fecal incontinence in children: Recommendations from the International Children's Continence Society.
BACKGROUND Fecal incontinence (FI) in children is frequently encountered in pediatric practice, and often occurs in combination with urinary incontinence. In most cases, FI is constipation-associated, but in 20% of children presenting with FI, no constipation or other underlying cause can be found - these children suffer from functional nonretentive fecal incontinence (FNRFI). OBJECTIVE To summarize the evidence-based recommendations of the International Children's Continence Society for the evaluation and management of children with FNRFI. RECOMMENDATIONS Functional nonretentive fecal incontinence is a clinical diagnosis based on medical history and physical examination. Except for determining colonic transit time, additional investigations are seldom indicated in the workup of FNRFI. Treatment should consist of education, a nonaccusatory approach, and a toileting program encompassing a daily bowel diary and a reward system. Special attention should be paid to psychosocial or behavioral problems, since these frequently occur in affected children. Functional nonretentive fecal incontinence is often difficult to treat, requiring prolonged therapies with incremental improvement on treatment and frequent relapses.
Overview of BioNLP'09 Shared Task on Event Extraction
The paper presents the design and implementation of the BioNLP’09 Shared Task, and reports the final results with analysis. The shared task consists of three sub-tasks, each of which addresses bio-molecular event extraction at a different level of specificity. The data was developed based on the GENIA event corpus. The shared task was run over 12 weeks, drawing initial interest from 42 teams. Of these teams, 24 submitted final results. The evaluation results are encouraging, indicating that state-of-the-art performance is approaching a practically applicable level and revealing some remaining challenges.
The Imprint of the equation of state on the axial w modes of oscillating neutron stars
We discuss the dependence of the pulsation frequencies of the axial quasi-normal modes of a nonrotating neutron star upon the equation of state describing the star's interior. The continued fraction method has been used to compute the complex frequencies for a set of equations of state based on different physical assumptions and spanning a wide range of stiffness. The numerical results show that the detection of axial gravitational waves would make it possible to discriminate between the models underlying the different equations of state, thus providing relevant information on both the structure of neutron star matter and the nature of the hadronic interactions.
Spoken language understanding using long short-term memory neural networks
Neural network based approaches have recently produced record-setting performances in natural language understanding tasks such as word labeling. In the word labeling task, a tagger is used to assign a label to each word in an input sequence. Specifically, simple recurrent neural networks (RNNs) and convolutional neural networks (CNNs) have been shown to significantly outperform the previous state of the art - conditional random fields (CRFs). This paper investigates using long short-term memory (LSTM) neural networks, which contain input, output and forgetting gates and are more advanced than simple RNNs, for the word labeling task. To explicitly model output-label dependence, we propose a regression model on top of the LSTM un-normalized scores. We also propose to apply deep LSTMs to the task. We investigate the relative importance of each gate in the LSTM by setting the other gates to a constant and learning only particular gates. Experiments on the ATIS dataset validated the effectiveness of the proposed models.
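To make the word-labeling setup concrete, here is a minimal sketch of an LSTM tagger in PyTorch; the vocabulary size, label count, and layer widths are illustrative placeholders rather than the authors' configuration, and the proposed regression layer and deep-LSTM variants are not reproduced.

```python
import torch
import torch.nn as nn

class LSTMTagger(nn.Module):
    """Minimal LSTM word-labeling model: one label per input token."""
    def __init__(self, vocab_size, num_labels, emb_dim=100, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.proj = nn.Linear(hidden_dim, num_labels)  # un-normalized scores

    def forward(self, token_ids):           # token_ids: (batch, seq_len)
        h, _ = self.lstm(self.embed(token_ids))
        return self.proj(h)                 # (batch, seq_len, num_labels)

# Illustrative sizes only (not the ATIS setup): 10k vocabulary, 127 labels.
model = LSTMTagger(vocab_size=10000, num_labels=127)
scores = model(torch.randint(0, 10000, (2, 12)))
loss = nn.CrossEntropyLoss()(scores.view(-1, 127),
                             torch.randint(0, 127, (2 * 12,)))
```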
Community discovery using nonnegative matrix factorization
Complex networks exist in a wide range of real-world systems, such as social networks, technological networks, and biological networks. During the last decades, many researchers have concentrated on exploring properties common to those large networks, including the small-world property, power-law degree distributions, and network connectivity. In this paper, we investigate another important issue, community discovery, in network analysis. We choose Nonnegative Matrix Factorization (NMF) as our tool to find the communities because of its powerful interpretability and its close relationship to clustering methods. Targeting different types of networks (undirected, directed and compound), we propose three NMF techniques (Symmetric NMF, Asymmetric NMF and Joint NMF). The correctness and convergence properties of those algorithms are also studied. Finally, experiments on real-world networks are presented to show the effectiveness of the proposed methods.
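To make the undirected case concrete, the sketch below factorizes an adjacency matrix as A ≈ HHᵀ with a damped multiplicative update and reads communities off the rows of H; this is a generic Symmetric NMF recipe, not necessarily the exact update rule derived in the paper.

```python
import numpy as np

def symmetric_nmf(A, k, iters=200, beta=0.5, eps=1e-9, seed=0):
    """Approximate A ~ H @ H.T with H >= 0; the argmax over each row of H
    assigns the corresponding node to one of k communities."""
    rng = np.random.default_rng(seed)
    H = rng.random((A.shape[0], k))
    for _ in range(iters):
        # damped multiplicative update, a standard choice for symmetric NMF
        H *= (1 - beta) + beta * (A @ H) / (H @ (H.T @ H) + eps)
    return H

# Toy graph: two triangles joined by a single bridge edge.
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0
print(symmetric_nmf(A, k=2).argmax(axis=1))  # e.g. [0 0 0 1 1 1]
```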
Efficient Active Learning for Image Classification and Segmentation Using a Sample Selection and Conditional Generative Adversarial Network
Training robust deep learning (DL) systems for medical image classification or segmentation is challenging due to limited images covering different disease types and severity. We propose an active learning (AL) framework to select the most informative samples to add to the training data. We use conditional generative adversarial networks (cGANs) to generate realistic chest X-ray images with different disease characteristics by conditioning generation on a real image sample. Informative samples to add to the training set are identified using a Bayesian neural network. Experiments show our proposed AL framework is able to achieve state-of-the-art performance using about 35% of the full dataset, thus saving significant time and effort over conventional methods.
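The sample-selection step can be illustrated with Monte Carlo dropout, a common practical stand-in for a full Bayesian neural network; the entropy criterion and function names below are illustrative assumptions rather than the paper's exact informativeness measure, and the cGAN generator is omitted.

```python
import torch

def predictive_entropy(model, x, n_samples=20):
    """Entropy of the mean predictive distribution under MC dropout.
    Assumes `model` contains dropout layers; model.train() keeps them
    active at inference time."""
    model.train()
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=-1)
                             for _ in range(n_samples)]).mean(0)
    return -(probs * probs.clamp_min(1e-12).log()).sum(-1)

def select_informative(model, pool, budget):
    """Pick the `budget` most uncertain pool samples to label and add."""
    return predictive_entropy(model, pool).topk(budget).indices
```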
Algorithmic composition: computational thinking in music
The composer still composes but also gets to take a programming-enabled journey of musical discovery.
Computer-Mediated Communication Effects on Disclosure, Impressions, and Interpersonal Evaluations: Getting to Know One Another a Bit at a Time
This investigation examined how computer-mediated communication (CMC) partners exchange personal information in initial interactions, focusing on the effects of communication channels on self-disclosure, question-asking, and uncertainty reduction. Unacquainted individuals (N = 158) met either face-to-face or via CMC. Computer-mediated interactants exhibited a greater proportion of more direct and intimate uncertainty reduction behaviors than unmediated participants did, and demonstrated significantly greater gains in attributional confidence over the course of the conversations. The use of direct strategies by mediated interactants resulted in judgments of greater conversational effectiveness by partners. Results illuminate some microstructures previously asserted but unverified within social information processing theory (Walther, 1992), and extend uncertainty reduction theory (Berger & Calabrese, 1975) to CMC interaction.
NODDI: Practical in vivo neurite orientation dispersion and density imaging of the human brain
This paper introduces neurite orientation dispersion and density imaging (NODDI), a practical diffusion MRI technique for estimating the microstructural complexity of dendrites and axons in vivo on clinical MRI scanners. Such indices of neurites relate more directly to and provide more specific markers of brain tissue microstructure than standard indices from diffusion tensor imaging, such as fractional anisotropy (FA). Mapping these indices over the whole brain on clinical scanners presents new opportunities for understanding brain development and disorders. The proposed technique enables such mapping by combining a three-compartment tissue model with a two-shell high-angular-resolution diffusion imaging (HARDI) protocol optimized for clinical feasibility. An index of orientation dispersion is defined to characterize angular variation of neurites. We evaluate the method both in simulation and on a live human brain using a clinical 3T scanner. Results demonstrate that NODDI provides sensible neurite density and orientation dispersion estimates, thereby disentangling two key contributing factors to FA and enabling the analysis of each factor individually. We additionally show that while orientation dispersion can be estimated with just a single HARDI shell, neurite density requires at least two shells and can be estimated more accurately with the optimized two-shell protocol than with alternative two-shell protocols. The optimized protocol takes about 30 min to acquire, making it feasible for inclusion in a typical clinical setting. We further show that sampling fewer orientations in each shell can reduce the acquisition time to just 10 min with minimal impact on the accuracy of the estimates. This demonstrates the feasibility of NODDI even for the most time-sensitive clinical applications, such as neonatal and dementia imaging.
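For orientation, the three-compartment structure described above is commonly summarized (in the standard NODDI notation, abbreviated here rather than reproducing the full fitting procedure) as

```latex
A = (1-\nu_{\mathrm{iso}})\bigl(\nu_{\mathrm{ic}}A_{\mathrm{ic}}
    + (1-\nu_{\mathrm{ic}})A_{\mathrm{ec}}\bigr)
    + \nu_{\mathrm{iso}}A_{\mathrm{iso}},
\qquad
\mathrm{ODI} = \frac{2}{\pi}\arctan\frac{1}{\kappa},
```

where A is the normalized signal, ν_ic and ν_iso are the intra-cellular and isotropic volume fractions, and κ is the Watson concentration parameter, so ODI approaches 1 for highly dispersed neurites.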
Bivalirudin and provisional glycoprotein IIb/IIIa blockade compared with heparin and planned glycoprotein IIb/IIIa blockade during percutaneous coronary intervention: REPLACE-2 randomized trial.
CONTEXT The direct thrombin inhibitor bivalirudin has been associated with better efficacy and less bleeding than heparin during coronary balloon angioplasty but has not been widely tested during contemporary percutaneous coronary intervention (PCI). OBJECTIVE To determine the efficacy of bivalirudin, with glycoprotein IIb/IIIa (Gp IIb/IIIa) inhibition on a provisional basis for complications during PCI, compared with heparin plus planned Gp IIb/IIIa blockade with regard to protection from periprocedural ischemic and hemorrhagic complications. DESIGN, SETTING, AND PARTICIPANTS The Randomized Evaluation in PCI Linking Angiomax to Reduced Clinical Events (REPLACE)-2 trial, a randomized, double-blind, active-controlled trial conducted among 6010 patients undergoing urgent or elective PCI at 233 community or referral hospitals in 9 countries from October 2001 through August 2002. INTERVENTIONS Patients were randomly assigned to receive intravenous bivalirudin (0.75-mg/kg bolus plus 1.75 mg/kg per hour for the duration of PCI), with provisional Gp IIb/IIIa inhibition (n = 2999), or heparin (65-U/kg bolus) with planned Gp IIb/IIIa inhibition (abciximab or eptifibatide) (n = 3011). Both groups received daily aspirin and a thienopyridine for at least 30 days after PCI. MAIN OUTCOME MEASURES The primary composite end point was 30-day incidence of death, myocardial infarction, urgent repeat revascularization, or in-hospital major bleeding; the secondary composite end point was 30-day incidence of death, myocardial infarction, or urgent repeat revascularization. RESULTS Provisional Gp IIb/IIIa blockade was administered to 7.2% of patients in the bivalirudin group. By 30 days, the primary composite end point had occurred among 9.2% of patients in the bivalirudin group vs 10.0% of patients in the heparin-plus-Gp IIb/IIIa group (odds ratio, 0.92; 95% confidence interval, 0.77-1.09; P =.32). The secondary composite end point occurred in 7.6% of patients in the bivalirudin vs 7.1% of patients in the heparin-plus-Gp IIb/IIIa groups (odds ratio, 1.09; 95% confidence interval 0.90-1.32; P =.40). Prespecified statistical criteria for noninferiority to heparin plus Gp IIb/IIIa were satisfied for both end points. In-hospital major bleeding rates were significantly reduced by bivalirudin (2.4% vs 4.1%; P<.001). CONCLUSIONS Bivalirudin with provisional Gp IIb/IIIa blockade is statistically not inferior to heparin plus planned Gp IIb/IIIa blockade during contemporary PCI with regard to suppression of acute ischemic end points and is associated with less bleeding.
Emotional intelligence: the most potent factor in the success equation.
Star performers can be differentiated from average ones by emotional intelligence. For jobs of all kinds, emotional intelligence is twice as important as a person's intelligence quotient and technical skills combined. Excellent performance by top-level managers adds directly to a company's "hard" results, such as increased profitability, lower costs, and improved customer retention. Those with high emotional intelligence enhance "softer" results by contributing to increased morale and motivation, greater cooperation, and lower turnover. The author discusses the five components of emotional intelligence, its role in facilitating organizational change, and ways to increase an organization's emotional intelligence.
The Comovement of Investor Attention
Prior literature has documented that investor attention is associated with the pricing of stocks. We examine attention comovement, the extent to which investor attention for a firm is affected by attention paid to the industry and market in general. We find that attention comovement is positively related to comovement in firm fundamentals, and is also related to firm characteristics, such as size and visibility. We also find that the comovement of investor attention has market consequences: attention comovement is positively associated with excess stock return and trading volume comovement. Finally, we show that a prominent information release (a firm's earnings announcement) contributes to attention comovement. Our results aid in understanding the industry and market-wide nature of investor attention and its market consequences.
Using Wikipedia to boost collaborative filtering techniques
One important challenge in the field of recommender systems is the sparsity of available data. This problem limits the ability of recommender systems to provide accurate predictions of user ratings. We overcome this problem by using the publicly available user generated information contained in Wikipedia. We identify similarities between items by mapping them to Wikipedia pages and finding similarities in the text and commonalities in the links and categories of each page. These similarities can be used in the recommendation process and improve ranking predictions. We find that this method is most effective in cases where ratings are extremely sparse or nonexistent. Preliminary experimental results on the MovieLens dataset are encouraging.
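One simple way to realize the "commonalities in the links" signal is a Jaccard overlap between the outgoing links of two items' Wikipedia pages; this toy sketch is an illustrative assumption and may differ from the paper's exact similarity measure.

```python
def link_similarity(links_a, links_b):
    """Jaccard overlap of the Wikipedia links of two items' pages."""
    a, b = set(links_a), set(links_b)
    return len(a & b) / len(a | b) if a | b else 0.0

# Hypothetical link sets for two movies mapped to Wikipedia pages.
print(link_similarity({"Film", "Actor", "Drama"},
                      {"Film", "Actor", "Comedy"}))  # 0.5
```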
Modular Continual Learning in a Unified Visual Environment
A core aspect of human intelligence is the ability to learn new tasks quickly and switch between them flexibly. Here, we describe a modular continual reinforcement learning paradigm inspired by these abilities. We first introduce a visual interaction environment that allows many types of tasks to be unified in a single framework. We then describe a reward map prediction scheme that learns new tasks robustly in the very large state and action spaces required by such an environment. We investigate how properties of module architecture influence efficiency of task learning, showing that a module motif incorporating specific design principles (e.g. early bottlenecks, low-order polynomial nonlinearities, and symmetry) significantly outperforms more standard neural network motifs, needing fewer training examples and fewer neurons to achieve high levels of performance. Finally, we present a meta-controller architecture for task switching based on a dynamic neural voting scheme, which allows new modules to use information learned from previously seen tasks to substantially improve their own learning efficiency.
Trends and causes of hospitalizations among HIV-infected persons during the late HAART era: what is the impact of CD4 counts and HAART use?
BACKGROUND Declining rates of hospitalizations occurred shortly after the availability of highly active antiretroviral therapy (HAART). However, trends in the late HAART era are less defined, and data on the impact of CD4 counts and HAART use on hospitalizations are needed. METHODS We evaluated hospitalization rates from 1999 to 2007 among HIV-infected persons enrolled in a large US military cohort. Poisson regression was used to compare hospitalization rates per year and to examine factors associated with hospitalization. RESULTS Of the 2429 participants, 822 (34%) were hospitalized at least once, with 1770 separate hospital admissions. The rate of hospitalizations (137 per 1000 person-years) was constant over the study period [relative rate (RR) 1.00 per year change, 95% confidence interval: 0.98 to 1.02]. The hospitalization rates due to skin infections (RR: 1.50, P = 0.02), methicillin-resistant Staphylococcus aureus (RR: 3.19, P = 0.03), liver disease (RR: 1.71, P = 0.04), and surgery (RR: 1.17, P = 0.04) significantly increased over time, whereas psychological causes (RR: 0.60, P < 0.01) and trauma (RR: 0.54, P < 0.01) decreased. In the multivariate model, higher nadir CD4 (RR: 0.92 per 50 cells, P < 0.01) and higher proximal CD4 counts (RR of 0.71 for 350-499 vs. <350 cells/mm³ and RR 0.67 for ≥500 vs. 350 cells/mm³, both P < 0.01) were associated with lower risk of hospitalization. Risk of hospitalization was constant for proximal CD4 levels above 350 (RR: 0.94, P = 0.51, CD4 ≥500 vs. 350-499). HAART was associated with a reduced risk of hospitalization among those with a CD4 <350 (RR: 0.72, P = 0.02) but had smaller estimated and nonsignificant effects at higher CD4 levels (RR: 0.81, P = 0.33 and 1.06, P = 0.71 for CD4 350-499 and ≥500, respectively). CONCLUSIONS Hospitalizations continue to occur at high rates among HIV-infected persons, with increasing rates for skin infections, methicillin-resistant Staphylococcus aureus, liver disease, and surgeries. Factors associated with a reduced risk of hospitalization include CD4 counts >350 cells per cubic millimeter and HAART use among patients with a CD4 count <350 cells per cubic millimeter.
Single incision endoscope-assisted surgery for sagittal craniosynostosis
The objective of this study is to present the novel technique and associated results of a single-incision endoscope-assisted procedure for the treatment of sagittal craniosynostosis. We retrospectively reviewed the charts of infants who underwent single-incision endoscope-assisted sagittal craniectomy for craniosynostosis at our institution. Demographic data collected included patient age, blood loss, operative time, pre- and post-operative hemoglobin, pre- and post-operative cephalic index (CI), and hospital length of stay. Seven consecutive infants underwent surgery for sagittal craniosynostosis using a single-incision endoscopic technique. Average operative time was 87 (±10.5) minutes. Average blood loss was 32 (±13.5) cubic centimeters (cc). Post-operative hemoglobin was an average of 7.1 (±0.2) g/dL. No patients required a blood transfusion intra-operatively or in the post-operative setting. Dural tears were encountered in one patient. The average hospital length of stay was 1.4 (±1.1) days. Difference between pre- and post-operative CI was 8.4 % (±3.5; p < 0.05). We demonstrate the novel use of a single-incision technique for endoscope-assisted sagittal craniosynostosis correction that improves upon the classically described surgical procedure by decreasing invasiveness, while allowing for excellent clinical outcomes.
Double Multiple Stream Tube Model and Numerical Analysis of Vertical Axis Wind Turbine
The present paper contributes to the modeling of unsteady flow analysis of a vertical axis wind turbine (VAWT). The double multiple stream tube (DMST) model was applied for the performance prediction of a straight-bladed fixed-pitch VAWT using the NACA0018 airfoil at low wind speed. A moving mesh technique was used to investigate two-dimensional unsteady flow around the same VAWT model with the NACA0018 airfoil modified to be flexible at 15° from the main blade axis of the turbine at the trailing edge, located at about 70% of the blade chord length, using FLUENT to solve the Reynolds-averaged Navier-Stokes equations. The results obtained from the DMST model and the simulation results were then compared. The results show that the CFD simulation with the modified airfoil gives better performance at low tip speed ratios for the modeled turbine.
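For reference, the two nondimensional quantities such an analysis revolves around are the tip speed ratio and the power coefficient, in their standard definitions:

```latex
\lambda = \frac{\omega R}{V_{\infty}},
\qquad
C_P = \frac{P}{\tfrac{1}{2}\rho A V_{\infty}^{3}},
```

where ω is the rotor angular speed, R the rotor radius, V∞ the free-stream wind speed, ρ the air density, A the swept area, and P the extracted power.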
Robotic versus Open Partial Nephrectomy: A Systematic Review and Meta-Analysis
OBJECTIVES To critically review the currently available evidence of studies comparing robotic partial nephrectomy (RPN) and open partial nephrectomy (OPN). MATERIALS AND METHODS A comprehensive review of the literature from Pubmed, Web of Science and Scopus was performed in October 2013. All relevant studies comparing RPN with OPN were included for further screening. A cumulative meta-analysis of all comparative studies was performed and publication bias was assessed by a funnel plot. RESULTS Eight studies were included for the analysis, including a total of 3418 patients (757 patients in the robotic group and 2661 patients in the open group). Although RPN procedures had a longer operative time (weighted mean difference [WMD]: 40.89; 95% confidence interval [CI], 14.39-67.40; p = 0.002), patients in this group benefited from a lower perioperative complication rate (19.3% for RPN and 29.5% for OPN; odds ratio [OR]: 0.53; 95% CI, 0.42-0.67; p < 0.00001), a shorter hospital stay (WMD: -2.78; 95% CI, -3.36 to -1.92; p < 0.00001), and less estimated blood loss (WMD: -106.83; 95% CI, -176.4 to -37.27; p = 0.003). Transfusions, conversion to radical nephrectomy, ischemia time and estimated GFR change, margin status, and overall cost were comparable between the two techniques. The main limitation of the present meta-analysis is the non-randomization of all included studies. CONCLUSIONS RPN appears to be an efficient alternative to OPN with the advantages of a lower rate of perioperative complications, shorter length of hospital stay and less blood loss. Nevertheless, high-quality prospective randomized studies with a longer follow-up period are needed to confirm these findings.
A Novel Four-DOF Parallel Manipulator Mechanism and Its Kinematics
A novel 4-UPU parallel manipulator mechanism that can perform three-dimensional translations and rotation about the Z axis is presented. The principle that allows the mechanism to perform the above motions is analyzed based on screw theory, the mobility of the mechanism is calculated, and the rationality of the chosen input joints is discussed. The forward and inverse position kinematics solutions of the mechanism and corresponding numerical examples are given, and the workspace and the singularity of the parallel mechanism are discussed. The mechanism, which has the advantages of a simple symmetric structure and large stiffness, can be applied to the development of NC positioning platforms, parallel machine tools, four-dimensional force sensors and micro-positioning parallel manipulators.
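For context, the classical Kutzbach-Grübler count that screw-theoretic mobility analysis refines is

```latex
M = 6(n - g - 1) + \sum_{i=1}^{g} f_i,
```

where n is the number of links (including the base), g the number of joints, and f_i the freedoms of joint i. For lower-mobility parallel mechanisms such as the 4-UPU, this naive count misses constraints common to the limbs, which is precisely why a screw-theory analysis is needed to establish the three-translation-plus-rotation mobility.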
What you want (and do not want) affects what you see (and do not see): avoidance social goals and social events.
Two studies examined the influence of approach and avoidance social goals on memory for and evaluation of ambiguous social information. Study 1 found that individual differences in avoidance social goals were associated with greater memory of negative information, negatively biased interpretation of ambiguous social cues, and a more pessimistic evaluation of social actors. Study 2 experimentally manipulated social goals and found that individuals high in avoidance social motivation remembered more negative information and expressed more dislike for a stranger in the avoidance condition than in the approach condition. Results suggest that avoidance social goals are associated with emphasizing potential threats when making sense of the social environment.
A Developmental Model of Critical Thinking
The critical thinking movement, it is suggested, has much to gain from conceptualizing its subject matter in a developmental framework. Most instructional programs designed to teach critical thinking do not draw on contemporary empirical research in cognitive development as a potential resource. The developmental model of critical thinking outlined here derives from contemporary empirical research on directions and processes of intellectual development in children and adolescents. It identifies three forms of second-order cognition (meta-knowing)--metacognitive, metastrategic, and epistemological--that constitute an essential part of what develops cognitively to make critical thinking possible.
Generative Code Modeling with Graphs
Generative models for source code are an interesting structured prediction problem, requiring reasoning about both hard syntactic and semantic constraints as well as about natural, likely programs. We present a novel model for this problem that uses a graph to represent the intermediate state of the generated output. Our model generates code by interleaving grammar-driven expansion steps with graph augmentation and neural message passing steps. An experimental evaluation shows that our new model can generate semantically meaningful expressions, outperforming a range of strong baselines.
Improving Naive Bayes Classifier Using Conditional Probabilities
The Naive Bayes classifier is the simplest among Bayesian network classifiers. It has been shown to be very efficient on a variety of data classification problems. However, the strong assumption that all features are conditionally independent given the class is often violated in many real-world applications. Therefore, improvement of the Naive Bayes classifier by alleviating the feature independence assumption has attracted much attention. In this paper, we develop a new version of the Naive Bayes classifier without assuming independence of features. The proposed algorithm approximates the interactions between features by using conditional probabilities. We present results of numerical experiments on several real-world data sets, where continuous features are discretized by applying two different methods. These results demonstrate that the proposed algorithm significantly improves the performance of the Naive Bayes classifier, yet at the same time maintains its robustness.
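For reference, the baseline the paper relaxes can be sketched as a standard Naive Bayes over discretized features; the Laplace smoothing and data layout below are illustrative choices, and the paper's interaction-approximation step is not reproduced.

```python
import numpy as np

def train_nb(X, y, alpha=1.0):
    """Standard Naive Bayes for discrete features with Laplace smoothing:
    estimates P(class) and P(feature value | class) assuming independence."""
    classes = np.unique(y)
    priors = {c: float(np.mean(y == c)) for c in classes}
    likelihoods = {}
    for c in classes:
        Xc = X[y == c]
        likelihoods[c] = [
            {v: (np.sum(Xc[:, j] == v) + alpha)
                / (len(Xc) + alpha * len(np.unique(X[:, j])))
             for v in np.unique(X[:, j])}
            for j in range(X.shape[1])
        ]
    return classes, priors, likelihoods

def predict_nb(x, classes, priors, likelihoods):
    score = {c: np.log(priors[c]) +
                sum(np.log(likelihoods[c][j].get(v, 1e-9))
                    for j, v in enumerate(x))
             for c in classes}
    return max(score, key=score.get)

X = np.array([[0, 1], [1, 1], [0, 0], [1, 0]])
y = np.array([0, 0, 1, 1])
print(predict_nb(np.array([0, 1]), *train_nb(X, y)))  # -> 0
```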
Relative abuse liability of indiplon and triazolam in humans: a comparison of psychomotor, subjective, and cognitive effects.
Indiplon [N-methyl-N-[3-[3-(2-thienylcarbonyl)-pyrazolo[1,5-alpha]pyrimidin-7-yl]phenyl]acetamide; NBI 34060] is a positive allosteric GABA(A) receptor modulator that is under development for the treatment of insomnia. This study compared the abuse potential of indiplon, a compound with preferential affinity for GABA(A) receptors containing an alpha(1) subunit, with triazolam in 21 volunteers with histories of drug abuse. Placebo, triazolam (0.25, 0.5, and 0.75 mg), and indiplon (30, 50, and 80 mg) were studied in counterbalanced order under double-blind conditions at two different residential research facilities. Both drugs impaired psychomotor and cognitive performance and produced similar dose-related increases in participant and observer ratings of drug strength. The onset of action of both drugs was rapid (30 min); however, the duration of action of indiplon (3-4 h) was shorter than that of triazolam (4-6 h). The profiles of subjective effects of triazolam and indiplon were similar; however, a maximum of 52% of participants identified indiplon as a benzodiazepine or barbiturate, compared with 81% of participants after 0.75 mg of triazolam. On participant-rated subjective effects relevant to sedation, the slope of the triazolam dose-effect curve was significantly steeper than that of indiplon. Neither the largest doses of indiplon and triazolam nor the slopes of the indiplon and triazolam dose-effect curves were significantly different from each other on any of the same-day or next-day measures of positive drug effects or next-day measures of reinforcing effects. Together, these data suggest that although the abuse potential of indiplon is not different from that of triazolam at these doses, psychomotor and cognitive impairment after large doses of indiplon might be less.
Artemisinin-based combination therapies are efficacious and safe for treatment of uncomplicated malaria in HIV-infected Ugandan children.
BACKGROUND Artemisinin-based combination therapies (ACTs) are highly efficacious and safe, but data from human immunodeficiency virus (HIV)-infected children concurrently receiving antiretroviral therapy (ART) and ACTs are limited. METHODS We evaluated 28-day outcomes following malaria treatment with artemether-lumefantrine (AL) or dihydroartemisinin-piperaquine (DP) in 2 cohorts of HIV-infected Ugandan children taking various ART regimens. In one cohort, children <6 years of age were randomized to lopinavir/ritonavir (LPV/r) or nonnucleoside reverse transcriptase inhibitor-based ART and treated with AL for uncomplicated malaria. In another cohort, children <12 months of age were started on nevirapine-based ART if they were eligible, and randomized to AL or DP for the treatment of their first and all subsequent uncomplicated malaria episodes. RESULTS There were 773 and 165 treatments for malaria with AL and DP, respectively. Initial response to therapy was excellent, with 99% clearance of parasites and <1% risk of repeat therapy within 3 days. Recurrent parasitemia within 28 days was common following AL treatment. The risk of recurrent parasitemia was significantly lower among children taking LPV/r-based ART compared with children taking nevirapine-based ART following AL treatment (15.3% vs 35.5%, P = .009), and those treated with DP compared with AL (8.6% vs 36.2%, P < .001). Both ACT regimens were safe and well tolerated. CONCLUSIONS Treatment of uncomplicated malaria with AL or DP was efficacious and safe in HIV-infected children taking ART. However, there was a high risk of recurrent parasitemia following AL treatment, which was significantly lower in children taking LPV/r-based ART compared with nevirapine-based ART.
Food security: the challenge of feeding 9 billion people.
Continuing population and consumption growth will mean that the global demand for food will increase for at least another 40 years. Growing competition for land, water, and energy, in addition to the overexploitation of fisheries, will affect our ability to produce food, as will the urgent requirement to reduce the impact of the food system on the environment. The effects of climate change are a further threat. But the world can produce more food and can ensure that it is used more efficiently and equitably. A multifaceted and linked global strategy is needed to ensure sustainable and equitable food security, different components of which are explored here.
Re-transplantation after bortezomib-based therapy.
Whilst the use of high-dose alkylating agents and autologous stem cell transplantation (ASCT) has a fundamental role in consolidating initial anti-tumour induction therapy, its role in salvage therapy consolidation remains to be determined. Bortezomib has been shown to be an effective agent at first and subsequent relapse, with responses equivalent to or better than the response to previously used conventional therapies in first-line therapy (Richardson et al, 2005; Laubach et al, 2009). Combining bortezomib re-induction with a second ASCT after maximal anti-tumour response is, therefore, an attractive concept. Accordingly, we undertook a retrospective review of patients proceeding to a second ASCT after bortezomib-based re-induction therapy. Patients undergoing a second ASCT after progression from an initial ASCT and subsequent bortezomib re-induction therapy in 12 centres were identified (n = 40). Detailed information on the patients was obtained through anonymized clinical data retrieval forms, capturing critical patient- and disease-specific factors including response to initial induction therapy, response to first ASCT, time to progression, subsequent therapies, bortezomib-based re-induction therapy and response to second ASCT (including the type of transplant and stem cell source). Response to therapy was categorized according to the International Myeloma Working Group criteria (Durie et al, 2006). Kaplan-Meier plots were made using the Statistical Package for the Social Sciences (SPSS; IBM, Chicago, Illinois, USA). There were insufficient cytogenetic data for analysis. A total of 40 patients were identified in this retrospective study. Two patients had planned reduced-intensity allogeneic (RIC-Allo) transplants after their second autologous transplant and were excluded from further analysis. Patient characteristics, including age, sex, type of myeloma, and therapy prior to the first and second transplants, are shown in Table Ia. One patient who relapsed 10 months after bortezomib therapy was transplanted in relapse. All other patients were transplanted prior to disease progression. Twenty-six patients were treated with a combination of bortezomib and dexamethasone, and eight patients had PAD chemotherapy (bortezomib, adriamycin and dexamethasone; Oakervee et al, 2005). Two patients had bortezomib monotherapy, one patient had bortezomib plus intravenous melphalan and one patient had bortezomib plus cyclophosphamide, dexamethasone and idarubicin. The median number of cycles of bortezomib therapy was 4 (range 2-12). Patients receiving PAD chemotherapy also had a median of four cycles.
Manifestations of Personality in Online Social Networks: Self-Reported Facebook-Related Behaviors and Observable Profile Information
Despite the enormous popularity of Online Social Networking sites (OSNs; e.g., Facebook and Myspace), little research in psychology has been done on them. Two studies examining how personality is reflected in OSNs revealed several connections between the Big Five personality traits and self-reported Facebook-related behaviors and observable profile information. For example, extraversion predicted not only frequency of Facebook usage (Study 1), but also engagement in the site, with extraverts (vs. introverts) showing traces of higher levels of Facebook activity (Study 2). As in offline contexts, extraverts seek out virtual social engagement, which leaves behind a behavioral residue in the form of friends lists and picture postings. Results suggest that, rather than escaping from or compensating for their offline personality, OSN users appear to extend their offline personalities into the domains of OSNs.
Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning
Deep neural networks and machine-learning algorithms are pervasively used in several applications, ranging from computer vision to computer security. In most of these applications, the learning algorithm has to face intelligent and adaptive attackers who can carefully manipulate data to purposely subvert the learning process. As these algorithms have not been originally designed under such premises, they have been shown to be vulnerable to well-crafted, sophisticated attacks, including training-time poisoning and test-time evasion attacks (also known as adversarial examples). The problem of countering these threats and learning secure classifiers in adversarial settings has thus become the subject of an emerging, relevant research field known as adversarial machine learning. The purposes of this tutorial are: (a) to introduce the fundamentals of adversarial machine learning to the security community; (b) to illustrate the design cycle of a learning-based pattern recognition system for adversarial tasks; (c) to present novel techniques that have been recently proposed to assess performance of pattern classifiers and deep learning algorithms under attack, evaluate their vulnerabilities, and implement defense strategies that make learning algorithms more robust to attacks; and (d) to show some applications of adversarial machine learning to pattern recognition tasks like object recognition in images, biometric identity recognition, spam and malware detection.
Adaptive Deep Brain Stimulation In Advanced Parkinson Disease
OBJECTIVE Brain-computer interfaces (BCIs) could potentially be used to interact with pathological brain signals to intervene and ameliorate their effects in disease states. Here, we provide proof-of-principle of this approach by using a BCI to interpret pathological brain activity in patients with advanced Parkinson disease (PD) and to use this feedback to control when therapeutic deep brain stimulation (DBS) is delivered. Our goal was to demonstrate that by personalizing and optimizing stimulation in real time, we could improve on both the efficacy and efficiency of conventional continuous DBS. METHODS We tested BCI-controlled adaptive DBS (aDBS) of the subthalamic nucleus in 8 PD patients. Feedback was provided by processing of the local field potentials recorded directly from the stimulation electrodes. The results were compared to no stimulation, conventional continuous stimulation (cDBS), and random intermittent stimulation. Both unblinded and blinded clinical assessments of motor effect were performed using the Unified Parkinson's Disease Rating Scale. RESULTS Motor scores improved by 66% (unblinded) and 50% (blinded) during aDBS, which were 29% (p = 0.03) and 27% (p = 0.005) better than cDBS, respectively. These improvements were achieved with a 56% reduction in stimulation time compared to cDBS, and a corresponding reduction in energy requirements (p < 0.001). aDBS was also more effective than no stimulation and random intermittent stimulation. INTERPRETATION BCI-controlled DBS is tractable and can be more efficient and efficacious than conventional continuous neuromodulation for PD.
A modular control scheme for PMSM speed control with pulsating torque minimization
In this paper, a modular control approach is applied to permanent-magnet synchronous motor (PMSM) speed control. Based on the functioning of the individual modules, the modular approach enables powerful intelligent and robust control modules to easily replace any existing module that does not perform well, while retaining the other existing modules that are still effective. A property analysis is first conducted for the existing function modules in a conventional PMSM control system: the proportional-integral (PI) speed control module, the reference current-generating module, and the PI current control module. Next, it is shown that the conventional PMSM controller is not able to reject the torque pulsation, which is the main hurdle when a PMSM is used as a high-performance servo. By virtue of the internal model principle, nullifying the torque pulsation requires incorporating an internal model in the feed-through path. This is achieved by replacing the reference current-generating module with an iterative learning control (ILC) module. The ILC module records the cyclic torque and reference current signals over one entire cycle, and then uses those signals to update the reference current for the next cycle. As a consequence, the torque pulsation can be reduced significantly. In order to estimate torque ripples that may exceed the bandwidth of a torque transducer, a novel torque estimation module using a gain-shaped sliding-mode observer is further developed to facilitate the implementation of torque learning control. The proposed control system is evaluated through real-time implementation, and experimental results validate its effectiveness.
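The cycle-to-cycle update performed by the ILC module can be summarized by the classic P-type learning law, written generically here since the paper's exact learning gain and filtering are not reproduced:

```latex
i^{\mathrm{ref}}_{k+1}(t) = i^{\mathrm{ref}}_{k}(t) + \Gamma\, e_{k}(t),
```

where k indexes cycles, i_ref_k is the reference current profile applied over cycle k, e_k is the recorded torque tracking error, and Γ is a learning gain. Because the periodic error is fed back into the feed-through path cycle after cycle, the update embeds an internal model of the pulsating torque.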
The effects of game and training loads on perceptual responses of muscle soreness in Australian football.
Australian Football is an intense team sport played over ~120 min on a weekly basis. To determine the effects of game and training load on muscle soreness and the time frame of soreness dissipation, 64 elite Australian Football players (age 23.8 ± 1.8 y, height 183.9 ± 3.8 cm, weight 83.2 ± 5.0 kg; mean ± SD) recorded perceptions of muscle soreness, game intensity, and training intensity on scales of 1-10 on most mornings for up to 3 competition seasons. Playing and training times were also recorded in minutes. Data were analyzed with a mixed linear model, and magnitudes of effects on soreness were evaluated by standardization. All effects had acceptably low uncertainty. Game and training-session loads were 790 ± 182 and 229 ± 98 intensity-minutes (mean ± SD), respectively. General muscle soreness was 4.6 ± 1.1 units on d 1 postgame and fell to 1.9 ± 1.0 by d 6. There was a small increase in general muscle soreness (0.22 ± 0.07-0.50 ± 0.13 units) in the 3 d after high-load games relative to low-load games. Other soreness responses showed similar timelines and magnitudes of change. Training sessions made only small contributions to soreness over the 3 d after each session. Practitioners should be aware of these responses when planning weekly training and recovery programs, as it appears that game-related soreness dissipates after 3 d regardless of game load and increased training loads in the following week produce only small increases in soreness.
A Complex Event Processing Toolkit for Detecting Technical Chart Patterns
With the advent of large, high-volume data, we have seen a need for real-time analytic techniques like Complex Event Processing (CEP). This paper extends a Complex Event Processing engine to support real-time identification of technical chart patterns from streaming data. Technical chart patterns are known, interesting, recurring patterns in time series data, and they are used by experts in time series analysis domains such as the stock market and currency exchange rates. Yet the automated identification of these patterns is challenging due to the high volatility and noise of the data. The paper focuses on identifying a suitable technique to filter out volatility and a set of algorithms to query the data streams continuously and to identify patterns. The resulting solution is a toolkit for chart pattern recognition composed of a set of complex CEP queries and a kernel regression smoother applied on moving windows. The same toolkit can be used to detect chart patterns in other domains, such as gold and oil prices.
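The smoothing stage can be illustrated with a plain Nadaraya-Watson kernel regression over one moving window; the Gaussian kernel and bandwidth below are illustrative assumptions, not the toolkit's tuned settings.

```python
import numpy as np

def kernel_smooth(window, bandwidth=3.0):
    """Nadaraya-Watson kernel regression over one moving window of a
    price stream, filtering out volatility before pattern queries run."""
    t = np.arange(len(window), dtype=float)
    smoothed = np.empty(len(window))
    for i in range(len(window)):
        w = np.exp(-0.5 * ((t - t[i]) / bandwidth) ** 2)  # Gaussian kernel
        smoothed[i] = np.sum(w * window) / np.sum(w)
    return smoothed

prices = np.array([10.0, 10.4, 10.1, 10.9, 11.2, 10.8, 11.5, 11.9])
print(kernel_smooth(prices))
```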
GeaBase: A High-Performance Distributed Graph Database for Industry-Scale Applications
Graph analytics have been gaining traction rapidly in the past few years. They have a wide array of application areas in industry, ranging from e-commerce, social networks and recommendation systems to fraud detection and virtually any problem that requires insights into data connections, not just the data itself. In this paper, we present GeaBase, a new distributed graph database that provides the capability to store and analyze graph-structured data in real time at massive scale. We describe the details of the system and the implementation, including a novel update architecture, called Update Center (UC), and a new language that is suitable for both graph traversal and analytics. We also compare the performance of GeaBase to Titan, a widely used open-source graph database. Experiments show that GeaBase is up to 182x faster than Titan in our testing scenarios. GeaBase also achieves 22x higher throughput on social network workloads in the comparison.
Intrinsic firing patterns of diverse neocortical neurons
Neurons of the neocortex differ dramatically in the patterns of action potentials they generate in response to current steps. Regular-spiking (RS) cells adapt strongly during maintained stimuli, whereas fast-spiking (FS) cells can sustain very high firing frequencies with little or no adaptation. Intrinsically bursting (IB) cells generate clusters of spikes (bursts), either singly or repetitively. These physiological distinctions have morphological correlates. RS and IB cells can be either pyramidal neurons or spiny stellate cells, and thus constitute the excitatory cells of the cortex. FS cells are smooth or sparsely spiny non-pyramidal cells, and are likely to be GABAergic inhibitory interneurons. The different firing properties of neurons in neocortex contribute significantly to its network behavior.
Gearing Up for the Next Industrial Revolution: 3D Printing, Home-Based Factories, and Modes of Social Control
While former industrial factories are being converted into modern living spaces in cities across the country, residential homes are being converted into modern factories thanks to advances in three-dimensional (“3D”) printing technology, an emerging “Maker Movement,” and the rise of online marketplaces like Etsy. Despite growing environmental, child-labor, and safety concerns, these “homebased factories” are largely unregulated. In the absence of traditional workplace protections, how can we gear up for the “next industrial revolution” while guarding against the sweatshop conditions of the last? How can we harness the Maker Movement’s commitment to do-it-yourself democracy in order to combat abuses by potential “corporate makers”? This Article analyzes the effectiveness of individual and collective “modes of social control” (e.g., law, ethical precepts, self regulation, affinity groups, vigilant and effective media, and direct action) in creating and sustaining just workplaces in an age of 3D printing and home-based factories.
Towards an Integrated View of Multi-Sided Platforms Evolution
How do Multi-Sided Platforms (MSPs) evolve over time? Although MSPs are perceived as highly evolvable socio-technical systems, Platform Evolution remains an elusive topic in the MSP literature, with many unanswered questions. In particular, Platform Evolution (PE) as a concept has not been explicitly defined in the MSP literature. Rather, there is a multiplicity of views, which contributes to the lack of conceptual clarity. In order to address this shortcoming, we put forward a new, integrated conceptualization of PE as a complex, multi-faceted and dynamic process. Rather than proposing yet another view on PE, we adopt a "concept reconstruction" approach, which allows us to integrate the existing work on PE in a coherent manner, and to propose a comprehensive conceptualization of PE.
A Non-Technical Survey on Deep Convolutional Neural Network Architectures
Artificial neural networks have recently shown great results in many disciplines and a variety of applications, including natural language understanding, speech processing, games and image data generation. One particular application in which the strong performance of artificial neural networks was demonstrated is the recognition of objects in images, where deep convolutional neural networks are commonly applied. In this survey, we give a comprehensive introduction to this topic (object recognition with deep convolutional neural networks), with a strong focus on the evolution of network architectures. Therefore, we aim to compress the most important concepts in this field in a simple and non-technical manner to allow for future researchers to have a quick general understanding. This work is structured as follows: 1) We will explain the basic ideas of (convolutional) neural networks and deep learning and examine their usage for three object recognition tasks: image classification, object localization and object detection. 2) We give a review on the evolution of deep convolutional neural networks by providing an extensive overview of the most important network architectures presented in chronological order of their appearances.
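As a minimal companion to part 1, the snippet below assembles the basic building blocks such surveys cover (convolution, nonlinearity, pooling, and a linear classification head) into a tiny image classifier in PyTorch; the layer sizes are illustrative and do not correspond to any surveyed architecture.

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),  # learnable image filters
    nn.ReLU(),
    nn.MaxPool2d(2),                             # 32x32 -> 16x16
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),                             # 16x16 -> 8x8
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, 10),                   # 10-way image classification
)
logits = model(torch.randn(1, 3, 32, 32))        # one RGB image
```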
A Tutorial on Distance Metric Learning: Mathematical Foundations, Algorithms and Software
This paper describes the discipline of distance metric learning, a branch of machine learning that aims to learn distances from the data. Distance metric learning can be useful to improve similarity learning algorithms, and also has applications in dimensionality reduction. We describe the distance metric learning problem and analyze its main mathematical foundations. We discuss some of the most popular distance metric learning techniques used in classification, showing their goals and the required information to understand and use them. Furthermore, we present a Python package that collects a set of 17 distance metric learning techniques explained in this paper, with some experiments to evaluate the performance of the different algorithms. Finally, we discuss several possibilities of future work in this topic.
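The central object in this setting is the Mahalanobis-type distance; the snippet below only evaluates a given positive semidefinite M (learning M is the job of the surveyed algorithms) and illustrates the M = LᵀL view that also underlies metric-learning-based dimensionality reduction.

```python
import numpy as np

def mahalanobis_like(x, y, M):
    """d_M(x, y) = sqrt((x - y)^T M (x - y)) for positive semidefinite M."""
    d = x - y
    return float(np.sqrt(d @ M @ d))

# Writing M = L^T L shows the same distance is plain Euclidean distance
# after the linear map L; a rectangular L reduces dimensionality.
L = np.array([[1.0, 0.5],
              [0.0, 1.0]])
M = L.T @ L
print(mahalanobis_like(np.array([1.0, 2.0]), np.array([2.0, 0.0]), M))
```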
Human Interaction With Robot Swarms: A Survey
Recent advances in technology are delivering robots of reduced size and cost. A natural outgrowth of these advances are systems comprised of large numbers of robots that collaborate autonomously in diverse applications. Research on effective autonomous control of such systems, commonly called swarms, has increased dramatically in recent years and received attention from many domains, such as bioinspired robotics and control theory. These kinds of distributed systems present novel challenges for the effective integration of human supervisors, operators, and teammates that are only beginning to be addressed. This paper is the first survey of human-swarm interaction (HSI) and identifies the core concepts needed to design a human-swarm system. We first present the basics of swarm robotics. Then, we introduce HSI from the perspective of a human operator by discussing the cognitive complexity of solving tasks with swarm systems. Next, we introduce the interface between swarm and operator and identify challenges and solutions relating to human-swarm communication, state estimation and visualization, and human control of swarms. For the latter, we develop a taxonomy of control methods that enable operators to control swarms effectively. Finally, we synthesize the results to highlight remaining challenges, unanswered questions, and open problems for HSI, as well as how to address them in future works.
Active sampling for entity matching
In entity matching, a fundamental issue while training a classifier to label pairs of entities as either duplicates or non-duplicates is that of selecting informative training examples. Although active learning presents an attractive solution to this problem, previous approaches minimize the misclassification rate (0-1 loss) of the classifier, which is an unsuitable metric for entity matching due to class imbalance (i.e., many more non-duplicate pairs than duplicate pairs). To address this, a recent paper [1] proposes to maximize the recall of the classifier under the constraint that its precision should be greater than a specified threshold. However, the proposed technique requires the labels of all n input pairs in the worst case. Our main result is an active learning algorithm that approximately maximizes the recall of the classifier while respecting a precision constraint with provably sub-linear label complexity (under certain distributional assumptions). Our algorithm uses as a black box any active learning module that minimizes 0-1 loss. We show that the label complexity of our algorithm is at most log n times the label complexity of the black box, and we also bound the difference between the recall of the classifier learnt by our algorithm and the recall of the optimal classifier satisfying the precision constraint. We provide an empirical evaluation of our algorithm on several real-world matching data sets that demonstrates the effectiveness of our approach.
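For orientation, the kind of 0-1-loss active learner that the algorithm treats as a black box can be sketched as a plain uncertainty-sampling loop; the helper names (e.g. oracle) are assumed for illustration, and the paper's precision-constrained recall-maximization wrapper is not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def uncertainty_sampling(X_pool, X_init, y_init, oracle, rounds=10, batch=5):
    """Generic 0-1-loss active learner: repeatedly label the pairs the
    current classifier is least sure about. y_init must contain both
    the duplicate and non-duplicate classes."""
    X_lab, y_lab = list(X_init), list(y_init)
    unlabeled = set(range(len(X_pool)))
    clf = LogisticRegression(max_iter=1000)
    for _ in range(rounds):
        clf.fit(np.array(X_lab), np.array(y_lab))
        nearest = sorted(
            unlabeled,
            key=lambda i: abs(clf.predict_proba(X_pool[i:i + 1])[0, 1] - 0.5))
        for i in nearest[:batch]:
            X_lab.append(X_pool[i])
            y_lab.append(oracle(i))   # a human labels pair i
            unlabeled.discard(i)
    return clf
```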
Link Prediction Based on Graph Neural Networks
Link prediction is a key problem for network-structured data. Link prediction heuristics use score functions, such as common neighbors and the Katz index, to measure the likelihood of links. They have found wide practical use due to their simplicity, interpretability, and, for some of them, scalability. However, every heuristic makes a strong assumption about when two nodes are likely to link, which limits their effectiveness on networks where these assumptions fail. In this regard, a more reasonable approach is to learn a suitable heuristic from a given network instead of using predefined ones. By extracting a local subgraph around each target link, we aim to learn a function mapping the subgraph patterns to link existence, thus automatically learning a "heuristic" that suits the current network. In this paper, we study this heuristic learning paradigm for link prediction. First, we develop a novel γ-decaying heuristic theory. The theory unifies a wide range of heuristics in a single framework, and proves that all these heuristics can be well approximated from local subgraphs. Our results show that local subgraphs preserve rich information related to link existence. Second, based on the γ-decaying theory, we propose a new method to learn heuristics from local subgraphs using a graph neural network (GNN). Experimental results show unprecedented performance, working consistently well on a wide range of problems.
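Two of the predefined heuristics the paper generalizes can be stated in a few lines; the Katz computation below uses the closed form (I − βA)⁻¹ − I, which is valid when β is smaller than the reciprocal of the largest adjacency eigenvalue.

```python
import networkx as nx
import numpy as np

def common_neighbors(G, u, v):
    """Common-neighbors heuristic score for a candidate link (u, v)."""
    return len(set(G[u]) & set(G[v]))

def katz_scores(G, beta=0.05):
    """Katz index: paths of all lengths, damped by beta per hop."""
    A = nx.to_numpy_array(G)
    n = len(A)
    return np.linalg.inv(np.eye(n) - beta * A) - np.eye(n)

G = nx.karate_club_graph()
print(common_neighbors(G, 0, 33), katz_scores(G)[0, 33])
```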
Trust Me: Doubts and Concerns Living with the Internet of Things
An increasing number of everyday objects are now connected to the internet, collecting and sharing information about us: the "Internet of Things" (IoT). However, as the number of "social" objects increases, human concerns arising from this connected world are starting to become apparent. This paper presents the results of a preliminary qualitative study in which five participants lived with an ambiguous IoT device that collected and shared data about their activities at home for a week. In analyzing this data, we identify the nature of human and socio-technical concerns that arise when living with IoT technologies. Trust is identified as a critical factor - as trust in the entity/ies that are able to use their collected information decreases, users are likely to demand greater control over information collection. Addressing these concerns may support greater engagement of users with IoT technology. The paper concludes with a discussion of how IoT systems might be designed to better foster trust with their owners.
A hierarchical framework for evaluating simulation software
In simulation software selection problems, packages are evaluated either on their own merits or in comparison with other packages. In either method, a comprehensive list of criteria for evaluation of simulation software is essential for proper selection. Although various simulation software evaluation checklists do exist, there are differences in the lists provided and the terminologies used. This paper presents a hierarchical framework for simulation software evaluation consisting of seven main groups and several subgroups. An explanation for each criterion is provided, and an analysis of the usability of the proposed framework is further discussed.
Using Simulation and Evolutionary Algorithms to Evaluate the Design of Mix Strategies of Decoy and Jammers in Anti-Torpedo Tactics
When a submarine employs an anti-torpedo tactic, it is a matter of life or death. Against a diesel submarine, the torpedo has the advantages of high speed and acoustic homing on the target. The submarine's disadvantages are its relatively slow evasive speed and the limited capability of its torpedo countermeasure systems. There are two types of countermeasures: decoys and jammers. A successful anti-torpedo tactic should consist of the deployment of mixed decoys and jammers coordinated with the submarine's maneuvers. This paper discusses anti-torpedo tactics from the classical viewpoint. A simulation scenario is implemented in order to study the interaction among the submarine, torpedo, decoys, and jammers. After applying the evolutionary algorithm, we discover several points about anti-torpedo tactics using a mix of decoys and jammers that make a significant contribution to the survivability of the submarine in a torpedo engagement scenario.
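A generic evolutionary loop of the sort applied above might look as follows; the genome layout, the dummy engagement simulator, and all parameters are assumptions for this sketch, since the paper's simulation scenario is not reproduced here:

```python
import random

GENOME_LEN = 6   # e.g., 3 launch delays + 3 launch bearings (assumed layout)

def simulate_engagement(genome):
    # Dummy stand-in: in the paper this is the submarine/torpedo/decoy/jammer
    # simulation; here a biased coin flip keeps the sketch self-contained.
    return random.random() < 0.3 + 0.1 * genome[0]

def fitness(genome, trials=50):
    # survival rate estimated over repeated stochastic engagements
    return sum(simulate_engagement(genome) for _ in range(trials)) / trials

def evolve(pop_size=30, generations=40, mut_rate=0.1):
    pop = [[random.random() for _ in range(GENOME_LEN)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        elite = scored[: pop_size // 2]                  # truncation selection
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, GENOME_LEN)        # one-point crossover
            child = a[:cut] + b[cut:]
            child = [g + random.gauss(0, 0.05) if random.random() < mut_rate
                     else g for g in child]              # Gaussian mutation
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```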
Upper Paleozoic-Lower Mesozoic in the Coqen basin, Tibet, China: A potential petroleum-bearing sedimentary sequence
The latest stratigraphic and paleontological studies indicate that there was no 75-Ma sedimentary hiatus in the Coqen basin during the Late Paleozoic-Early Mesozoic. Among the deposits, marine carbonate rocks were deposited from the Late Permian to the Late Triassic Norian, and continental-margin clastic rocks were deposited from the Late Triassic Rhaetian to the Early-Middle Jurassic. They are in unconformable contact. The Coqen basin was a marine carbonate basin from the Late Permian to the Late Triassic Norian and remained a low-lying area that received very thick deposits from the Late Triassic Rhaetian to the Early-Middle Jurassic. In the context of the strategic evaluation of macroscopic petroleum exploration, the Middle Permian Qixia to Late Triassic Norian carbonate rocks in the basin have the properties of source rocks, the Late Triassic Rhaetian to Early-Middle Jurassic clastic rocks have the properties of cover rocks, and the unconformity between them has the properties of reservoir rocks. The Middle Permian-Lower Jurassic strata in the Coqen basin form a favorable sequence for petroleum exploration, which is named the Guge sequence.
A Circuit Modeling Technique for the ISO 7637-3 Capacitive Coupling Clamp Test
In this paper, we propose a transmission-line modeling technique for the ISO 7637-3 capacitive coupling clamp (CCC) test. Besides modeling the test bench, special attention is devoted to the CCC itself, for which an equivalent circuit is constructed based on the concepts of surface transfer impedance and surface transfer admittance. The overall model is validated by means of measurements using a nonlinear circuit as the device under test, thus demonstrating its suitability for mimicking the CCC test in simulations during the design phase.
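For rough intuition only, capacitive coupling can be approximated with a first-order lumped model as below; the paper's point is precisely that such lumped models are too coarse and a transmission-line model with surface transfer impedance and admittance is required, and every component value here is an assumption:

```python
import numpy as np

C_c = 100e-12          # assumed clamp coupling capacitance [F]
Z_v = 50.0             # assumed victim-line termination [ohm]
f = np.logspace(5, 9, 200)                 # 100 kHz .. 1 GHz
Zc = 1.0 / (1j * 2 * np.pi * f * C_c)      # series capacitor impedance
H = Z_v / (Z_v + Zc)                       # crude coupled-voltage transfer ratio
print(20 * np.log10(np.abs(H[::50])))      # coupling in dB at spot frequencies
```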
Factors affecting adoption of mobile banking in Pakistan: Empirical Evidence
In this paper, we investigate the determinants likely to influence the adoption of mobile banking services, with a special focus on the underbanked/unbanked low-income population of Pakistan. The adoption of mobile banking services has been a strategic goal for both banks and telcos. For this purpose, the Technology Acceptance Model (TAM) was used, with the additional determinants of perceived risk and social influence. Data were collected by surveying 372 respondents from the two largest cities (Karachi and Hyderabad) of Sindh province, Pakistan, using the judgment sampling method. This study empirically concludes that consumers’ intention to adopt mobile banking services is significantly influenced by social influence, perceived risk, perceived usefulness, and perceived ease of use, with social influence having the most significant positive impact. The paper concludes with a discussion of the results and several business implications for the banking industry of Pakistan.
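Hypotheses of this TAM-style form are commonly tested by regressing intention on the construct scores; the sketch below uses assumed variable names, an assumed survey file, and an assumed OLS-on-Likert-means design, not the paper's actual analysis:

```python
import pandas as pd
import statsmodels.api as sm

# hypothetical file: one row per respondent, construct means on Likert scales
df = pd.read_csv("survey.csv")
X = sm.add_constant(df[["social_influence", "perceived_risk",
                        "perceived_usefulness", "perceived_ease_of_use"]])
model = sm.OLS(df["adoption_intention"], X).fit()
print(model.summary())   # coefficient signs and significance per construct
```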
Current and Future Trends in Mobile Device Forensics: A Survey
Contemporary mobile devices are the result of an evolution process during which computational and networking capabilities have been continuously pushed to keep pace with constantly growing workload requirements. This has allowed devices such as smartphones, tablets, and personal digital assistants to perform increasingly complex tasks, up to the point of efficiently replacing traditional options such as desktop computers and notebooks. However, due to their portability and size, these devices are more prone to theft, to becoming compromised, or to being exploited for attacks and other malicious activity. The need to investigate such incidents led to the creation of the Mobile Forensics (MF) discipline. MF, a sub-domain of digital forensics, specializes in extracting and processing evidence from mobile devices in such a way that attacking entities and actions are identified and traced. Beyond its primary research interest in evidence acquisition from mobile devices, MF has recently expanded its scope to encompass organized and advanced evidence representation and the analysis of future malicious entity behavior. Nonetheless, data acquisition remains its main focus. While the field is under continuous research activity, new concepts such as the involvement of cloud computing in the MF ecosystem and the evolution of enterprise mobile solutions (particularly mobile device management and bring your own device) bring new opportunities and issues to the discipline. This article presents the research conducted within the MF ecosystem during the last seven years, identifies the gaps, highlights the differences from past research directions, and addresses challenges and open issues in the field.
Noise Adaptive Speech Enhancement using Domain Adversarial Training
In this study, we propose a novel noise adaptive speech enhancement (SE) system, which employs a domain adversarial training (DAT) approach to tackle the issue of noise-type mismatch between training and testing conditions. Such a mismatch is a critical problem in deep-learning-based SE systems: a large mismatch may cause serious degradation of SE performance. Since a well-trained SE system is generally used to handle various unseen noise types, noise-type mismatch commonly occurs in real-world scenarios. The proposed noise adaptive SE system contains an encoder-decoder-based enhancement model and a domain discriminator model. During adaptation, the DAT approach encourages the encoder to produce noise-invariant features based on information from the discriminator model and consequently increases the robustness of the enhancement model to unseen noise types. Here, we regard stationary noises as the source domain (with ground-truth clean speech) and non-stationary noises as the target domain (without ground truth). We evaluated the proposed system on the TMHINT sentences. Experimental results show that the proposed noise adaptive SE system provides notable relative improvements in PESQ (55.9%) and SSNR (26.1%) over the same SE system without noise adaptation.
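The adversarial part of such a system is typically implemented with a gradient reversal layer between the encoder and the domain discriminator; below is a minimal PyTorch sketch under assumed layer sizes and an assumed MSE enhancement loss, not the paper's architecture:

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips the gradient sign (scaled by lam)
    in the backward pass so the encoder learns domain-invariant features."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.clone()
    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lam * grad_out, None

encoder = nn.Sequential(nn.Linear(257, 128), nn.ReLU())        # spectral frames
decoder = nn.Sequential(nn.Linear(128, 257))                   # enhancement head
discriminator = nn.Sequential(nn.Linear(128, 64), nn.ReLU(),
                              nn.Linear(64, 2))                # source vs. target

def step(noisy, clean, domain_label, lam=0.1):
    z = encoder(noisy)
    se_loss = nn.functional.mse_loss(decoder(z), clean)        # source data only
    d_logits = discriminator(GradReverse.apply(z, lam))
    d_loss = nn.functional.cross_entropy(d_logits, domain_label)
    return se_loss + d_loss

noisy, clean = torch.randn(8, 257), torch.randn(8, 257)
loss = step(noisy, clean, torch.randint(0, 2, (8,)))
loss.backward()   # discriminator learns domains; encoder gets reversed gradient
```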