Every Picture Tells a Story: Generating Sentences from Images
Humans can prepare concise descriptions of pictures, focusing on what they find important. We demonstrate that automatic methods can do so too. We describe a system that can compute a score linking an image to a sentence. This score can be used to attach a descriptive sentence to a given image, or to obtain images that illustrate a given sentence. The score is obtained by comparing an estimate of meaning obtained from the image to one obtained from the sentence. Each estimate of meaning comes from a discriminative procedure that is learned using data. We evaluate on a novel dataset consisting of human-annotated images. While our underlying estimate of meaning is impoverished, it is sufficient to produce very good quantitative results, evaluated with a novel score that can account for synecdoche.
Planarity Testing and Embedding
Universita’ di Roma Tre. Contents: 3.1 Properties and Characterizations of Planar Graphs (Basic Definitions; Properties; Characterizations). 3.2 Planarity Problems. 3.3 History of Planarity Algorithms. 3.4 Common Algorithmic Techniques and Tools. 3.5 The Path-Addition Approach (Initialization Phase; Path Decomposition Phase; Path Embedding Phase). 3.6 The Vertex-Addition Approach (Lempel, Even, and Cederbaum; PQ-Trees). 3.7 The Block Embedding Approach (Shih and Hsu algorithm; Boyer and Myrvold algorithm).
Understanding Web Archiving Services and Their (Mis)Use on Social Media
Either by ensuring the continuing availability of information, or by deliberately caching content that might get deleted or removed, Web archiving services play an increasingly important role in today’s information ecosystem. Among these, the Wayback Machine has been proactively archiving, since 2001, versions of a large number of Web pages, while newer services like archive.is allow users to create on-demand snapshots of specific Web pages, which serve as time capsules that can be shared across the Web. In this paper, we present a large-scale analysis of Web archiving services and their use on social media, aiming to shed light on the actors involved in this ecosystem, the content that gets archived, and how it is shared. To this end, we crawl and study: 1) 21M URLs, spanning almost two years, from archive.is; and 2) 356K archive.is plus 391K Wayback Machine URLs that were shared on four social networks: Reddit, Twitter, Gab, and 4chan’s Politically Incorrect board (/pol/) over 14 months. We observe that news and social media posts are the most common types of content archived, likely due to their perceived ephemeral and/or controversial nature. Moreover, URLs of archiving services are extensively shared on “fringe” communities within Reddit and 4chan to preserve possibly contentious content. Lastly, we find evidence of moderators nudging or even forcing users to use archives, instead of direct links, for news sources with opposing ideologies, potentially depriving them of ad revenue.
News Sentiment Analysis Based on Cross-Domain Sentiment Word Lists and Content Classifiers
The main task of sentiment classification is to automatically judge the sentiment polarity (positive or negative) of published sentiment data (e.g. news or reviews). Some studies have shown that supervised methods can achieve good performance for blogs or reviews. However, the polarity of a news report is hard to judge: Web news reports differ from other Web documents in that they contain fewer sentiment features, and the same word can carry different polarity in different domains. We therefore propose a self-growth algorithm to generate a cross-domain sentiment word list, which is used in sentiment classification of Web news. This paper considers some previously undescribed features for automatically classifying Web news, examines the effectiveness of these techniques in isolation and when aggregated using classification algorithms, and also validates the self-growth algorithm for the cross-domain word list.
Comprehending figurative referential descriptions.
A common way of referring to people is with figurative language. People can be referred to metaphorically, as in calling a terrible boxer "a creampuff," or metonymically, as in calling a naval admiral "the brass." The present studies investigated the anaphoric inferences that occur during comprehension of figurative referential descriptions. Subjects read short narratives, each ending in either a literal or figurative description of another person. Immediately after the last line of each text, the anaphoric antecedent for the description was presented in a probe recognition task. The results of three experiments indicated that metaphoric and metonymic referential descriptions reinstate their antecedents in the course of comprehension. Subjects were faster at reinstating the antecedents for literal referential descriptions than at reinstating those for metaphoric and metonymic descriptions. Moreover, people understand metaphoric referential descriptions more easily than they do metonymic ones. The implications of these findings for theories of anaphora resolution and figurative language comprehension are discussed.
Board test coverage: the value of prediction and how to compare numbers
Test coverage prediction for board assemblies has an important function in, among others, test engineering, test cost modeling, test strategy definition and product quality estimation. Introducing a method that defines how this coverage is calculated can increase the value of such prediction across the electronics industry. There are three aspects to test coverage calculation: fault modeling, coverage-per-fault and total coverage. An abstraction level for fault categories is introduced, called MPS (material, placement, soldering), that enables us to compare coverage results using different fault models. Additionally, the rule-based fault coverage estimation and the weighted coverage calculation are discussed.
Surveying instructor and learner attitudes toward e-learning
The trend of using e-learning as a learning and/or teaching tool is now rapidly expanding into education. Although e-learning environments are popular, there is minimal research on instructors’ and learners’ attitudes toward these kinds of learning environments. The purpose of this study is to explore instructors’ and learners’ attitudes toward e-learning usage. Accordingly, 30 instructors and 168 college students were asked to answer two different questionnaires investigating their perceptions. After statistical analysis, the results demonstrate that instructors have very positive perceptions toward using e-learning as a teaching-assistance tool. Furthermore, behavioral intention to use e-learning is influenced by perceived usefulness and self-efficacy. Regarding learners’ attitudes, self-paced, teacher-led, and multimedia instruction are the major factors affecting learners’ attitudes toward e-learning as an effective learning tool. Based on the findings, this research proposes guidelines for developing e-learning environments.
The cortisol awakening response in patients with acute and chronic back pain
Peculiarities of hypothalamic-pituitary-adrenal axis activity in stress-related pain disorders, and their potential relations with psychological risk factors of pain chronicity, have been discussed controversially. The cortisol awakening responses of 31 low back pain patients (14 acute, 17 chronic) and 14 healthy controls were compared. In addition, the interrelations between the awakening response and chronic stress, depressive mood and, for the first time, maladaptive pain processing and coping strategies were investigated. The groups did not differ in their cortisol awakening responses. Chronic stress, depressive mood and maladaptive cognitive pain processing did not correlate with the awakening response. There were, however, significant interrelations between awakening responses and behavioral pain-coping strategies: a negative association with passive-avoidant coping and a positive one with active behavioral coping. Behavioral pain-coping strategies should therefore be considered a potentially important psychological factor in the relation between the activity of the hypothalamic-pituitary-adrenal axis and stress-related pain disorders.
The effect of static scanning and mobility training on mobility in people with hemianopia after stroke: A randomized controlled trial comparing standardized versus non-standardized treatment protocols
BACKGROUND Visual loss following stroke impacts significantly on activities of daily living and is an independent risk factor for becoming dependent. Routinely, allied health clinicians provide training for visual field loss, mainly with eye-movement-based therapy. The effectiveness of the compensatory approach to rehabilitation remains inconclusive, largely due to the difficulty of validating functional outcome given the varied type and dosage of therapy received by an individual patient. This study aims to determine which treatment is more effective in patients with homonymous hemianopia post stroke: a standardized approach or individualized therapy. METHODS/DESIGN This study is a double-blind randomized controlled multicenter trial. A standardized scanning rehabilitation program (the Neuro Vision Technology (NVT) program), delivered three times per week for 7 weeks, is compared to individualized therapy recommended by clinicians. DISCUSSION The results of the trial will provide information that could potentially inform the allocation of resources in visual rehabilitation post stroke. TRIAL REGISTRATION Australia and New Zealand Clinical Trials Register (ANZCTR): ACTRN12610000494033.
Gender differences in the management and outcome of patients with acute coronary artery disease.
STUDY OBJECTIVES To compare the clinical management and health outcomes of men and women after admission with acute coronary syndromes, after adjusting for disease severity, sociodemographic, and cardiac risk factors. DESIGN Prospective national survey of acute cardiac admissions followed up by mailed patient questionnaire two to three years after initial admission. SETTING Random sample of 94 district general hospitals in the UK. PATIENTS 1064 patients under 70 years old recruited between April 1995 and November 1996. MAIN RESULTS Of the 1064 patients recruited, 126 (11.8%) died before follow up. Of the 938 survivors, 719 (76.7%) completed a follow up questionnaire. There were no gender differences in the use of cardiac investigations during the index admission or follow up period. However, male patients with hypertension were more likely to undergo rehabilitation compared with female hypertensive patients (OR 2.01, 95% CI 0.85 to 4.72). Men were also more likely to undergo coronary artery bypass grafting (CABG) than women (OR 1.90, 95% CI 1.21 to 3.00), but there was no gender difference in the use of revascularisation overall (p=0.14). An indirect indication that the gender differences in CABG were not attributable to bias was provided by the lack of gender differences in health outcomes, which implies that patients received the care they needed. CONCLUSIONS Despite the extensive international literature referring to a gender bias in favour of men with coronary heart disease, this national survey found no gender differences in the use of investigations or in revascularisation overall. However, the criteria used for selecting percutaneous transluminal coronary angioplasty compared with CABG require further investigation, as does the use of rehabilitation. It is unclear whether the clinical decisions to provide these procedures are made solely on the basis of clinical need.
DASH7 alliance protocol 1.0: Low-power, mid-range sensor and actuator communication
This paper presents the DASH7 Alliance Protocol 1.0, an industry alliance standard for wireless sensor and actuator communication using the unlicensed sub-1 GHz bands. The paper explains the protocol’s historic relation to the active RFID standard ISO 18000-7 for 433 MHz communication, as well as its basic concepts and communication paradigms. Since the protocol is a full OSI stack specification, the paper discusses the implementation of every OSI layer.
Acyclic sets of linear orders via the Bruhat orders
We describe Abello’s acyclic sets of linear orders [1] as the permutations visited by commuting equivalence classes of maximal reduced decompositions. This allows us to strengthen Abello’s structural result: we show that acyclic sets arising from this construction are distributive sublattices of the weak Bruhat order. This, in turn, shows that Abello’s acyclic sets are, in fact, the same as Chameni-Nembua’s distributive covering sublattices (S.T.D.C’s). Fishburn’s alternating scheme is shown to be a special case of the Abello/Chameni-Nembua acyclic sets. Any acyclic set that arises in this way can be represented by an arrangement of pseudolines, and we use this representation to derive a simple closed form for the cardinality of the alternating scheme. The higher Bruhat orders prove to be a natural mathematical framework for this approach to the acyclic sets problem.
Imaging case of the month: Abnormalities of the cochlear nerves and internal auditory canals in pontine tegmental cap dysplasia.
Pontine tegmental cap dysplasia (PTCD) is a recently described hindbrain malformation characterized by pontine hypoplasia and ectopic dorsal transverse pontine fibers (1). To date, a total of 19 cases of PTCD have been published; all patients had sensorineural hearing loss (SNHL). We contribute one additional case of PTCD with SNHL and VIIIth cranial nerve and temporal bone abnormalities, documented using dedicated magnetic resonance (MR) and high-resolution temporal bone computed tomographic (CT) images.
k-Sparse Autoencoders
Recently, it has been observed that when representations are learnt in a way that encourages sparsity, improved performance is obtained on classification tasks. These methods involve combinations of activation functions, sampling steps and different kinds of penalties. To investigate the effectiveness of sparsity by itself, we propose the “k-sparse autoencoder”, which is an autoencoder with a linear activation function, in which only the k highest activities in the hidden layer are kept. When applied to the MNIST and NORB datasets, we find that this method achieves better classification results than denoising autoencoders, networks trained with dropout, and RBMs. k-sparse autoencoders are simple to train and the encoding stage is very fast, making them well-suited to large problem sizes where conventional sparse coding algorithms cannot be applied.
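As a rough illustration of the sparsity constraint described above, here is a minimal NumPy sketch of the k-sparse hidden layer; the weight shapes, toy data, and single-sample (rather than batched) form are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def k_sparse_hidden(x, W, b, k):
    """Hidden representation for one input x: linear activities, then keep
    only the k highest and zero the rest (the support-selection step)."""
    z = x @ W + b
    support = np.argsort(z)[-k:]       # indices of the k highest activities
    h = np.zeros_like(z)
    h[support] = z[support]
    return h

# toy usage: 64-d input, 128 hidden units, keep k = 10 activities
rng = np.random.default_rng(0)
x = rng.normal(size=64)
W, b = 0.1 * rng.normal(size=(64, 128)), np.zeros(128)
h = k_sparse_hidden(x, W, b, k=10)
print(np.count_nonzero(h))             # -> 10
```

Training would then minimize the reconstruction error of a linear decoder applied to h, so sparsity is enforced purely by the support selection rather than by a penalty term.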
Cut-and-paste text summarization
At a Crossroads: Will Aquaculture Fulfill the Promise of the Blue Revolution?
INTRODUCTION There is now little doubt that the world's fisheries are in crisis. Mounting scientific evidence points to dramatic declines in global catches. 1,2 Increasingly, many are making the case that farming fish offers a solution to meeting the growing demand for seafood that catching fish cannot provide. Aquaculture now accounts for roughly one-third of the world's total supply of food fish, and the contribution of aquaculture to seafood supplies will undoubtedly increase in the future. Aquaculture has the potential to become a sustainable practice that can supplement capture fisheries and significantly contribute to feeding the world's growing population. However, instead of helping to ease the crisis in wild fisheries, unsustainable aquaculture development could exacerbate the problems and create new ones, damaging our important and already-stressed coastal areas. The vast majority of aquaculture takes place in Asia. In 2002, over 70% of worldwide aquaculture production was in China alone. 3 Most farmed fish and shellfish are grown in traditional small-scale systems that benefit local communities and minimize environmental impact. Utilizing simple culture technologies and minimal inputs, these systems have been used for centuries. The net contribution of these traditional aquaculture systems can be great, as they offer many benefits, including food security in developing nations. However, as happened with the "green revolution" of agriculture in the last century, the current "blue revolution" of aquaculture is becoming an industrial mode of food production. An emerging trend is toward the increased farming of high-value carnivorous fish species using environmentally and socially damaging systems. Farming fish on an industrial scale, especially carnivorous fish, is rapidly expanding; the number of different species farmed and the geographic regions where they are farmed increase continually. Largely controlled by multinational corporations, industrialized farming of carnivorous fish such as salmon requires the intensive use of resources and exports problems to the surrounding environment, often resulting in environmental impacts and social conflicts. Some segments of the aquaculture industry are long overdue for reform. What is required is a paradigm shift in how we think about aquaculture, particularly its interaction with natural and social systems. This new paradigm should be based on sustainable development, "the management and conservation of the natural resource base, and …
The application visualization system: a computational environment for scientific visualization
A software system for developing interactive scientific visualization applications quickly, with a minimum of programming effort, is described. This application visualization system (AVS) is an application framework targeted at scientists and engineers. The goal of the system is to make applications that combine interactive graphics and high computational requirements easier to develop for both programmers and nonprogrammers. AVS is designed around the concept of software building blocks, or modules, which can be interconnected to form visualization applications. AVS allows flow networks of existing modules to be constructed using a direct-manipulation user interface, and it automatically generates a simple user interface to each module.
Deep multimodal fusion for persuasiveness prediction
Persuasiveness is a high-level personality trait that quantifies the influence a speaker has on the beliefs, attitudes, intentions, motivations, and behavior of the audience. With social multimedia becoming an important channel in propagating ideas and opinions, analyzing persuasiveness is very important. In this work, we use the publicly available Persuasive Opinion Multimedia (POM) dataset to study persuasion. One of the challenges associated with this problem is the limited amount of annotated data. To tackle this challenge, we present a deep multimodal fusion architecture which is able to leverage complementary information from individual modalities for predicting persuasiveness. Our methods show significant improvement in performance over previous approaches.
Fingerprint Liveness Detection in the Presence of Capable Intruders
Fingerprint liveness detection methods have been developed as an attempt to overcome the vulnerability of fingerprint biometric systems to spoofing attacks. Traditional approaches have been quite optimistic about the behavior of the intruder assuming the use of a previously known material. This assumption has led to the use of supervised techniques to estimate the performance of the methods, using both live and spoof samples to train the predictive models and evaluate each type of fake samples individually. Additionally, the background was often included in the sample representation, completely distorting the decision process. Therefore, we propose that an automatic segmentation step should be performed to isolate the fingerprint from the background and truly decide on the liveness of the fingerprint and not on the characteristics of the background. Also, we argue that one cannot aim to model the fake samples completely since the material used by the intruder is unknown beforehand. We approach the design by modeling the distribution of the live samples and predicting as fake the samples very unlikely according to that model. Our experiments compare the performance of the supervised approaches with the semi-supervised ones that rely solely on the live samples. The results obtained differ from the ones obtained by the more standard approaches which reinforces our conviction that the results in the literature are misleadingly estimating the true vulnerability of the biometric system.
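A minimal sketch of the semi-supervised idea described above (model only the live samples, flag unlikely test samples as spoofs), using a Gaussian mixture as a stand-in density model; the feature dimensionality, mixture size, and 5th-percentile threshold are assumptions for illustration, not the paper's configuration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
live_train = rng.normal(0.0, 1.0, size=(500, 8))        # placeholder live features
test = np.vstack([rng.normal(0.0, 1.0, size=(50, 8)),   # live-like samples
                  rng.normal(4.0, 1.0, size=(50, 8))])  # spoof-like samples

# density model fitted on live samples only -- no fake data needed,
# so no assumption about the intruder's spoofing material is made
density = GaussianMixture(n_components=4, random_state=0).fit(live_train)

# flag as fake anything less likely than the bulk of the live data
threshold = np.percentile(density.score_samples(live_train), 5)
is_fake = density.score_samples(test) < threshold
print(f"{is_fake.mean():.0%} of test samples flagged as spoofs")
```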
Compiling a high-level language for GPUs: (via language support for architectures and compilers)
Languages such as OpenCL and CUDA offer a standard interface for general-purpose programming of GPUs. However, with these languages, programmers must explicitly manage numerous low-level details involving communication and synchronization. This burden makes programming GPUs difficult and error-prone, rendering these powerful devices inaccessible to most programmers. We desire a higher-level programming model that makes GPUs more accessible while also effectively exploiting their computational power. This paper presents features of Lime, a new Java-compatible language targeting heterogeneous systems, that allow an optimizing compiler to generate high quality GPU code. The key insight is that the language type system enforces isolation and immutability invariants that allow the compiler to optimize for a GPU without heroic compiler analysis. Our compiler attains GPU speedups between 75% and 140% of the performance of native OpenCL code.
Suspect tracking based on call logs analysis and visualization
In Thailand, investigators can track and find suspects by using call logs from suspects' phone numbers and their contacts. In many cases, suspects change their phone numbers to avoid tracking, and investigators then have difficulty tracking them from their call logs. Our hypothesis is that each user has a unique calling behavior pattern, and this calling pattern is important for tracking a suspect's telephone number. To compare calling patterns, we consider common contact groups. Thus, the aim of this project is to develop a call-log tracking system that can predict a set of possible new suspect phone numbers and present their contacts' connections in a network diagram visualization based on a graph database (Neo4j). This system is valuable for investigators because it saves the time otherwise spent analyzing excessive call-log data. The system can predict the possible suspect phone numbers, and the visualization enhances the investigator's ability to see relations among related phone numbers. Finally, experimental results on real call logs demonstrate that our method achieves approximately 69% matching accuracy when returning a single candidate suspect phone number and 89% when returning multiple candidates.
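One plausible way to compare calling patterns via common contact groups is a set-similarity score over contacted numbers; the Jaccard measure and the toy numbers below are our assumptions, not necessarily the system's exact metric.

```python
def contact_similarity(contacts_a, contacts_b):
    """Jaccard similarity between two sets of contacted numbers:
    |intersection| / |union|, in [0, 1]."""
    a, b = set(contacts_a), set(contacts_b)
    return len(a & b) / len(a | b) if a | b else 0.0

# contacts of the old suspect number vs. two candidate new numbers
old = ["0811", "0822", "0833", "0844"]
candidates = {"0901": ["0811", "0822", "0833", "0855"],
              "0902": ["0700", "0701"]}

ranked = sorted(candidates,
                key=lambda n: contact_similarity(old, candidates[n]),
                reverse=True)
print(ranked[0])  # -> "0901", the number sharing the most contacts
```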
Experience with ICD-10/DSM-IV substance use disorders.
'Substance Use Disorders' represents a diagnostic subgroup in which ICD-10 and DSM-IV agree in most generalities and many particulars. Both systems' use disorders are based on the 'dependence syndrome' construct by Edwards and Gross and include a large and overlapping set of substance-induced syndromes. Differences at the category level are limited to DSM-IV 'abuse' vs. ICD-10 'harmful use', elimination of 'pathological alcohol intoxication' from DSM-IV and inclusion of substance-induced 'sexual dysfunctions and sleep disorders' in DSM-IV, but not in ICD-10. Cross-system reconciliation would entail few conceptual compromises but many criterion changes. Before embarking on a reconciliation process, the benefits of a common, world-wide nomenclature must be weighed against the many costs of changing either system. Even small changes can yield large differences in rates, reduce comparability across data gathered with different systems and incur considerable costs related to clinician retraining, changes in record keeping and changes in diagnostic interview schedules. Moreover, empirical data can have limited impact on the choices between the two systems because findings are either absent or equivocal, particularly for differences at the criterion level. Nevertheless, more research is needed particularly to examine the comparative reliability and validity of abuse and harmful use diagnoses.
Heterogeneity Image Patch Index and Its Application to Consumer Video Summarization
Automatic video summarization is indispensable for fast browsing and efficient management of large video libraries. In this paper, we introduce an image feature that we refer to as the heterogeneity image patch (HIP) index. The proposed HIP index provides a new entropy-based measure of the heterogeneity of patches within any picture. By evaluating this index for every frame in a video sequence, we generate a HIP curve for that sequence. We exploit the HIP curve in solving two categories of video summarization applications: key frame extraction and dynamic video skimming. Under the key frame extraction framework, a set of candidate key frames is selected from abundant video frames based on the HIP curve. Then, a proposed patch-based image dissimilarity measure is used to create an affinity matrix of these candidates. Finally, a set of key frames is extracted from the affinity matrix using a min-max based algorithm. Under video skimming, we propose a method to measure the distance between a video and its skimmed representation. The video skimming problem is then mapped into an optimization framework and solved by minimizing a HIP-based distance for a set of extracted excerpts. The HIP framework is pixel-based and does not require semantic information or complex camera motion estimation. Our simulation results are based on experiments performed on consumer videos and are compared with state-of-the-art methods. It is shown that the HIP approach outperforms other leading methods, while maintaining low complexity.
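To make the "entropy-based measure of the heterogeneity of patches" concrete, here is an illustrative stand-in (not the paper's exact HIP definition): tile the frame into patches, summarize each patch by a statistic, and take the Shannon entropy of the distribution of those statistics.

```python
import numpy as np

def patch_heterogeneity_index(frame, patch=8, bins=32):
    """Entropy of per-patch mean intensities for a grayscale frame in
    [0, 255]; a flat frame scores low, a textured frame scores high.
    Only mirrors the idea of HIP, not its published definition."""
    h, w = frame.shape
    means = [frame[i:i + patch, j:j + patch].mean()
             for i in range(0, h - patch + 1, patch)
             for j in range(0, w - patch + 1, patch)]
    p, _ = np.histogram(means, bins=bins, range=(0, 255))
    p = p / p.sum()
    p = p[p > 0]                       # drop empty bins before the log
    return float(-(p * np.log2(p)).sum())

flat = np.full((64, 64), 128.0)
noisy = np.random.default_rng(0).uniform(0, 255, size=(64, 64))
print(patch_heterogeneity_index(flat), patch_heterogeneity_index(noisy))
```

Evaluating such an index frame by frame yields a per-sequence curve, which is the object the key-frame selection and skimming steps operate on.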
Exploring Monaural Features for Classification-Based Speech Segregation
Monaural speech segregation has been a very challenging problem for decades. By casting speech segregation as a binary classification problem, recent advances have been made in computational auditory scene analysis on segregation of both voiced and unvoiced speech. So far, pitch and amplitude modulation spectrogram have been used as two main kinds of time-frequency (T-F) unit level features in classification. In this paper, we expand T-F unit features to include gammatone frequency cepstral coefficients (GFCC), mel-frequency cepstral coefficients, relative spectral transform (RASTA) and perceptual linear prediction (PLP). Comprehensive comparisons are performed in order to identify effective features for classification-based speech segregation. Our experiments in matched and unmatched test conditions show that these newly included features significantly improve speech segregation performance. Specifically, GFCC and RASTA-PLP are the best single features in matched-noise and unmatched-noise test conditions, respectively. We also find that pitch-based features are crucial for good generalization to unseen environments. To further explore complementarity in terms of discriminative power, we propose to use a group Lasso approach to select complementary features in a principled way. The final combined feature set yields promising results in both matched and unmatched test conditions.
CloudFlame: Cyberinfrastructure for Combustion Research
Combustion experiments and chemical kinetics simulations generate huge volumes of data and are computationally and data intensive. A cloud-based cyberinfrastructure known as CloudFlame is implemented to improve the computational efficiency, scalability and availability of data for combustion research. The architecture consists of an application layer, a communication layer and distributed cloud servers running in a mixed environment of Windows, Macintosh and Linux systems. The application layer runs software such as the CHEMKIN modeling application. The communication layer provides secure transfer/archiving of kinetic, thermodynamic, transport and gas-surface data using private/public keys between clients and cloud servers. A robust XML schema based on the Process Informatics Model (PrIMe), combined with a workflow methodology for digitizing, verifying and uploading data from scientific graphs/tables to PrIMe, is implemented for chemical molecular structures of compounds. The outcome of using this system by combustion researchers at the King Abdullah University of Science and Technology (KAUST) Clean Combustion Research Center and its collaborating partners indicated a significant improvement in efficiency, in terms of the speed of chemical kinetics simulations and accuracy in searching for the right chemical kinetic data.
Acute renal failure in critically ill patients: a multinational, multicenter study.
CONTEXT Although acute renal failure (ARF) is believed to be common in the setting of critical illness and is associated with a high risk of death, little is known about its epidemiology and outcome or how these vary in different regions of the world. OBJECTIVES To determine the period prevalence of ARF in intensive care unit (ICU) patients in multiple countries; to characterize differences in etiology, illness severity, and clinical practice; and to determine the impact of these differences on patient outcomes. DESIGN, SETTING, AND PATIENTS Prospective observational study of ICU patients who either were treated with renal replacement therapy (RRT) or fulfilled at least 1 of the predefined criteria for ARF from September 2000 to December 2001 at 54 hospitals in 23 countries. MAIN OUTCOME MEASURES Occurrence of ARF, factors contributing to etiology, illness severity, treatment, need for renal support after hospital discharge, and hospital mortality. RESULTS Of 29 269 critically ill patients admitted during the study period, 1738 (5.7%; 95% confidence interval [CI], 5.5%-6.0%) had ARF during their ICU stay, including 1260 who were treated with RRT. The most common contributing factor to ARF was septic shock (47.5%; 95% CI, 45.2%-49.5%). Approximately 30% of patients had preadmission renal dysfunction. Overall hospital mortality was 60.3% (95% CI, 58.0%-62.6%). Dialysis dependence at hospital discharge was 13.8% (95% CI, 11.2%-16.3%) for survivors. Independent risk factors for hospital mortality included use of vasopressors (odds ratio [OR], 1.95; 95% CI, 1.50-2.55; P<.001), mechanical ventilation (OR, 2.11; 95% CI, 1.58-2.82; P<.001), septic shock (OR, 1.36; 95% CI, 1.03-1.79; P = .03), cardiogenic shock (OR, 1.41; 95% CI, 1.05-1.90; P = .02), and hepatorenal syndrome (OR, 1.87; 95% CI, 1.07-3.28; P = .03). CONCLUSION In this multinational study, the period prevalence of ARF requiring RRT in the ICU was between 5% and 6% and was associated with a high hospital mortality rate.
An Improved Extended Kalman Filter for Localization of a Mobile Node with NLOS Anchors
Tracking a mobile node using a wireless sensor network under non-line-of-sight (NLOS) conditions is considered in this work, which is of interest to indoor positioning applications. A hybrid of time difference of arrival (TDOA) and angle of arrival (AOA) measurements, suitable for tracking asynchronous targets, is exploited. The NLOS biases of the TDOA measurements and the position and velocity of the target are included in the state vector. To track the latter, we use a modified form of the extended Kalman filter (EKF) with bound constraints on the NLOS biases, as derived from geometrical considerations. Through simulations, we show that our technique can outperform the EKF and the memoryless constrained optimization techniques.
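A minimal sketch of an EKF measurement update followed by projection of the bias states onto their bounds; the measurement model, the bound values, and clipping-as-constraint-enforcement are assumptions standing in for the paper's geometric derivation.

```python
import numpy as np

def ekf_update_with_bias_bounds(x, P, z, H, R, lo, hi):
    """One (linearized) EKF measurement update, then projection of the
    state onto box constraints. In the paper the bounds apply to the
    NLOS-bias entries and come from geometry; here H, R, lo and hi are
    assumed given."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
    x = x + K @ (z - H @ x)                   # state update
    P = (np.eye(len(x)) - K @ H) @ P          # covariance update
    return np.clip(x, lo, hi), P              # enforce the bound constraints

# toy usage: state = [position, NLOS bias], bias constrained to [0, 5]
x, P = np.array([0.0, 1.0]), np.eye(2)
H, R = np.array([[1.0, 1.0]]), np.array([[0.5]])
x, P = ekf_update_with_bias_bounds(x, P, np.array([3.0]), H, R,
                                   lo=np.array([-np.inf, 0.0]),
                                   hi=np.array([np.inf, 5.0]))
print(x)
```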
Hotspots Detection in Photovoltaic Modules Using Infrared Thermography
Increased interest in generating power from renewable sources has led to an increase in solar photovoltaic (PV) system installations worldwide. Power generation of such systems is affected by factors that can be identified early on through efficient monitoring techniques. This study developed a non-invasive technique that can detect localized heating and quantify the area of hotspots, a potential cause of degradation in photovoltaic systems. This is done by the use of infrared thermography, a well-accepted non-destructive evaluation technique that allows contactless, real-time inspection. In this approach, thermal images or thermograms of an operating PV module were taken using an infrared camera. These thermograms were analyzed by a hotspot detection algorithm implemented in MATLAB. Prior to image processing, images were converted to the CIE L*a*b* color space, making the k-means clustering implementation computationally efficient. K-means clustering is an iterative technique that segments data into k clusters, and it was used here to isolate hotspots. The devised algorithm detected hotspots in the modules being observed. In addition, the average temperature and relative area are provided to quantify each hotspot. Various features and conditions leading to hotspots, such as cracks, junction boxes and shading, were investigated in this study.
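The pipeline above (L*a*b* conversion, k-means segmentation, hotspot isolation and area quantification) was implemented in MATLAB; the following Python sketch mirrors the same steps under assumed details, such as the cluster count and the "hottest cluster = highest mean lightness" rule.

```python
import numpy as np
from skimage import color
from sklearn.cluster import KMeans

def detect_hotspots(thermogram_rgb, k=3):
    """Segment a thermogram with k-means in CIE L*a*b* space and return
    a hotspot mask plus its relative area."""
    lab = color.rgb2lab(thermogram_rgb)
    pixels = lab.reshape(-1, 3)
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(pixels)
    # assume the cluster with the highest mean lightness L* is the hotspot
    hottest = max(range(k), key=lambda c: pixels[labels == c, 0].mean())
    mask = (labels == hottest).reshape(lab.shape[:2])
    return mask, mask.mean()    # relative hotspot area quantifies the defect

# toy usage: a synthetic module image with one bright (hot) square
rng = np.random.default_rng(0)
img = rng.uniform(0.0, 0.3, size=(64, 64, 3))
img[20:30, 20:30] = 1.0
mask, rel_area = detect_hotspots(img)
print(f"hotspot covers {rel_area:.1%} of the module")
```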
Convolutional Neural Networks using Logarithmic Data Representation
Recent advances in convolutional neural networks have considered model complexity and hardware efficiency to enable deployment onto embedded systems and mobile devices. For example, it is now well-known that the arithmetic operations of deep networks can be encoded down to 8-bit fixed-point without significant deterioration in performance. However, further reduction in precision down to as low as 3-bit fixed-point results in significant losses in performance. In this paper we propose a new data representation that enables state-of-the-art networks to be encoded to 3 bits with negligible loss in classification performance. To perform this, we take advantage of the fact that the weights and activations in a trained network naturally have non-uniform distributions. Using non-uniform, base-2 logarithmic representation to encode weights, communicate activations, and perform dot-products enables networks to 1) achieve higher classification accuracies than fixed-point at the same resolution and 2) eliminate bulky digital multipliers. Finally, we propose an end-to-end training procedure that uses log representation at 5-bits, which achieves higher final test accuracy than linear at 5-bits.
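A minimal sketch of base-2 logarithmic quantization of weights; the rounding rule, clipping range, and bit budget below are illustrative assumptions rather than the paper's exact encoding.

```python
import numpy as np

def log2_quantize(w, bits=3, fsr=0):
    """Round values to the nearest power of two in the log domain.

    The stored code would be the exponent (plus sign), clipped to the
    2**bits representable levels ending at full-scale exponent `fsr`;
    dot products against such weights reduce to shifts and adds,
    eliminating bulky digital multipliers."""
    sign = np.sign(w)
    exp = np.clip(np.round(np.log2(np.abs(w) + 1e-12)),
                  fsr - 2 ** bits + 1, fsr)
    return sign * 2.0 ** exp

w = np.array([0.8, -0.3, 0.05, -0.001])
print(log2_quantize(w))   # -> [ 1.  -0.25  0.0625 -0.0078125]
```

Because trained weights and activations are concentrated near zero with long tails, the non-uniform spacing of powers of two places more levels where most values lie, which is the intuition behind the accuracy advantage over fixed-point at the same bit width.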
Map building with mobile robots in dynamic environments
The problem of generating maps with mobile robots has received considerable attention over the past years. Most of the techniques developed so far have been designed for situations in which the environment is static during the mapping process. Dynamic objects, however, can lead to serious errors in the resulting maps such as spurious objects or misalignments due to localization errors. In this paper we consider the problem of creating maps with mobile robots in dynamic environments. We present a new approach that interleaves mapping and localization with a probabilistic technique to identify spurious measurements. In several experiments we demonstrate that our algorithm generates accurate 2d and 3d in different kinds of dynamic indoor and outdoor environments. We also use our algorithm to isolate the dynamic objects and to generate three-dimensional representation of them.
Phone-level pronunciation scoring and assessment for interactive language learning
This paper investigates a method of automatic pronunciation scoring for use in computer-assisted language learning (CALL) systems. The method utilises a likelihood-based `Goodness of Pronunciation' (GOP) measure which is extended to include individual thresholds for each phone based on both averaged native confidence scores and on rejection statistics provided by human judges. Further improvements are obtained by incorporating models of the subject's native language and by augmenting the recognition networks to include expected pronunciation errors. The various GOP measures are assessed using a specially recorded database of non-native speakers which has been annotated to mark phone-level pronunciation errors. Since pronunciation assessment is highly subjective, a set of four performance measures has been designed, each of them measuring different aspects of how well computer-derived phone-level scores agree with human scores. These performance measures are used to cross-validate the reference annotations and to assess the basic GOP algorithm and its refinements. The experimental results suggest that a likelihood-based pronunciation scoring metric can achieve usable performance, especially after applying the various enhancements.
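For context, a likelihood-ratio GOP score of the kind described is commonly written as follows; this is our reconstruction from the abstract's description, with notation assumed rather than quoted from the paper.

```latex
\mathrm{GOP}(p) \;=\; \frac{1}{NF(p)}\,
\left|\, \log \frac{p\big(O^{(p)} \mid p\big)}
                   {\max_{q \in Q}\, p\big(O^{(p)} \mid q\big)} \right|
```

Here O^{(p)} denotes the acoustic frames aligned to phone p, Q is the phone set, and NF(p) is the number of frames; a phone would be flagged as mispronounced when its score exceeds the phone-specific threshold mentioned above.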
A comparison of social, learning, and financial strategies on crowd engagement and output quality
A significant challenge for crowdsourcing has been increasing worker engagement and output quality. We explore the effects of social, learning, and financial strategies, and their combinations, on increasing worker retention across tasks and change in the quality of worker output. Through three experiments, we show that 1) using these strategies together increased workers' engagement and the quality of their work; 2) a social strategy was most effective for increasing engagement; 3) a learning strategy was most effective in improving quality. The findings of this paper provide strategies for harnessing the crowd to perform complex tasks, as well as insight into crowd workers' motivation.
A comparison between electromyography-driven robot and passive motion device on wrist rehabilitation for chronic stroke.
BACKGROUND The effect of using robots to improve motor recovery has received increased attention, even though the most effective protocol remains a topic of study. OBJECTIVE To compare the training effects of treatments on the wrist joint of subjects with chronic stroke with an interactive rehabilitation robot and a robot with continuous passive motion. METHODS This study was a single-blinded randomized controlled trial with a 3-month follow-up. Twenty-seven hemiplegic subjects with chronic stroke were randomly assigned to receive 20-session wrist training with a continuous electromyography (EMG)-driven robot (interactive group, n = 15) or a passive motion device (passive group, n = 12), completed within 7 consecutive weeks. Training effects were evaluated with clinical scores from pretraining and posttraining tests (Fugl-Meyer Assessment [FMA] and Modified Ashworth Score [MAS]) and with session-by-session EMG parameters (EMG activation level and co-contraction index). RESULTS Significant improvements in FMA scores (shoulder/elbow and wrist/hand) were found in the interactive group (P < .05). Significant decreases in the MAS were observed in the wrist and elbow joints for the interactive group and in the wrist joint for the passive group (P < .05). These MAS changes were associated with the decrease in EMG activation level of the flexor carpi radialis and the biceps brachii for the interactive group (P < .05). Muscle coordination at the wrist and elbow joints improved in the interactive group, as shown by the EMG co-contraction indexes across the training sessions (P < .05). CONCLUSIONS The interactive treatment improved muscle coordination and reduced spasticity after the training for both the wrist and elbow joints, and these gains persisted for 3 months. The passive mode training mainly reduced the spasticity in the wrist flexor.
Would Older Adults with Mild Cognitive Impairment Adhere to and Benefit from a Structured Lifestyle Activity Intervention to Enhance Cognition?: A Cluster Randomized Controlled Trial
BACKGROUND Epidemiologic evidence suggests that cognitive and physical activities are associated with better cognition in late life. The present study was conducted to examine the possible benefits of four structured lifestyle activity interventions and compare their effectiveness in optimizing cognition for older adults with mild cognitive impairment (MCI). METHOD AND FINDINGS This was a 12-month cluster randomized controlled trial. 555 community-dwelling Chinese older adults with MCI (295 with multiple-domain deficits (mdMCI), 260 with single-domain deficit (sdMCI)) were recruited. Participants were randomized into physical exercise (P), cognitive activity (C), integrated cognitive and physical exercise (CP), and social activity (S, active control) groups. Interventions comprised of one-hour structured activities three times per week. Primary outcome was Clinical Dementia Rating sum of boxes (CDR-SOB) scores. Secondary outcomes included Chinese versions of Alzheimer's Disease Assessment Scale - Cognitive subscale (ADAS-Cog), delayed recall, Mini-Mental State Examination, Category Verbal Fluency Test (CVFT) and Disability Assessment for Dementia - Instrumental Activities of Daily Living (DAD-IADL). Percentage adherence to programs and factors affecting adherence were also examined. At 12th month, 423 (76.2%) completed final assessment. There was no change in CDR-SOB and DAD-IADL scores across time and intervention groups. Multilevel normal model and linear link function showed improvement in ADAS-Cog, delayed recall and CVFT with time (p<0.05). Post-hoc subgroup analyses showed that the CP group, compared with other intervention groups, had more significant improvements of ADAS-Cog, delayed recall and CVFT performance with sdMCI participants (p<0.05). Overall adherence rate was 73.3%. Improvements in ADAS-Cog and delayed recall scores were associated with adherence after controlling for age, education, and intervention groups (univariate analyses). CONCLUSIONS Structured lifestyle activity interventions were not associated with changes in everyday functioning, albeit with some improvements in cognitive scores across time. Higher adherence was associated with greater improvement in cognitive scores. Factors to enhance adherence should be specially considered in the design of psychosocial interventions for older adults with cognitive decline. TRIAL REGISTRATION ClinicalTrials.gov ChiCTR-TRC-11001359.
Effects of pallidotomy and bilateral subthalamic stimulation on cognitive function in Parkinson disease
Unilateral pallidotomy and bilateral subthalamic deep brain stimulation (STN-DBS) for Parkinson’s disease (PD) have demonstrated a positive effect on motor functions. However, further studies are needed of the unintended cognitive effects accompanying these new surgical procedures. We studied the consequences of unilateral pallidotomy and STN-DBS on cognitive function in a controlled comparative design. Sixteen consecutive PD patients were assessed before and 6 months after unilateral pallidotomy (n = 8) and bilateral STN-DBS (n = 8). The same assessments were performed in a control group of eight non-operated matched PD patients recruited from surgery candidates who refused operation. The neuropsychological battery consisted of test measuring memory, attention, arithmetic, problem solving and language, as well as visuospatial, executive and premotor functions. An analysis of variance (factors time and treatment) was applied. No statistically significant differences were found in the presurgical evaluation of clinical and demographic data for the three treatment groups. The controlled comparison between presurgical and postsurgical performance revealed no significant changes in the cognitive domains tested in the pallidotomy group. The STN-DBS group showed a selective significant worsening of semantic verbal fluency (p = 0.005). This controlled comparative study suggests that neither unilateral pallidotomy nor bilateral STN-DBS have global adverse cognitive consequences, but bilateral STN-DBS may cause a selective decrease in verbal fluency.
Fast Marine Route Planning for UAV Using Improved Sparse A* Algorithm
This paper focuses on route planning for unmanned aircraft in marine environments. Firstly, new heuristic information is adopted, such as threat-zone, turn-maneuver and forbid-zone costs, in addition to voyage heuristic information. Then, the cost function is normalized to obtain more flexible and reasonable routes. Finally, an improved sparse A* search algorithm is employed to enhance planning efficiency and reduce planning time. Experimental results showed that the improved algorithm could quickly find a combined optimal route for aircraft in a maritime environment that detours around threat-zones with fewer turn maneuvers, completely avoids forbid-zones, and shortens the voyage.
Discrimination-aware Channel Pruning for Deep Neural Networks
Channel pruning is one of the predominant approaches for deep model compression. Existing pruning methods either train from scratch with sparsity constraints on channels, or minimize the reconstruction error between the pre-trained feature maps and the compressed ones. Both strategies suffer from some limitations: the former kind is computationally expensive and difficult to converge, whilst the latter kind optimizes the reconstruction error but ignores the discriminative power of channels. In this paper, we investigate a simple-yet-effective method called discrimination-aware channel pruning (DCP) to choose those channels that really contribute to discriminative power. To this end, we introduce additional discrimination-aware losses into the network to increase the discriminative power of intermediate layers and then select the most discriminative channels for each layer by considering the additional loss and the reconstruction error. Last, we propose a greedy algorithm to conduct channel selection and parameter optimization in an iterative way. Extensive experiments demonstrate the effectiveness of our method. For example, on ILSVRC-12, our pruned ResNet-50 with 30% reduction of channels outperforms the baseline model by 0.39% in top-1 accuracy.
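The greedy selection step can be sketched as follows; the stand-in objective (layer reconstruction error minus a per-channel discriminative score) and all tensor shapes are assumptions for illustration, not the paper's exact losses.

```python
import numpy as np

def joint_loss(F, S, y, w, disc_scores, lam):
    """Stand-in objective: reconstruction error of the pruned layer output
    plus the (negated) discriminative value of the kept channels S.
    F: (n, C) per-channel features; w: (C,) pre-trained output weights;
    disc_scores[c]: channel c's assumed discrimination-aware score."""
    mask = np.zeros(F.shape[1])
    mask[list(S)] = 1.0
    rec = np.mean((y - F @ (w * mask)) ** 2)
    return rec - lam * sum(disc_scores[c] for c in S)

def select_channels(F, y, w, disc_scores, keep, lam=1.0):
    """Greedy selection in the spirit of DCP: repeatedly add the channel
    that most reduces the combined loss; the real method alternates this
    with re-optimizing the remaining parameters."""
    S = set()
    while len(S) < keep:
        candidates = (c for c in range(F.shape[1]) if c not in S)
        best = min(candidates,
                   key=lambda c: joint_loss(F, S | {c}, y, w, disc_scores, lam))
        S.add(best)
    return sorted(S)

# toy usage with random features and a known linear "pre-trained" output
rng = np.random.default_rng(0)
F = rng.normal(size=(100, 16))
w = rng.normal(size=16)
disc = rng.uniform(size=16)
print(select_channels(F, F @ w, w, disc, keep=4))
```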
Inferring Networks of Substitutable and Complementary Products
To design a useful recommender system, it is important to understand how products relate to each other. For example, while a user is browsing mobile phones, it might make sense to recommend other phones, but once they buy a phone, we might instead want to recommend batteries, cases, or chargers. In economics, these two types of recommendations are referred to as substitutes and complements: substitutes are products that can be purchased instead of each other, while complements are products that can be purchased in addition to each other. Such relationships are essential as they help us to identify items that are relevant to a user's search. Our goal in this paper is to learn the semantics of substitutes and complements from the text of online reviews. We treat this as a supervised learning problem, trained using networks of products derived from browsing and co-purchasing logs. Methodologically, we build topic models that are trained to automatically discover topics from product reviews that are successful at predicting and explaining such relationships. Experimentally, we evaluate our system on the Amazon product catalog, a large dataset consisting of 9 million products, 237 million links, and 144 million reviews.
A bank customer credit evaluation based on the decision tree and the simulated annealing algorithm
C4.5 is a learning algorithm that adopts a local search strategy, so it cannot guarantee the best decision rules. The simulated annealing algorithm, on the other hand, is a global optimization algorithm and avoids this drawback of C4.5. This paper proposes a new credit evaluation method based on a decision tree and the simulated annealing algorithm. The experimental results demonstrate that the proposed method is effective.
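A generic simulated annealing loop of the kind combined here with C4.5; the neighbor move, cost function, and cooling schedule are placeholders that would be instantiated over decision rules (e.g., cost = misclassification rate on training data).

```python
import math
import random

def simulated_annealing(initial, neighbor, cost, t0=1.0, cooling=0.95, steps=500):
    """Generic simulated annealing: 'neighbor' perturbs a candidate,
    'cost' scores it; hyperparameters are illustrative."""
    state = best = initial
    t = t0
    for _ in range(steps):
        cand = neighbor(state)
        delta = cost(cand) - cost(state)
        # accept improvements always, worse moves with probability e^(-delta/t)
        if delta < 0 or random.random() < math.exp(-delta / t):
            state = cand
        if cost(state) < cost(best):
            best = state
        t *= cooling                  # geometric cooling schedule
    return best

# toy usage: minimize (x - 3)^2 over the reals
random.seed(0)
best = simulated_annealing(0.0,
                           neighbor=lambda x: x + random.uniform(-1, 1),
                           cost=lambda x: (x - 3) ** 2)
print(round(best, 2))  # close to 3.0
```

The occasional acceptance of worse candidates is what lets the search escape the local optima that a purely greedy rule induction gets stuck in.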
A Hidden Markov Model based driver intention prediction system
Awareness of other vehicles' intentions may help human drivers or autonomous vehicles judge risk and avoid traffic accidents. This paper proposes an approach to predicting a driver's intentions using a Hidden Markov Model (HMM) that has access to the control inputs and the state of the vehicle. The driver performs maneuvers including stop/non-stop, change lane left/right and turn left/right in a simulator, in both highway and urban environments. Moreover, the structure of the road (curved road) is also taken into account for classification. Experiments were conducted with different input sets (steering wheel data with and without vehicle state data) to compare the system performance.
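A common way to realize HMM-based intention prediction is to train one HMM per maneuver and classify an observed sequence by maximum likelihood; the sketch below (using the hmmlearn package) assumes synthetic two-feature observations and hyperparameters not specified in the abstract.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

# one HMM per maneuver class, trained on that maneuver's sequences
rng = np.random.default_rng(0)
train = {
    "change_lane_left": rng.normal(+1.0, 1.0, size=(200, 2)),
    "change_lane_right": rng.normal(-1.0, 1.0, size=(200, 2)),
}
models = {name: GaussianHMM(n_components=3, covariance_type="diag",
                            n_iter=50, random_state=0).fit(X)
          for name, X in train.items()}

# classify an unlabeled observation window by maximum log-likelihood
observed = rng.normal(+1.0, 1.0, size=(30, 2))
predicted = max(models, key=lambda name: models[name].score(observed))
print(predicted)   # -> "change_lane_left"
```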
P1B-10 Advantages of Capacitive Micromachined Ultrasonics Transducers (CMUTs) for High Intensity Focused Ultrasound (HIFU)
In the past ten years, high intensity focused ultrasound (HIFU) has become popular for minimally invasive and non-invasive therapies. Traditionally, piezoelectric transducers have been used for HIFU applications, but capacitive micromachined ultrasonic transducers (CMUTs) have been shown to have advantages, including ease of fabrication and efficient performance. In this paper, we show the fabrication and testing of CMUTs specifically designed for HIFU. We compare the operation of these designs with finite element models. In addition, we demonstrate that CMUTs can operate under high-pressure and continuous wave (CW) conditions with minimal self-heating, a problem that piezoelectric transducers often face. Finally, we demonstrate MR temperature monitoring of the heating created by an unfocused HIFU CMUT.
The anuran calling repertoire in the light of social context
Frogs are immediately associated with the conspicuous vocalizations they emit during the breeding season, which has inspired many scientists to study their acoustic communication. Many types of calls have now been described, and we felt the need to review the terminology currently and historically applied. As a result, we propose classifying anuran vocalizations into three major categories: reproductive, aggressive, and defensive calls. These categories are subdivided according to the social context of emission, mostly also reflecting acoustic differences among call types. Some call types are here proposed to be synonyms of the most used and inclusive terms. We also suggest terminology for basic bioacoustic analyses, mostly applied in call descriptions. Furthermore, we present cases of complex calls, including call gradation. Finally, based on novel data (such as an unusual case of juvenile vocalizations), we discuss situations in which it is difficult to classify call types, reflecting the need for experimental studies.
Query by Committee
We propose an algorithm called query by committee, in which a committee of students is trained on the same data set. The next query is chosen according to the principle of maximal disagreement. The algorithm is studied for two toy models: the high-low game and perceptron learning of another perceptron. As the number of queries goes to infinity, the committee algorithm yields asymptotically finite information gain. This leads to generalization error that decreases exponentially with the number of examples. This is in marked contrast to learning from randomly chosen inputs, for which the information gain approaches zero and the generalization error decreases with a relatively slow inverse power law. We suggest that asymptotically finite information gain may be an important characteristic of good query algorithms.
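A minimal sketch of the query-by-committee principle with a bootstrap-trained committee of logistic regressions; the committee size, learner choice, and disagreement measure (distance of the vote from an even split) are illustrative assumptions rather than the paper's setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
labeled = np.arange(20)                       # small labeled pool
unlabeled = np.arange(20, 200)

# committee of students trained on bootstrap resamples of the labeled pool
committee = []
for _ in range(5):
    idx = rng.choice(labeled, size=labeled.size, replace=True)
    while len(set(y[idx])) < 2:               # ensure both classes present
        idx = rng.choice(labeled, size=labeled.size, replace=True)
    committee.append(LogisticRegression().fit(X[idx], y[idx]))

# principle of maximal disagreement: query the point whose committee
# vote is closest to an even split
votes = np.array([m.predict(X[unlabeled]) for m in committee])
query = unlabeled[np.argmin(np.abs(votes.mean(axis=0) - 0.5))]
print("next query:", query)
```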
Novel schemes for local area network emulation in passive optical networks with RF subcarrier multiplexed customer traffic
This paper proposes two novel optical layer schemes for intercommunication between customers in a passive optical network (PON). The proposed schemes use radio frequency (RF) subcarrier multiplexed transmission for intercommunication between customers in conjunction with upstream access to the central office (CO) at baseband. One scheme employs a narrowband fiber Bragg grating (FBG) placed close to the star coupler in the feeder fiber of the PON, while the other uses an additional short-length distribution fiber from the star coupler to each customer unit for the redirection of customer traffic. In both schemes, only one optical transmitter is required at each optical network unit (ONU) for the transmission of customer traffic and upstream access traffic. Moreover, downstream bandwidth is not consumed by customer traffic unlike in previously reported techniques. The authors experimentally verify the feasibility of both schemes with 1.25 Gb/s upstream baseband transmission to the CO and 155 Mb/s customer data transmission on the RF carrier. The experimental results obtained from both schemes are compared, and the power budgets are calculated to analyze the scalability of each scheme. Further, the proposed schemes were discussed in terms of upgradability of the transmission bit rates for the upstream access traffic, bandwidth requirements at the customer premises, dispersion tolerance, and stability issues for the practical implementations of the network.
Regular bipolar fuzzy graphs
We introduce the concepts of regular and totally regular bipolar fuzzy graphs. We prove a necessary and sufficient condition under which regular bipolar fuzzy graphs and totally regular bipolar fuzzy graphs are equivalent. We introduce the notion of bipolar fuzzy line graphs and present some of their properties. We state a necessary and sufficient condition for a bipolar fuzzy graph to be isomorphic to its corresponding bipolar fuzzy line graph. We examine when an isomorphism between two bipolar fuzzy graphs follows from an isomorphism of their corresponding bipolar fuzzy line graphs.
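For reference, the degree notions behind these regularity concepts can be written as follows; this is reconstructed from the usual bipolar fuzzy graph definitions, not quoted from the paper.

```latex
% Degree of a vertex x in a bipolar fuzzy graph G = (A, B) on V, where
% B = (\mu_B^P, \mu_B^N) assigns positive/negative memberships to edges:
\deg(x) \;=\; \Big( \sum_{xy \in E} \mu_B^{P}(xy), \; \sum_{xy \in E} \mu_B^{N}(xy) \Big)

% G is regular of degree (k_1, k_2) if \deg(x) = (k_1, k_2) for all x \in V;
% G is totally regular if \deg(x) + \big(\mu_A^{P}(x), \mu_A^{N}(x)\big)
% is the same pair for every x \in V.
```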
DATA MINING APPROACH FOR CLASSIFYING TWITTER ’ S USERS
Social networks have become the most important communication channels in recent years and are popular among different social groups. These networks affect the ideas and policies of individuals, groups and communities. Every day, millions of tweets are published on Twitter. These tweets reflect the opinions and beliefs of their publishers and affect others as well. Therefore, it is important to analyze these tweets and identify and classify the trends of different users. This research aims to classify social network users into anomalous groups, such as terrorist or dissident groups, by analyzing tweet data on Twitter, and then to identify an anonymous user's affiliation with these groups. To address this problem, we first extract a set of features to characterize each group using different data mining techniques and store these features in a database. Text mining, sentiment analysis, and opinion mining techniques are used to accomplish this extraction. The objective of the extraction is to measure the similarity of a selected user's tweets with respect to the extracted features; a high percentage of similarity between the user's tweets and a group's characteristics exposes his/her affiliation with that group.
Public history and popular memory: issues in the commemoration of the British militant suffrage campaign
Drawing on Raphael Samuel's work on the construction of historical knowledge, this article argues that British militant suffrage feminists had a strong sense of their role in history. Once the vote was won, militants became the first public historians of their own suffrage history by collecting ‘relics’ of the campaign and commemorating suffrage events. The work of curators, especially at the Museum of London and National Library of Australia, Canberra, also enabled wider access to the movement's ephemera. Subsequent generations have ‘remembered’ suffrage in different ways, including depiction in fiction, film, local histories and the physical landscape. An exploration of such depictions might help us start to understand the continuing fascination with this aspect of women's history.
Steep Subthreshold Slope n- and p-Type Tunnel-FET Devices for Low-Power and Energy-Efficient Digital Circuits
In this paper, novel n- and p-type tunnel field-effect transistors (T-FETs) based on a heterostructure Si/intrinsic-SiGe channel layer are proposed, which exhibit very small subthreshold swings as well as low threshold voltages. The design parameters for improvement of the characteristics of the devices are studied and optimized based on theoretical principles and simulation results. The proposed devices are designed to have extremely low off currents on the order of 1 fA/μm and are engineered to exhibit substantially higher on currents compared with previously reported T-FET devices. Subthreshold swings as low as 15 mV/dec and threshold voltages as low as 0.13 V are achieved in these devices. Moreover, the T-FETs are designed to exhibit input and output characteristics compatible with CMOS-type digital-circuit applications. Using the proposed n- and p-type devices, the implementation of an inverter circuit based on T-FETs is reported. The performance of the T-FET-based inverter is compared with the 65-nm low-power CMOS-based inverter, and a gain of ~10⁴ is achieved in static power consumption for the T-FET-based inverter with smaller gate delay.
Constructive Language in News Comments
We discuss the characteristics of constructive news comments, and present methods to identify them. First, we define the notion of constructiveness. Second, we annotate a corpus for constructiveness. Third, we explore whether available argumentation corpora can be useful to identify constructiveness in news comments. Our model trained on argumentation corpora achieves a top accuracy of 72.59% (baseline=49.44%) on our crowd-annotated test data. Finally, we examine the relation between constructiveness and toxicity. In our crowd-annotated data, 21.42% of the non-constructive comments and 17.89% of the constructive comments are toxic, suggesting that non-constructive comments are not much more toxic than constructive comments.
RF HARVESTING USING ANTENNA STRUCTURES ON FOIL
In this paper we present a device for RF harvesting, i.e. harvesting the energy contained in electromagnetic waves. We have designed, modeled and fabricated an RF harvester using optimized antenna structures. Energy densities in e.g. GSM or WiFi frequency bands are very low (< 1 μW/cm²), so the harvesting antennas need to have a considerable area. An alternative to ambient RF energy harvesting is to use a dedicated RF source, which enables smaller antenna surfaces.
Machine Learning-Based Prototyping of Graphical User Interfaces for Mobile Apps
It is common practice for developers of user-facing software to transform a mock-up of a graphical user interface (GUI) into code. This process takes place both at an application’s inception and in an evolutionary context as GUI changes keep pace with evolving features. Unfortunately, this practice is challenging and time-consuming. In this paper, we present an approach that automates this process by enabling accurate prototyping of GUIs via three tasks: detection, classification, and assembly. First, logical components of a GUI are detected from a mock-up artifact using either computer vision techniques or mock-up metadata. Then, software repository mining, automated dynamic analysis, and deep convolutional neural networks are utilized to accurately classify GUI-components into domain-specific types (e.g., toggle-button). Finally, a data-driven, K-nearest-neighbors algorithm generates a suitable hierarchical GUI structure from which a prototype application can be automatically assembled. We implemented this approach for Android in a system called ReDraw. Our evaluation illustrates that ReDraw achieves an average GUI-component classification accuracy of 91% and assembles prototype applications that closely mirror target mock-ups in terms of visual affinity while exhibiting reasonable code structure. Interviews with industrial practitioners illustrate ReDraw’s potential to improve real development workflows.
A Deep Convolutional Neural Network-Based Framework for Automatic Fetal Facial Standard Plane Recognition
Ultrasound imaging has become a prevalent examination method in prenatal diagnosis. Accurate acquisition of fetal facial standard plane (FFSP) is the most important precondition for subsequent diagnosis and measurement. In the past few years, considerable effort has been devoted to FFSP recognition using various hand-crafted features, but the recognition performance is still unsatisfactory due to the high intraclass variation of FFSPs and the high degree of visual similarity between FFSPs and other non-FFSPs. To improve the recognition performance, we propose a method to automatically recognize FFSP via a deep convolutional neural network (DCNN) architecture. The proposed DCNN consists of 16 convolutional layers with small 3 × 3 size kernels and three fully connected layers. A global average pooling is adopted in the last pooling layer to significantly reduce network parameters, which alleviates the overfitting problems and improves the performance under limited training data. Both the transfer learning strategy and a data augmentation technique tailored for FFSP are implemented to further boost the recognition performance. Extensive experiments demonstrate the advantage of our proposed method over traditional approaches and the effectiveness of DCNN to recognize FFSP for clinical diagnosis.
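A compressed sketch of the described design pattern, stacked 3 × 3 convolutions closed by global average pooling in place of large fully connected layers, is shown below; the channel counts, input size, and single-layer classification head are placeholder assumptions, not the paper's exact configuration.

```python
# Sketch of the pattern described above: 16 small 3x3 convolutions, then global
# average pooling so the classifier head carries few parameters. Channel counts
# and the collapsed head are placeholders, not the paper's architecture.
import torch
import torch.nn as nn

def conv_block(c_in, c_out, n_convs):
    layers = []
    for _ in range(n_convs):
        layers += [nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
                   nn.ReLU(inplace=True)]
        c_in = c_out
    layers.append(nn.MaxPool2d(2))
    return layers

net = nn.Sequential(
    *conv_block(1, 64, 2), *conv_block(64, 128, 2),
    *conv_block(128, 256, 4), *conv_block(256, 512, 4),
    *conv_block(512, 512, 4),            # 16 conv layers in total, all 3x3
    nn.AdaptiveAvgPool2d(1),             # global average pooling
    nn.Flatten(),
    nn.Linear(512, 2),                   # FFSP vs. non-FFSP (placeholder head)
)
logits = net(torch.randn(1, 1, 224, 224))
```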
Differential contribution of voltage-dependent potassium currents to neuronal excitability
The excitability of a neuron depends on the different inward and outward currents that flow across its membrane. The specific role of A-type and persistent K-currents in shaping neuronal excitability remains partially unexplained by electrophysiological data. Drosophila motor neurons provide a model system to study the differential contributions of voltage-dependent K-currents to the dynamics of the membrane potential. In this work, the theoretical plausibility of existing hypotheses about the differential involvement of A-type currents in delaying spiking activity is examined through a mathematical model constructed using known macroscopic biophysical properties of voltage-dependent, slowly inactivating and fast inactivating A-type Drosophila channels. The model is constrained first by electrophysiological data, and an analysis of the membrane dynamics is performed through systematic variation of the ratios of the maximal whole-membrane currents. Different ratios among the numbers of the different channels in the model capture the basic features of responses to square pulse stimulation previously observed in Drosophila embryonic, larval, and adult motor neurons, Kenyon cells, and giant cultured cells. The model supports the notion that slowly inactivating potassium currents are necessary for sustained spiking activity. The model also supports the hypothesis that early inactivating A-type K (Shal) channels are responsible for experimentally observed delays in the onset of spiking. In contrast, Shaker A-type channels with more depolarized steady state inactivation also contribute to the delay to first spike, but less than Shal. Instead, Shaker channels gate single and repetitive spiking. Furthermore, the model elucidates a biophysical mechanism that allows neurons to diversify their function, in this case by combining additive and resonant properties. Our modeling results are consistent with experimental results from different preparations including Drosophila and lobster.
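As a point of reference, conductance-based models of this kind are typically built around a membrane equation of the following generic form; the specific currents and gating exponents shown are textbook assumptions, not the paper's exact formulation.

```latex
% Generic conductance-based membrane equation (an assumed textbook form):
C_m \frac{dV}{dt} = I_{\mathrm{ext}} - g_L (V - E_L) - I_{Na} - I_{K} - I_A,
\qquad
I_A = \bar{g}_A\, a^{p}\, b\, (V - E_K)
% a and b are activation and inactivation gating variables; the slowly
% inactivating and fast inactivating A-type components differ chiefly in the
% kinetics and steady state of the inactivation variable b.
```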
Innovative and Reliable Power Modules: A Future Trend and Evolution of Technologies
Since the introduction of the first power module by Semikron in 1975, many innovations have been made to improve the thermal, electrical, and mechanical performance of power modules. These innovations in packaging technology focus on the enhancement of the heat dissipation and thermal cycling capability of the modules. Thermal cycles, caused by varying load and environmental operating conditions, induce high mechanical stress in the interconnection layers of the power module due to the different coefficients of thermal expansion (CTE), leading to fatigue and growth of microcracks in the bonding materials. As a result, the lifetime of power modules can be severely limited in practical applications. Furthermore, to reduce the size and weight of converters, the semiconductors are being operated at higher junction temperatures. Higher temperatures are especially of great interest for use of wide-bandgap materials, such as SiC and GaN, because these materials leverage their material characteristics, particularly at higher temperatures. To satisfy these tightened requirements, on the one hand, conventional power modules, i.e., direct bonded Cu (DBC)-based systems with bond wire contacts, have been further improved. On the other hand, alternative packaging techniques, e.g., chip embedding into printed circuit boards (PCBs) and power module packaging based on the selective laser melting (SLM) technique, have been developed, which might constitute an alternative to conventional power modules in certain applications.
Interactions between auditory and dorsal premotor cortex during synchronization to musical rhythms
When listening to music, we often spontaneously synchronize our body movements to a rhythm's beat (e.g. tapping our feet). The goals of this study were to determine how features of a rhythm, such as metric structure, can facilitate motor responses, and to elucidate the neural correlates of these auditory-motor interactions using fMRI. Five variants of an isochronous rhythm were created by increasing the contrast in sound amplitude between accented and unaccented tones, progressively highlighting the rhythm's metric structure. Subjects tapped in synchrony to these rhythms, and as metric saliency increased across the five levels, louder tones evoked longer tap durations with concomitant increases in the BOLD response at auditory and dorsal premotor cortices. The functional connectivity between these regions was also modulated by the stimulus manipulation. These results show that metric organization, as manipulated via intensity accentuation, modulates motor behavior and neural responses in auditory and dorsal premotor cortex. Auditory-motor interactions may take place at these regions with the dorsal premotor cortex interfacing sensory cues with temporally organized movement.
"Constant, constant, multi-tasking craziness": managing multiple working spheres
Most current designs of information technology are based on the notion of supporting distinct tasks such as document production, email usage, and voice communication. In this paper we present empirical results that suggest that people organize their work in terms of much larger and thematically connected units of work. We present results of fieldwork observation of information workers in three different roles: analysts, software developers, and managers. We discovered that all of these types of workers experience a high level of discontinuity in the execution of their activities. People average about three minutes on a task and somewhat more than two minutes using any electronic tool or paper document before switching tasks. We introduce the concept of working spheres to explain the inherent way in which individuals conceptualize and organize their basic units of work. People worked in an average of ten different working spheres. Working spheres are also fragmented; people spend about 12 minutes in a working sphere before they switch to another. We argue that design of information technology needs to support people's continual switching between working spheres.
Once versus twice daily injection of enoxaparin for thromboprophylaxis in bariatric surgery: effects on antifactor Xa activity and procoagulant microparticles. A randomized controlled study.
BACKGROUND The optimal scheme of thromboprophylaxis in bariatric surgery remains uncertain, because clinical practice differs between countries and randomized trials are lacking. OBJECTIVES The primary objective of this randomized multicenter study was to determine the optimal regimen of enoxaparin providing an antifactor Xa peak activity between .3 and .5 IU/mL at equilibrium and to evaluate the course of procoagulant microparticles (MPs). SETTING University hospital. METHODS A total of 164 patients scheduled for gastric bypass were allocated to 3 groups (A, B, and C) of enoxaparin treatment (4000, 6000, or 2×4000 IU, respectively). Antifactor Xa activity was measured before and 4 hours after each injection from D0 to D2. Doppler screening of the lower limbs was performed at D1, D9, and D30. Bleeding events (BEs) and thrombotic events (TEs) were recorded during the first postoperative month. Total MPs were measured at D0, D9, and D30. MPs of leucocyte, platelet, and granulocyte origin were assessed in one third of the patients from each group. The 3 groups were compared by ANOVA. RESULTS A total of 135 patients were analyzed. The equilibrium of antifactor Xa peak levels was obtained 52 hours after the presurgery injection, and 12.8%, 56.4%, and 27.3% of the patients reached the target in groups A, B, and C, respectively (P<.001). No TE was detected. BEs occurred in 1, 2, and 6 patients in groups A, B, and C, respectively. Total MPs remained unchanged over time. While no significant variation was observed in the other groups, platelet GP1b(+)-MPs increased (P = .01) at D9 in group C, suggesting an incomplete control of anticoagulation leading to cell activation and procoagulant MP release that was confirmed by the higher MP levels measured at D30 (P = .04). CD66(+)-MPs were also highly elevated at D9 and D30 in group C, indicating a granulocyte contribution. CONCLUSIONS This study shows that a single dose of enoxaparin 6000 IU/d allowed most of the patients to reach the target range of antifactor Xa activity without increasing the bleeding risk, with the most likely efficient reduction of procoagulant MPs.
A systematic survey on automated concurrency bug detection, exposing, avoidance, and fixing techniques
Currently, concurrent programs are becoming increasingly widespread to meet the demands of the rapid development of multi-core hardware. However, it could be quite expensive and challenging to guarantee the correctness and efficiency of concurrent programs. In this paper, we provide a systematic review of the existing research on fighting against concurrency bugs, including automated concurrency bug exposing, detection, avoidance, and fixing. These four categories cover the different aspects of concurrency bug problems and are complementary to each other. For each category, we survey the motivation, key issues, solutions, and the current state of the art. In addition, we summarize the classical benchmarks widely used in previous empirical studies and the contribution of active research groups. Finally, some future research directions on concurrency bugs are recommended. We believe this survey would be useful for concurrency programmers and researchers.
Financial Fraud, Director Reputation, and Shareholder Wealth
We investigate the reputational impact of financial fraud for outside directors based on a sample of firms facing shareholder class action lawsuits. Following a financial fraud lawsuit, outside directors do not face abnormal turnover on the board of the sued firm but experience a significant decline in other board seats held. The decline in other directorships is greater for more severe cases of fraud and when the outside director bears greater responsibility for monitoring fraud. Interlocked firms that share directors with the sued firm exhibit valuation declines at the lawsuit filing. When fraud-affiliated directors depart from boards of interlocked firms, these firms experience a significant increase in valuation.
MDM/KDD 2002: Multimedia Data Mining between Promises and Problems
This report presents a brief overview of multimedia data mining and the corresponding workshop series at ACM SIGKDD conference series on data mining and knowledge discovery. It summarizes the presentations, conclusions and directions for future work that were discussed during the 3rd edition of the International Workshop on Multimedia Data Mining, conducted in conjunction with KDD-2002 in Edmonton, Alberta, Canada.
Nurses' Perception of Medication Administration Errors
Background: Medication administration errors (MAEs) are a key threat to safe healthcare services. The purpose of this study is to investigate factors associated with nurses’ medication administration errors. Design: A descriptive, correlational, cross-sectional design was used. Methods: 309 nurses at two regional hospitals were included and 288 hospital records of medication errors were analyzed. A medication administration error checklist and hospital records of medication errors were employed to measure the key variables. Results: The rate of medication errors among nurses was 1.4 times per month (SD = 1.3). The most commonly reported factors associated with errors were “Unit staffs do not receive enough in-services on new medications” (69.6%, n = 215) and “Poor communication between nurses and physicians” (65.4%, n = 202), while the least reported factors were “Physicians change orders frequently” (23.3%, n = 72) and “Physicians' medication orders are not clear” (24.9%, n = 77). Item analysis also showed that miscommunication with physicians (M = 4.51) and work overload (staffing) (M = 4.42) had the highest means among all factors. The most reported type of error was wrong timing of medication administration (30.9%, n = 89). Conclusion: Communication, unclear medication orders, workload and medication packaging were the main factors associated with medication administration errors.
Fast Exact Search in Hamming Space With Multi-Index Hashing
There is growing interest in representing image data and feature descriptors using compact binary codes for fast near neighbor search. Although binary codes are motivated by their use as direct indices (addresses) into a hash table, codes longer than 32 bits are not being used as such, as it was thought to be ineffective. We introduce a rigorous way to build multiple hash tables on binary code substrings that enables exact k-nearest neighbor search in Hamming space. The approach is storage efficient and straightforward to implement. Theoretical analysis shows that the algorithm exhibits sub-linear run-time behavior for uniformly distributed codes. Empirical results show dramatic speedups over a linear scan baseline for datasets of up to one billion codes of 64, 128, or 256 bits.
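The core indexing idea rests on the pigeonhole principle: if codes are split into m disjoint substrings, two codes within Hamming distance r must agree to within ⌊r/m⌋ bits on at least one substring. A minimal sketch for the simplest case r < m, where at least one substring must match exactly, follows; the variable names and equal-width split are our assumptions.

```python
# Sketch of multi-index hashing for r-neighbor search in Hamming space.
# Each code is split into m disjoint bit-fields, each indexed in its own table.
# For r < m, any code within distance r of the query matches it exactly on at
# least one substring (pigeonhole), so probing m tables yields all candidates.
from collections import defaultdict

def substrings(code, n_bits, m):
    """Split an n_bits-wide integer code into m disjoint bit-fields."""
    assert n_bits % m == 0, "sketch assumes equal-width substrings"
    w = n_bits // m
    return [(code >> (i * w)) & ((1 << w) - 1) for i in range(m)]

def build_index(codes, n_bits, m):
    tables = [defaultdict(list) for _ in range(m)]
    for idx, c in enumerate(codes):
        for i, s in enumerate(substrings(c, n_bits, m)):
            tables[i][s].append(idx)
    return tables

def r_neighbors(query, codes, tables, n_bits, m, r):
    assert r < m, "sketch covers only the exact-substring-match case"
    cand = set()
    for i, s in enumerate(substrings(query, n_bits, m)):
        cand.update(tables[i][s])
    # Verify candidates by full Hamming distance.
    return [i for i in cand if bin(codes[i] ^ query).count("1") <= r]
```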
Spoken Language Recognition: From Fundamentals to Practice
Spoken language recognition refers to the automatic process through which we determine or verify the identity of the language spoken in a speech sample. We study a computational framework that allows such a decision to be made in a quantitative manner. In recent decades, we have made tremendous progress in spoken language recognition, which benefited from technological breakthroughs in related areas, such as signal processing, pattern recognition, cognitive science, and machine learning. In this paper, we attempt to provide an introductory tutorial on the fundamentals of the theory and the state-of-the-art solutions, from both phonological and computational aspects. We also give a comprehensive review of current trends and future research directions using the language recognition evaluation (LRE) formulated by the National Institute of Standards and Technology (NIST) as the case studies.
Named Entity Linking in #Tweets with KEA
This paper presents the KEA system at the #Microposts 2016 NEEL Challenge. Its task is to recognize and type mentions from English microposts and link them to their corresponding entries in DBpedia. For this task, we have adapted our Named Entity Disambiguation tool originally designed for natural language text to the special requirements of noisy, terse, and poorly worded tweets containing special functional terms and language.
Applying Algebraic Specifications on Digital Right Management Systems
Digital Right Management (DRM) Systems have been created to meet the need for digital content protection and distribution. In this paper we present some of the directions of our ongoing research to apply algebraic specification techniques on mobile DRM systems.
Image analysis by bidimensional empirical mode decomposition
Recent developments in analysis methods for non-linear and non-stationary data have received large attention from image analysts. In 1998, Huang introduced the empirical mode decomposition (EMD) in signal processing. The EMD approach, fully unsupervised, proved reliable for monodimensional (seismic and biomedical) signals. The main contribution of our approach is to apply the EMD to texture extraction and image filtering, which are widely recognized as a difficult and challenging computer vision problem. We developed an algorithm based on bidimensional empirical mode decomposition (BEMD) to extract features at multiple scales or spatial frequencies. These features, called intrinsic mode functions, are extracted by a sifting process. The bidimensional sifting process is realized using morphological operators to detect regional maxima and radial basis functions for surface interpolation. The performance of the texture extraction algorithms using the BEMD method is demonstrated in experiments with both synthetic and natural images.
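The sifting process is easiest to see in one dimension; the sketch below illustrates 1-D sifting with spline envelopes, whereas the paper's bidimensional version replaces these with morphological extrema detection and radial-basis-function surface interpolation. The fixed iteration count is a simplifying assumption.

```python
# 1-D sifting sketch to convey the EMD idea; BEMD swaps the spline envelopes
# for morphological extrema detection plus RBF surface interpolation.
import numpy as np
from scipy.signal import argrelextrema
from scipy.interpolate import CubicSpline

def sift_imf(x, n_iter=10):
    """Extract one intrinsic mode function by iterative envelope averaging."""
    h = x.copy()
    t = np.arange(len(x))
    for _ in range(n_iter):                       # fixed count for simplicity
        maxima = argrelextrema(h, np.greater)[0]
        minima = argrelextrema(h, np.less)[0]
        if len(maxima) < 2 or len(minima) < 2:
            break
        upper = CubicSpline(maxima, h[maxima])(t)
        lower = CubicSpline(minima, h[minima])(t)
        h = h - (upper + lower) / 2.0             # subtract local mean envelope
    return h

x = np.sin(2 * np.pi * 0.05 * np.arange(200)) + 0.3 * np.random.randn(200)
imf1 = sift_imf(x)    # highest-frequency mode; the residue x - imf1 is sifted next
```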
Primary care physicians' use of an informed decision-making process for prostate cancer screening.
PURPOSE Leading professional organizations acknowledge the importance of an informed decision-making process for prostate cancer screening. We describe primary care physicians' reports of their prescreening discussions about the potential harms and benefits of prostate cancer screening. METHODS Members of the American Academy of Family Physicians National Research Network responded to a survey that included (1) an indicator of practice styles related to discussing harms and benefits of prostate-specific antigen testing and providing a screening recommendation or letting patients decide, and (2) indicators reflecting physicians' beliefs about prostate cancer screening. The survey was conducted between July 2007 and January 2008. RESULTS Of 426 physicians 246 (57.7%) completed the survey questionnaire. Compared with physicians who ordered screening without discussion (24.3%), physicians who discussed harms and benefits with patients and then let them decide (47.7%) were more likely to endorse beliefs that scientific evidence does not support screening, that patients should be told about the lack of evidence, and that patients have a right to know the limitations of screening; they were also less likely to endorse the belief that there was no need to educate patients because they wanted to be screened. Concerns about medicolegal risk associated with not screening were more common among physicians who discussed the harms and benefits and recommended screening than among physicians who discussed screening and let their patients decide. CONCLUSIONS Much of the variability in physicians' use of an informed decision-making process can be attributed to beliefs about screening. Concerns about medicolegal risk remain an important barrier for shared decision making.
Determinants of Information Channel Choice: The Impact of Task Complexity and Dispositional Character Traits
During the last decade, a large set of innovative web-enabled technologies, such as social networks and mobile technologies has considerably changed the way people communicate and express their thoughts and ideas. At the same time, these developments significantly enlarged the number of information channels people have to choose from when looking for information in organizational contexts. This endeavor is further complicated by the fact that a new generation of young high potentials, which is used to naturally applying these technologies in daily life, will soon enter the job market. The goal of this quantitative study among 171 subjects is to (1) get a clear understanding of behavioral patterns with respect to the selection of information channels depending on task complexity and personality traits and to (2) substantiate that communication theories can be applied to IS-related topics in the field of human information behavior.
MVPNet: Multi-View Point Regression Networks for 3D Object Reconstruction from A Single Image
In this paper, we address the problem of reconstructing an object’s surface from a single image using generative networks. First, we represent a 3D surface with an aggregation of dense point clouds from multiple views. Each point cloud is embedded in a regular 2D grid aligned on an image plane of a viewpoint, making the point cloud convolution-favored and ordered so as to fit into deep network architectures. The point clouds can be easily triangulated by exploiting connectivities of the 2D grids to form mesh-based surfaces. Second, we propose an encoder-decoder network that generates such kind of multiple view-dependent point clouds from a single image by regressing their 3D coordinates and visibilities. We also introduce a novel geometric loss that is able to interpret discrepancy over 3D surfaces as opposed to 2D projective planes, resorting to the surface discretization on the constructed meshes. We demonstrate that the multi-view point regression network outperforms state-of-the-art methods with a significant improvement on challenging datasets.
Quality of life several years after myocardial infarction: comparing the MONICA/KORA registry to the general population.
AIMS The aim of this study was to assess the impact of myocardial infarction (MI) on health-related quality of life (HRQL) in MI survivors measured by EuroQol (EQ-5D) and to compare it with the general population. METHODS AND RESULTS A follow-up study of all MI survivors included in the MONICA/KORA registry was performed. In total, 2950 (67.1%) patients responded. Moderate or severe problems were most frequent in the EQ-5D dimensions pain/discomfort (55.0%), anxiety/depression (29.2%), and mobility (27.9%). The mean EQ VAS score was 65.8 (SD 18.5). Main predictors of lower HRQL included older age, diabetes, increasing body mass index, current smoking, and experience of re-infarction. Type of revascularization treatment showed no impact on HRQL. Compared with the general population, adjusted EQ VAS was 6.2 (95% confidence interval 3.4-8.9) points lower in 45-year-old MI patients, converging with growing age up to the age of 80. With regard to HRQL dimensions, MI survivors had a significantly higher risk of incurring problems in the dimensions pain/discomfort, usual activities, and especially anxiety/depression, which was more pronounced at younger ages. Mobility was the single dimension in which MI showed an inverse effect. CONCLUSION MI is associated with a significant reduction in HRQL compared with the general population. The main impairments occur in the dimensions pain/discomfort, usual activities, and particularly anxiety/depression. The relative impairment decreases with higher ages.
Late Cenozoic deep weathering patterns on the Fennoscandian shield in northern Finland: A window on ice sheet bed conditions at the onset of Northern Hemisphere glaciation
The nature of the regolith that existed on the shields of the Northern Hemisphere at the onset of ice sheet glaciation is poorly constrained. In this paper, we provide the first detailed account of an exceptionally preserved, deeply weathered late Neogene landscape in the ice sheet divide zone in northern Finland. We mine data sets of drilling and pitting records gathered by the Geological Survey of Finland to reconstruct regional preglacial deep weathering patterns within a GIS framework. Using a large geochemical data set, we give standardised descriptions of saprolite geochemistry using a variant of the Weathering Index of Parker (WIP) as a proxy to assess the intensity of weathering. We also focus on mineral prospects and mines with dense pit and borehole data coverage in order to identify links between geology, topography, and weathering. Geology is closely linked to topography on the preglacial shield landscape of northern Finland and both factors influence weathering patterns. Upstanding, resistant granulite, granite, gabbro, metabasalt, and quartzite rocks were associated with fresh rock outcrops, including tors, or with thin regolith with WIP fines values above 3000 and 4000. Beneath valley floors developed along mineralised shear and fracture zones, weathering penetrated locally to depths of > 50 m and included intensely weathered kaolinitic clays with WIP fines values below 1000. Late Neogene weathering profiles were varied in character. Tripartite clay–gruss–saprock profiles occur only in limited areas. Bipartite gruss–saprock profiles were widespread, with saprock thicknesses of > 10 m. Weathering profiles included two discontinuities in texture, materials and resistance to erosion, between saprolite and saprock and between saprock and rock. Limited core recovery when drilling below the soil base in mixed rocks of the Tana Belt indicates that weathering locally penetrated deep below upper fresh rock layers. Such deep-seated weathered bands in rock represent a third set of discontinuities. Incipient weathering and supergene mineralisation also extended to depths of > 100 m in mineralised fracture zones. The thin weathering crusts found extensively beneath till may represent types of early or middle Pleistocene palaeosols. We confirm that glacial erosion has been very limited.
CRF framework for supervised preference aggregation
We develop a flexible Conditional Random Field framework for supervised preference aggregation, which combines preferences from multiple experts over items to form a distribution over rankings. The distribution is based on an energy comprised of unary and pairwise potentials, allowing us to effectively capture correlations between both items and experts. We describe procedures for learning in this model and demonstrate that inference can be done much more efficiently than in analogous models. Experiments on benchmark tasks demonstrate significant performance gains over existing rank aggregation methods.
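The energy-based distribution over rankings can be written in the standard CRF form assumed below; the paper's specific unary and pairwise potentials are not reproduced here.

```latex
% Standard CRF form we assume for the distribution over rankings \pi given the
% experts' preferences x:
E(\pi, x) = \sum_{i} \phi_u(\pi_i, x) + \sum_{i < j} \phi_p(\pi_i, \pi_j, x),
\qquad
P(\pi \mid x) = \frac{\exp\bigl(-E(\pi, x)\bigr)}{\sum_{\pi'} \exp\bigl(-E(\pi', x)\bigr)}
% \phi_u: unary potentials scoring single items; \phi_p: pairwise potentials
% capturing correlations between items (and, through x, between experts).
```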
ASR error management for improving spoken language understanding
This paper addresses the problem of automatic speech recognition (ASR) error detection and its use for improving spoken language understanding (SLU) systems. In this study, the SLU task consists of automatically extracting, from ASR transcriptions, semantic concepts and concept/value pairs in, for example, a tourist information system. An approach is proposed for enriching the set of semantic labels with error-specific labels and for using a recently proposed neural approach based on word embeddings to compute well-calibrated ASR confidence measures. Experimental results are reported showing that it is possible to significantly decrease the Concept/Value Error Rate with a state-of-the-art system, outperforming previously published performance on the same experimental data. It is also shown that, by combining an SLU approach based on conditional random fields with a neural encoder/decoder attention-based architecture, it is possible to effectively identify confidence islands and uncertain semantic output segments, which are useful for deciding appropriate error handling actions in the dialogue manager strategy.
Signed networks in social media
Relations between users on social media sites often reflect a mixture of positive (friendly) and negative (antagonistic) interactions. In contrast to the bulk of research on social networks that has focused almost exclusively on positive interpretations of links between people, we study how the interplay between positive and negative relationships affects the structure of on-line social networks. We connect our analyses to theories of signed networks from social psychology. We find that the classical theory of structural balance tends to capture certain common patterns of interaction, but that it is also at odds with some of the fundamental phenomena we observe --- particularly related to the evolving, directed nature of these on-line networks. We then develop an alternate theory of status that better explains the observed edge signs and provides insights into the underlying social mechanisms. Our work provides one of the first large-scale evaluations of theories of signed networks using on-line datasets, as well as providing a perspective for reasoning about social media sites.
Effects of crystal structure on the uptake of metals by lake trout (Salvelinus namaycush) otoliths
This is the first study to report spectroscopic and elemental analysis of aragonite and vaterite growing simultaneously and separately in both the core and the edges of the same otolith. Our investigations focused on understanding differential trace metal uptake, including the influence of the metal itself (i.e., ionic radii), the crystalline structure, and the development state of the fish. Chemistry and crystal structure of sagittal otoliths from lake trout (Salvelinus namaycush) were studied using laser ablation combined with inductively coupled plasma mass spectrometry (LA-ICPMS) and Raman spectroscopy, respectively. Analyses of the composition of vaterite and aragonite growing in the same growth ring show that smaller cations like Mg (0.86 Å) (1 Å = 0.1 nm) and Mn (0.81 Å) were more abundant in the vaterite hexagonal crystal structure, whereas larger cations such as Sr (1.32 Å) and Ba (1.49 Å) were preferentially incorporated in aragonite (orthorhombic). Similarly, the coprecipitation of aragonite and vaterite in cores and edges allowed us to demonstrate that the uptake rates (as determined by element-specific partition coefficients) for Sr and Ba were greater in aragonite than vaterite, whereas those of Mg and Mn were higher in vaterite than in aragonite.
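The element-specific partition coefficients referred to above follow the conventional definition; this restatement is ours, consistent with the abstract's usage.

```latex
% Conventional partition coefficient for a metal Me between ambient water and
% otolith CaCO3:
D_{Me} = \frac{(Me/Ca)_{\mathrm{otolith}}}{(Me/Ca)_{\mathrm{water}}}
% The abstract's finding: D_{Sr} and D_{Ba} are larger in aragonite than in
% vaterite, while D_{Mg} and D_{Mn} are larger in vaterite.
```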
Personalizing Robot Tutors to Individuals’ Learning Differences
In education research, there is a widely-cited result called "Bloom's two sigma" that characterizes the differences in learning outcomes between students who receive one-on-one tutoring and those who receive traditional classroom instruction. Tutored students scored in the 95th percentile, or two sigmas above the mean, on average, compared to students who received traditional classroom instruction. In human-robot interaction research, however, there is relatively little work exploring the potential benefits of personalizing a robot's actions to an individual's strengths and weaknesses. In this study, participants solved grid-based logic puzzles with the help of a personalized or non-personalized robot tutor. Participants' puzzle solving times were compared between two non-personalized control conditions and two personalized conditions (n=80). Although the robot's personalizations were less sophisticated than what a human tutor can do, we still witnessed a "one-sigma" improvement (68th percentile) in post-tests between treatment and control groups. We present these results as evidence that even relatively simple personalizations can yield significant benefits in educational or assistive human-robot interactions.
Modeling Multibody Dynamic Systems With Uncertainties. Part I: Theoretical and Computational Aspects
This study explores the use of generalized polynomial chaos theory for modeling complex nonlinear multibody dynamic systems in the presence of parametric and external uncertainty. The polynomial chaos framework has been chosen because it offers an efficient computational approach for the large, nonlinear multibody models of engineering systems of interest, where the number of uncertain parameters is relatively small, while the magnitude of uncertainties can be very large (e.g., vehicle-soil interaction). The proposed methodology allows the quantification of uncertainty distributions in both time and frequency domains, and enables the simulations of multibody systems to produce results with “error bars”. The first part of this study presents the theoretical and computational aspects of the polynomial chaos methodology. Both unconstrained and constrained formulations of multibody dynamics are considered. Direct stochastic collocation is proposed as a less expensive alternative to the traditional Galerkin approach. It is established that stochastic collocation is equivalent to a stochastic response surface approach. We show that multi-dimensional basis functions are constructed as tensor products of one-dimensional basis functions and discuss the treatment of polynomial and trigonometric nonlinearities. Parametric uncertainties are modeled by finite-support probability densities. Stochastic forcings are discretized using truncated Karhunen-Loeve expansions. The companion paper “Modeling Multibody Dynamic Systems With Uncertainties. Part II: Numerical Applications” illustrates the use of the proposed methodology on a selected set of test problems. The overall conclusion is that despite its limitations, polynomial chaos is a powerful approach for the simulation of multibody systems with uncertainties.
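The tensor-product construction of multi-dimensional basis functions mentioned above is straightforward to sketch; the example below uses probabilists' Hermite polynomials (appropriate for Gaussian parameters) with a total-degree truncation, both of which are our illustrative assumptions.

```python
# Sketch: multi-dimensional polynomial chaos basis functions built as tensor
# products of 1-D probabilists' Hermite polynomials; total-degree truncation
# is an assumption for illustration.
import numpy as np
from numpy.polynomial.hermite_e import HermiteE
from itertools import product

def pc_basis(dim, order):
    """Return multi-indices and callables Psi_alpha(xi) with |alpha| <= order."""
    idx = [a for a in product(range(order + 1), repeat=dim) if sum(a) <= order]
    one_d = [HermiteE.basis(d) for d in range(order + 1)]   # He_0, He_1, ...
    def make(alpha):
        # Tensor product: Psi_alpha(xi) = prod_k He_{alpha_k}(xi_k)
        return lambda xi: np.prod([one_d[a](x) for a, x in zip(alpha, xi)])
    return idx, [make(a) for a in idx]

idx, basis = pc_basis(dim=2, order=2)       # 6 basis functions for dim=2, order=2
vals = [psi(np.array([0.3, -1.2])) for psi in basis]
```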
Risk Factors Associated With Mammalian Target of Rapamycin Inhibitor Withdrawal in Kidney Transplant Recipients.
BACKGROUND Mammalian target of rapamycin inhibitors (mTORi) play an essential role as novel immunosuppressive agents in kidney transplantation (KT). Treatment cessation usually occurs after adverse effects occur. We investigated the risk factors associated with withdrawal of mTORi in KT recipients and evaluated the outcomes related to the withdrawal. METHODS The study enrolled KT recipients being followed up in a medical center in southern Taiwan from January 1999 through December 2014. RESULTS Risk factors associated with mTORi withdrawal were initial proteinuria level, higher initial serum creatinine level posttransplantation, and history of glomerulonephritis as the primary etiology of renal failure. mTORi withdrawal was associated with increased risk of graft failure (hazard ratio [HR], 9.97 [95% confidence interval (CI), 1.03-96.8]; P = .047). Higher body mass index (HR, 11.2 [95% CI, 1.63-76.6]; P = .01) and tacrolimus usage (HR, 8.30 [95% CI, 1.14-60.7]; P = .037) were associated with increased risk of new-onset diabetes after transplantation in mTORi withdrawal groups. CONCLUSIONS Proteinuria, poor graft function, and primary glomerulonephritis were associated with cessation of mTORi treatment. Earlier identification of these risk factors may prevent further adverse events and optimize transplantation outcomes after mTORi conversion.
Pseudo-task Augmentation: From Deep Multitask Learning to Intratask Sharing - and Back
Deep multitask learning boosts performance by sharing learned structure across related tasks. This paper adapts ideas from deep multitask learning to the setting where only a single task is available. The method is formalized as pseudo-task augmentation, in which models are trained with multiple decoders for each task. Pseudo-tasks simulate the effect of training towards closely related tasks drawn from the same universe. In a suite of experiments, pseudo-task augmentation improves performance on single-task learning problems. When combined with multitask learning, further improvements are achieved, including state-of-the-art performance on the CelebA dataset, showing that pseudo-task augmentation and multitask learning have complementary value. All in all, pseudo-task augmentation is a broadly applicable and efficient way to boost performance in deep learning systems.
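The core mechanism, a single shared model body trained through several decoders on the same task, can be sketched as follows; the layer sizes and the uniform averaging of decoder losses are placeholder assumptions.

```python
# Sketch of pseudo-task augmentation: one shared encoder trained through
# multiple decoders on the same task; sizes and uniform loss averaging are
# placeholder assumptions.
import torch
import torch.nn as nn

class PTA(nn.Module):
    def __init__(self, d_in=32, d_hid=64, n_classes=10, n_decoders=4):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(d_in, d_hid), nn.ReLU())
        self.decoders = nn.ModuleList(
            nn.Linear(d_hid, n_classes) for _ in range(n_decoders)
        )
    def forward(self, x):
        h = self.encoder(x)
        return [dec(h) for dec in self.decoders]   # one output per pseudo-task

model, loss_fn = PTA(), nn.CrossEntropyLoss()
x, y = torch.randn(8, 32), torch.randint(0, 10, (8,))
# Each decoder sees the same labels; their losses are averaged into one objective.
loss = torch.stack([loss_fn(out, y) for out in model(x)]).mean()
loss.backward()
```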
Data Provenance: A Categorization of Existing Approaches
In many application areas, like e-science and data warehousing, detailed information about the origin of data is required. This kind of information is often referred to as data provenance or data lineage. The provenance of a data item includes information about the processes and source data items that lead to its creation and current representation. The diversity of data representation models and application domains has led to a number of more or less formal definitions of provenance. Most of them are limited to a special application domain, data representation model or data processing facility. Not surprisingly, the associated implementations are also restricted to some application domain and depend on a special data model. In this paper we give a survey of data provenance models and prototypes, present a general categorization scheme for provenance models and use this categorization scheme to study the properties of the existing approaches. This categorization enables us to distinguish between different kinds of provenance information and could lead to a better understanding of provenance in general. Besides the categorization of provenance types, it is important to include the storage, transformation and query requirements for the different kinds of provenance information and application domains in our considerations. The analysis of existing approaches will assist us in revealing open research problems in the area of data provenance.
Ongoing Evolution of Visual SLAM from Geometry to Deep Learning: Challenges and Opportunities
Visual simultaneous localization and mapping (SLAM) has been investigated in the robotics community for decades. Significant progress and achievements on visual SLAM have been made, with geometric model-based techniques becoming increasingly mature and accurate. However, they tend to be fragile under challenging environments. Recently, there is a trend to develop data-driven approaches, e.g., deep learning, for visual SLAM problems with more robust performance. This paper aims to witness the ongoing evolution of visual SLAM techniques from geometric model-based to data-driven approaches by providing a comprehensive technical review. Our contribution is not just a compilation of state-of-the-art end-to-end deep learning SLAM work, but also an insight into the underlying mechanism of deep learning SLAM. For such a purpose, we provide a concise overview of geometric model-based approaches first. Next, we identify visual depth estimation using deep learning as a starting point of the evolution. It is from depth estimation that ego-motion or pose estimation techniques using deep learning flourish rapidly. In addition, we strive to link semantic segmentation using deep learning with emergent semantic SLAM techniques to shed light on simultaneous estimation of ego-motion and high-level understanding. Finally, we visualize some further opportunities in this research direction.
Basic human values: an overview
Introduction to the Values Theory. When we think of our values, we think of what is important to us in our lives (e.g., security, independence, wisdom, success, kindness, pleasure). Each of us holds numerous values with varying degrees of importance. A particular value may be very important to one person, but unimportant to another. Consensus regarding the most useful way to conceptualize basic values has emerged gradually since the 1950s. We can summarize the main features of the conception of basic values implicit in the writings of many theorists and researchers as follows:
A review of feature selection methods on synthetic data
With the advent of high dimensionality, adequate identification of relevant features of the data has become indispensable in real-world scenarios. In this context, the importance of feature selection is beyond doubt and different methods have been developed. However, with such a vast body of algorithms available, choosing the adequate feature selection method is not an easy question to solve, and it is necessary to check their effectiveness in different situations. Nevertheless, the assessment of relevant features is difficult in real datasets, so an interesting option is to use artificial data. In this paper, several synthetic datasets are employed for this purpose, aiming at reviewing the performance of feature selection methods in the presence of a growing number of irrelevant features, noise in the data, redundancy and interaction between attributes, as well as a small ratio between the number of samples and the number of features. Seven filters, two embedded methods, and two wrappers are applied over eleven synthetic datasets, tested by four classifiers, so as to be able to choose a robust method, paving the way for its application to real datasets.
Neuroevolution: from architectures to learning
Artificial neural networks (ANNs) are applied to many real-world problems, ranging from pattern classification to robot control. In order to design a neural network for a particular task, the choice of an architecture (including the choice of a neuron model), and the choice of a learning algorithm have to be addressed. Evolutionary search methods can provide an automatic solution to these problems. New insights in both neuroscience and evolutionary biology have led to the development of increasingly powerful neuroevolution techniques over the last decade. This paper gives an overview of the most prominent methods for evolving ANNs with a special focus on recent advances in the synthesis of learning architectures.
GLOBALIZED SECURITIES MARKETS AND ACCOUNTING: HOW MANY STANDARDS?
SYNOPSIS This paper examines the relationship between globalized securities markets and accounting systems. After describing that globalization, I discuss the reasons for the increase in globalization over the past few decades: changes in government policies and rapid improvements in the technologies (telecommunications and data processing) that underlie finance. I then develop the idea of a financial reporting system as a "network", with the accounting system providing the standards that determine the compatibility between the components of the network. Using this framework, I show the pluses and minuses of a single accounting system versus multiple accounting systems and illustrate the current "systems competition" among national securities markets and their accounting systems. Though a single accounting system decreases comparability costs and thereby encourages globalization, while multiple accounting systems increase comparability costs and thereby impede globalization, those multiple accounting systems also permit national adaptation to national circumstances and permit greater opportunities for experimentation and innovation. On balance, a competitive framework is preferable. The current limited and muted competition between accounting systems could be enhanced by the introduction of the IASB's International Accounting Standards as a second allowable system in the U.S. alongside U.S. GAAP.
Space and attention in parietal cortex.
The space around us is represented not once but many times in parietal cortex. These multiple representations encode locations and objects of interest in several egocentric reference frames. Stimulus representations are transformed from the coordinates of receptor surfaces, such as the retina or the cochlea, into the coordinates of effectors, such as the eye, head, or hand. The transformation is accomplished by dynamic updating of spatial representations in conjunction with voluntary movements. This direct sensory-to-motor coordinate transformation obviates the need for a single representation of space in environmental coordinates. In addition to representing object locations in motoric coordinates, parietal neurons exhibit strong modulation by attention. Both top-down and bottom-up mechanisms of attention contribute to the enhancement of visual responses. The salience of a stimulus is the primary factor in determining the neural response to it. Although parietal neurons represent objects in motor coordinates, visual responses are independent of the intention to perform specific motor acts.
Design, Analysis, and Optimization of Ironless Stator Permanent Magnet Machines
This paper presents a methodology for the design, analysis, and graphical optimization of ironless brushless permanent magnet machines primarily for generator applications. Magnetic flux in this class of electromagnetic machine tends to be 3-D due to the lack of conventional iron structures and the absence of a constrained magnetic flux path. The proposed methodology includes comprehensive geometric, magnetic and electrical dimensioning followed by detailed 3-D finite element (FE) modeling of a base machine for which parameters are determined. These parameters are then graphically optimized within sensible volumetric and electromagnetic constraints to arrive at improved design solutions. This paper considers an ironless machine design to validate the 3-D FE model to optimize power conversion for the case of a low-speed, ironless stator generator. The machine configuration investigated in this paper has concentric arrangement of the rotor and the stator, solenoid-shaped coils, and a simple mechanical design considered for ease of manufacture and maintenance. Using performance and material effectiveness as the overriding optimization criteria, this paper suggests optimal designs configurations featuring two different winding arrangements, i.e., radial and circumferentially mounted. Performance and material effectiveness of the studied ironless stator designs are compared to published ironless machine configurations.
Face Recognition for Social Media with Mobile Cloud Computing
Social networking has become today's lifestyle, and anyone can easily receive information about people around the world. It is very useful if a personal identity can be obtained from a mobile device and also connected to social networking. Therefore, we propose a face recognition system for mobile devices that incorporates cloud computing services. Our system is designed as an application developed on Android mobile devices which uses the Face.com API as an image data processor for the cloud computing services. We also apply augmented reality as an information viewer for users. Testing shows that the system is able to recognize face samples with an average accuracy of 85%, a total computation time for face recognition of 7.45 seconds, and an average augmented reality translation time of 1.03 seconds to retrieve a person's information.
Power decoupling control method for an isolated single-phase ac-to-dc converter based on high-frequency cycloconverter topology
This paper proposes a new power decoupling method for a high-frequency cycloconverter. The cycloconverter consists of two half-bridge inverters, two input filter capacitors, and a series-resonant circuit, which enables direct conversion of the single-phase line-frequency ac input to the high-frequency ac output. The proposed power decoupling method stores the input power ripple at double the line frequency in the filter capacitors. Therefore, the proposed method achieves a unity power factor at the ac input and a constant amplitude in the high-frequency output without any additional switching device or energy storage element. This paper theoretically discusses the principle and operating performance of the proposed method and confirms its effectiveness in experiments using an isolated ac-to-dc converter based on the high-frequency cycloconverter. As a result, the proposed power decoupling method effectively improved the displacement factor of the ac-input current to more than 0.99 and reduced the voltage ripple in the dc output to 7%.
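The double-line-frequency ripple that the method buffers follows from a standard single-phase power identity, not specific to this paper; at unity power factor:

```latex
% Instantaneous single-phase input power at unity power factor:
p(t) = \hat{V}\sin(\omega t)\cdot \hat{I}\sin(\omega t)
     = \frac{\hat{V}\hat{I}}{2}\bigl(1 - \cos 2\omega t\bigr)
% The constant term is the transferred power; the \cos 2\omega t term is the
% double-line-frequency ripple that the input filter capacitors absorb.
```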
Using Content-Based Filtering for Recommendation
Finding information on a large web site can be a difficult and time-consuming process. Recommender systems can help users find information by providing them with personalized suggestions. In this paper, the recommender system PRES is described, which uses content-based filtering techniques to suggest small articles about home improvements. A domain such as this implies that the user model has to be very dynamic and learned from positive feedback only. The relevance feedback method is a good candidate for learning such a user model, as it is both efficient and dynamic.
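The relevance feedback method named above is classically realized as a Rocchio-style profile update; a sketch restricted to positive feedback, matching the setting described, is given below. The weighting constants and the vector representation are hypothetical, not PRES's actual parameters.

```python
# Rocchio-style user profile update from positive feedback only, as a sketch of
# the relevance feedback method named in the abstract; alpha/beta are hypothetical.
import numpy as np

def update_profile(profile, liked_doc_vec, alpha=0.8, beta=0.2):
    """Decay the old profile and move it toward the liked document's vector."""
    return alpha * profile + beta * liked_doc_vec

def recommend(profile, doc_matrix, k=5):
    """Rank documents (rows of a TF-IDF matrix) by cosine similarity to the profile."""
    sims = doc_matrix @ profile / (
        np.linalg.norm(doc_matrix, axis=1) * np.linalg.norm(profile) + 1e-12
    )
    return np.argsort(-sims)[:k]
```

The decay factor keeps the profile dynamic: interests expressed long ago fade unless reinforced by new positive feedback, which matches the abstract's requirement of a very dynamic user model.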
Structure and function of the global ocean microbiome
Microbes are dominant drivers of biogeochemical processes, yet drawing a global picture of functional diversity, microbial community structure, and their ecological determinants remains a grand challenge. We analyzed 7.2 terabases of metagenomic data from 243 Tara Oceans samples from 68 locations in epipelagic and mesopelagic waters across the globe to generate an ocean microbial reference gene catalog with >40 million nonredundant, mostly novel sequences from viruses, prokaryotes, and picoeukaryotes. Using 139 prokaryote-enriched samples, containing >35,000 species, we show vertical stratification with epipelagic community composition mostly driven by temperature rather than other environmental factors or geography. We identify ocean microbial core functionality and reveal that >73% of its abundance is shared with the human gut microbiome despite the physicochemical differences between these two ecosystems.
Study on stock price prediction based on BP Neural Network
In this paper, two kinds of methods, namely the additional momentum method and the self-adaptive learning rate adjustment method, are used to improve the BP algorithm. Considering the diversity of factors that affect stock prices, a Single-Input Prediction Model (SIPM) and a Multi-Input Prediction Model (MIPM) are established to implement short-term forecasts for SDIC Electric Power (600886) shares and Bank of China (601988) shares in 2009. Experiments indicate that the improved BP model has superior performance to the basic BP model, and that MIPM is better than SIPM; the best performance is obtained by using MIPM together with the improved prediction model.
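Both improvements are standard and compact to state; the sketch below shows one gradient step with an additional momentum term and a simple self-adaptive learning-rate rule. The constants are conventional choices, not the paper's.

```python
# Sketch of the two classic BP improvements named above: an additional momentum
# term, and a self-adaptive learning rate that grows when the training error
# falls and shrinks when it rises. Constants are conventional, not the paper's.
import numpy as np

def bp_step(w, grad, velocity, lr, err, prev_err,
            momentum=0.9, up=1.05, down=0.7):
    # Self-adaptive learning rate adjustment based on the error trend.
    lr = lr * up if err < prev_err else lr * down
    # Gradient descent with the additional momentum term.
    velocity = momentum * velocity - lr * grad
    return w + velocity, velocity, lr
```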
A short and very short form of the physical self-inventory for adolescents: Development and factor validity
Objectives: The Physical Self-Inventory (PSI)—a French adaptation of Fox and Corbin's [1989. The Physical Self-Perception Profile: Development and preliminary validation. Journal of Sport and Exercise Psychology, 11, 408–430] Physical Self-Perception Profile—was originally developed for use with adults, and no study has systematically verified its psychometric properties in adolescent populations. Additionally, this instrument remains too long to be efficiently completed in combination with multiple other instruments within extensive longitudinal or idiographic studies. The purpose of the present investigation was thus threefold: (a) testing the factor validity and reliability of the original PSI in a sample of adolescents; (b) developing and testing the factor validity and reliability of a very short (i.e., two items per scale) form of the PSI in a sample of adolescents; and (c) testing the equivalence of the factor pattern, structural parameters, latent mean structure, and criterion-related validity of both forms of the PSI. Design: Structural equation modeling approach. Method: Two samples participated in this series of studies. In Study 1, a sample of 1018 adolescents completed the adult PSI (25 items) and was randomly split into two sub-samples. In Study 2, a new sample of 320 adolescents completed a very short form of the PSI (PSI-VSF). Factorial validity and gender and multigroup invariance of these instruments (PSI, PSI-VSF) were tested using confirmatory factor analysis (CFA) and structural equation modeling (SEM). Results: In Study 1, CFA and SEM analyses provided evidence for the factor validity and reliability of a short (PSI-SF: 18 items) and very short (PSI-VSF: 12 items) form of the PSI for adolescents. In Study 2, CFAs and SEMs supported the equivalence of the factor pattern, structural parameters, latent mean structure, and criterion-related validity of both forms of the PSI (i.e., PSI-SF, PSI-VSF).
Cryptology and the origins of spread spectrum: Engineers during World War II developed an unbreakable scrambler to guarantee secure communications between Allied leaders; actress Hedy Lamarr played a role in the technology
The author describes the development of the SIGSALY scrambler during World War II, which was used for conversations between Churchill and Roosevelt and provides an early example of spread-spectrum communication.
Neighborhood poverty and suicidal thoughts and attempts in late adolescence.
BACKGROUND Suicide tends to concentrate in disadvantaged neighborhoods, and neighborhood disadvantage is associated with many important risk factors for youth suicide. However, no study has directly investigated the link between neighborhood poverty and youth suicidal behaviors while controlling for pre-existing vulnerabilities. The objective of this study was to determine whether living in a poor neighborhood is associated with suicidal thoughts and attempts in late adolescence over and above background vulnerabilities, and whether this association can be explained by late-adolescence psychosocial risks: depression, social support, negative life events (NLEs), delinquent activities, substance abuse, and exposure to suicide. The potential moderating role of neighborhood poverty was also examined. METHOD A subset of 2776 participants was selected from the Canadian National Longitudinal Survey of Children and Youth (NLSCY). Late-adolescence suicidal behaviors and risk factors were self-reported. The 2001 Canadian Census was used to characterize neighborhoods during early and middle adolescence. Late-childhood family and individual controls were assessed through parent report. RESULTS At the bivariate level, the odds of reporting suicidal thoughts were about twice as high in poor as in non-poor neighborhoods, and the odds of attempting suicide were about four times higher. After controlling for background vulnerabilities, neighborhood poverty remained significantly associated with both suicidal thoughts and attempts. However, these associations were not explained by late-adolescence psychosocial risks. Rather, youth living in poor neighborhoods may be at greater risk through the amplification of other risk factors in disadvantaged neighborhoods. CONCLUSIONS Potential explanations for the increased vulnerability of youth living in poor neighborhoods are discussed.
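The two-stage analysis the abstract describes (unadjusted odds ratios, then adjustment for measured risks) can be sketched with a pair of logistic regressions. The code below uses simulated data; the variable names, effect sizes, and the single control variable are invented for illustration and do not reproduce the NLSCY variables or the study's full control set.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2776  # same size as the analytic subsample; data entirely simulated
poor_nbhd = rng.binomial(1, 0.25, n)
depression = rng.normal(0, 1, n) + 0.3 * poor_nbhd   # assumed correlation
logit_p = -2.5 + 0.7 * poor_nbhd + 0.8 * depression  # invented effects
thoughts = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))
df = pd.DataFrame({"thoughts": thoughts, "poor_nbhd": poor_nbhd,
                   "depression": depression})

# Bivariate model: exp(beta) for poor_nbhd is the unadjusted odds ratio.
m0 = smf.logit("thoughts ~ poor_nbhd", data=df).fit(disp=0)
# Adjusted model: a poverty coefficient that survives adjustment is the
# "over and above" association the abstract describes.
m1 = smf.logit("thoughts ~ poor_nbhd + depression", data=df).fit(disp=0)
print("unadjusted OR:", np.exp(m0.params["poor_nbhd"]))
print("adjusted OR:  ", np.exp(m1.params["poor_nbhd"]))
```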
Newborn Body Fat: Associations with Maternal Metabolic State and Placental Size
BACKGROUND Neonatal body composition has implications for the health of the newborn in both a short- and a long-term perspective. The objective of the current study was, first, to explore the association between maternal BMI and BMI-related metabolic parameters and neonatal percentage body fat, and to determine to what extent any associations were modified by adjusting for placental weight. Second, we examined the relations between BMI-related maternal metabolic parameters and placental weight. METHODS The present work was performed in a subcohort (n = 207) of the STORK study, an observational, prospective study on the determinants of fetal growth and birthweight in healthy pregnancies at Oslo University Hospital, Norway. Fasting glucose, insulin, triglycerides, free fatty acids, HDL- and total cholesterol were measured at weeks 30-32. Newborn body composition was determined by Dual-Energy X-Ray Absorptiometry (DXA). The placenta was weighed at birth. Linear regression models were used with newborn fat percentage and placental weight as the main outcomes. RESULTS Maternal BMI, fasting glucose, and gestational age were independently associated with neonatal fat percentage. However, when placental weight was introduced as a covariate, only placental weight and gestational age remained significant. In the univariate model, the determinants of placental weight included BMI, insulin, triglycerides, total and HDL-cholesterol (negatively), gestational weight gain, and parity. In the multivariable model, BMI, total cholesterol, HDL-cholesterol, gestational weight gain, and parity remained independent covariates. CONCLUSION Maternal BMI and fasting glucose were independently associated with newborn percentage fat. This effect disappeared when placental weight was introduced as a covariate. Several metabolic factors associated with maternal BMI were associated with placental weight, but not with neonatal body fat. Our findings are consistent with a concept that the effects of maternal BMI and a number of BMI-related metabolic factors on fetal fat accretion act, to a significant extent, by modifying placental weight.
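The mediation-style comparison described here (BMI associated with newborn fat until placental weight enters the model) amounts to fitting two nested linear models. The sketch below uses simulated data with invented effect sizes, so it illustrates only the modeling step, not the STORK results.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 207  # size of the STORK subcohort; all values below are simulated
bmi = rng.normal(24, 3, n)
glucose = rng.normal(4.5, 0.4, n) + 0.03 * (bmi - 24)
gest_age = rng.normal(280, 8, n)
# Assume, as the abstract concludes, that BMI acts on newborn fat largely
# by modifying placental weight (grams).
placenta = 650 + 12 * (bmi - 24) + rng.normal(0, 80, n)
fat_pct = (8 + 0.004 * (placenta - 650) + 0.05 * (gest_age - 280)
           + rng.normal(0, 1, n))
df = pd.DataFrame({"fat_pct": fat_pct, "bmi": bmi, "glucose": glucose,
                   "gest_age": gest_age, "placenta": placenta})

m1 = smf.ols("fat_pct ~ bmi + glucose + gest_age", data=df).fit()
m2 = smf.ols("fat_pct ~ bmi + glucose + gest_age + placenta", data=df).fit()
# The BMI coefficient should shrink toward zero once placental weight
# enters the model, mirroring the reported attenuation.
print(m1.params["bmi"], m2.params["bmi"])
```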
Anterior corpectomy via the mini-open, extreme lateral, transpsoas approach combined with short-segment posterior fixation for single-level traumatic lumbar burst fractures: analysis of health-related quality of life outcomes and patient satisfaction.
OBJECTIVE The authors present clinical outcome data and satisfaction of patients who underwent minimally invasive vertebral body corpectomy and cage placement via a mini-open, extreme lateral, transpsoas approach and posterior short-segment instrumentation for lumbar burst fractures. METHODS Patients with unstable lumbar burst fractures who underwent corpectomy and anterior column reconstruction via a mini-open, extreme lateral, transpsoas approach with short-segment posterior fixation were reviewed retrospectively. Demographic information, operative parameters, perioperative radiographic measurements, and complications were analyzed. Patient-reported outcome instruments (Oswestry Disability Index [ODI], 12-Item Short Form Health Survey [SF-12]) and an anterior scar-specific patient satisfaction questionnaire were recorded at the latest follow-up. RESULTS Twelve patients (7 men, 5 women, average age 42 years, range 22-68 years) met the inclusion criteria. Lumbar corpectomies with anterior column support were performed (L-1, n = 8; L-2, n = 2; L-3, n = 2) and supplemented with short-segment posterior instrumentation (4 open, 8 percutaneous). Four patients had preoperative neurological deficits, all of which improved after surgery. No new neurological complications were noted. The anterior incision on average was 6.4 cm (range 5-8 cm) in length, caused mild pain and disability, and was aesthetically acceptable to the large majority of patients. Three patients required chest tube placement for pleural violation, and 1 patient required reoperation for cage subsidence/hardware failure. Average clinical follow-up was 38 months (range 16-68 months), and average radiographic follow-up was 37 months (range 6-68 months). Preoperative lumbar lordosis and focal lordosis were significantly improved/maintained after surgery. Patients were satisfied with their outcomes, had minimal/moderate disability (average ODI score 20, range 0-52), and had good physical (SF-12 physical component score 41.7% ± 10.4%) and mental health outcomes (SF-12 mental component score 50.2% ± 11.6%) after surgery. CONCLUSIONS Anterior corpectomy and cage placement via a mini-open, extreme lateral, transpsoas approach supplemented by short-segment posterior instrumentation is a safe, effective alternative to conventional approaches in the treatment of single-level unstable burst fractures and is associated with excellent functional outcomes and patient satisfaction.
Learning an Executable Neural Semantic Parser
This article describes a neural semantic parser that maps natural language utterances onto logical forms that can be executed against a task-specific environment, such as a knowledge base or a database, to produce a response. The parser generates tree-structured logical forms with a transition-based approach, combining a generic tree-generation algorithm with a domain-general grammar defined by the logical language. The generation process is modeled by structured recurrent neural networks, which provide a rich encoding of the sentential context and generation history for making predictions. To tackle mismatches between natural language and logical form tokens, various attention mechanisms are explored. Finally, we consider different training settings for the neural semantic parser, including fully supervised training where annotated logical forms are given, weakly supervised training where denotations are provided, and distant supervision where only unlabeled sentences and a knowledge base are available. Experiments across a wide range of data sets demonstrate the effectiveness of our parser.
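To make the transition-based generation concrete, here is a toy sketch of a tree-building transition system: NT opens a subtree, TER emits a leaf, and RED closes the current subtree. The action names and the example logical form are illustrative, and in the parser itself each action would be scored by recurrent networks conditioned on the utterance and the generation history rather than supplied by hand.

```python
def execute(actions):
    """Build a tree-structured logical form from a transition sequence."""
    root = []
    stack = [root]                 # stack of partially built subtrees
    for act, arg in actions:
        if act == "NT":            # open a nonterminal, e.g. a predicate
            node = [arg]
            stack[-1].append(node)
            stack.append(node)
        elif act == "TER":         # emit a terminal, e.g. an entity
            stack[-1].append(arg)
        elif act == "RED":         # reduce: close the topmost open subtree
            stack.pop()
    return root[0]

# Toy derivation for a form like (hosts city_var X):
actions = [("NT", "hosts"), ("TER", "city_var"), ("TER", "X"), ("RED", None)]
print(execute(actions))            # ['hosts', 'city_var', 'X']
```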
Strategies for the rise of high-tech academia in the 21st century
Bing Sheu, Chair Professor and University Strategy Advisor, National Chiao Tung University; Chung-Yu Wu, Dean of the EECS College, and Chin-Teng Lin, Associate Dean of the EECS College, National Chiao Tung University; Bin-Du Liu, Professor, National Cheng Kung University; Murk Liuo, Professor, Institute of Information Sciences, Academia Sinica; Bill Tai, Jui-Lin Lui, and Rong-Jim Chen, Professors, National United University.
Prevention of migraine in the pill-free interval of combined oral contraceptives: a double-blind, placebo-controlled pilot study using natural oestrogen supplements.
CONTEXT Migraine in the pill-free interval of combined oral contraceptives is reported by many women, but there is little published information on possible mechanisms and treatments. OBJECTIVE To determine whether the use of natural oestrogen patches affected the occurrence and severity of migraine during the pill-free interval. DESIGN A double-blind, placebo-controlled, randomised, crossover study. SETTING The City of London Migraine Clinic. PARTICIPANTS Fourteen women with migraine during the pill-free interval. INTERVENTIONS 50 microg oestradiol patches (Evorel) used during the pill-free interval for two cycles versus placebo for two cycles (four cycles in total). MAIN OUTCOME MEASURES Number of pill-free intervals (zero, one or two) during which migraine occurred; number of days of migraine; severity of migraine; number of days of migraine accompanied by nausea, vomiting and/or photophobia. RESULTS Complete data were available for 12 women, and for two cycles for one woman. Use of 50 microg oestrogen patches during the pill-free interval showed a trend towards reducing the frequency and severity of migraine. DISCUSSION These results were not as good as expected. We had originally aimed to recruit 20 eligible women, but only 14 were recruited and only 12 completed the study with full data for analysis. CONCLUSION The results of this pilot study suggest that use of 50 microg oestrogen patches during the pill-free interval may reduce the frequency and severity of migraine at that time. This study should be repeated with larger numbers of women and a higher dose of oestrogen.