query_id: string (length 32)
query: string (length 6 to 3.9k)
positive_passages: list (length 1 to 21)
negative_passages: list (length 10 to 100)
subset: string (7 classes)
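
The columns above describe a passage-reranking layout: each row pairs one query with a small set of relevant (positive) passages and a larger pool of non-relevant (negative) passages, each passage being a record with docid, text, and title fields, while subset names which of the 7 source collections the row belongs to (e.g. scidocsrr in the rows below). A minimal sketch of loading and inspecting one such row with the Hugging Face datasets library follows; the repository id and split name are placeholders, since the actual dataset path is not given here.

from datasets import load_dataset

# Hypothetical repository id and split -- substitute the real dataset path.
ds = load_dataset("your-org/your-reranking-dataset", split="train")

row = ds[0]
print(row["query_id"])                 # 32-character hex identifier
print(row["query"])                    # free-text query (6 to ~3.9k characters)
print(len(row["positive_passages"]))   # 1 to 21 relevant passages
print(len(row["negative_passages"]))   # 10 to 100 non-relevant passages
print(row["subset"])                   # one of 7 subset labels, e.g. "scidocsrr"

# Each passage entry carries "docid", "text", and "title" keys.
pos = row["positive_passages"][0]
print(pos["docid"], pos["title"], pos["text"][:80])

In a typical use of rows like these, a reranking model scores the query against each passage's text and is trained or evaluated on its ability to rank the positive passages above the negatives.
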
de5fcd65a604267b4204cd914a8492ae
Darknet and deepnet mining for proactive cybersecurity threat intelligence
[ { "docid": "c70466f8b1e70fcdd4b7fe3f2cb772b2", "text": "We present Tor, a circuit-based low-latency anonymous communication service. This second-generation Onion Routing system addresses limitations in the original design. Tor adds perfect forward secrecy, congestion control, directory servers, integrity checking, configurable exit policies, and a practical design for rendezvous points. Tor works on the real-world Internet, requires no special privileges or kernel modifications, requires little synchronization or coordination between nodes, and provides a reasonable tradeoff between anonymity, usability, and efficiency. We briefly describe our experiences with an international network of more than a dozen hosts. We close with a list of open problems in anonymous communication.", "title": "" }, { "docid": "88421f4be8de411ce0fe0c5e2e4e60c0", "text": "The organization of HTML into a tag tree structure, which is rendered by browsers as roughly rectangular regions with embedded text and HREF links, greatly helps surfers locate and click on links that best satisfy their information need. Can an automatic program emulate this human behavior and thereby learn to predict the relevance of an unseen HREF target page w.r.t. an information need, based on information limited to the HREF source page? Such a capability would be of great interest in focused crawling and resource discovery, because it can fine-tune the priority of unvisited URLs in the crawl frontier, and reduce the number of irrelevant pages which are fetched and discarded.", "title": "" }, { "docid": "09dfe9e0123dd3f48898b172dc446415", "text": "Cybersecurity is a problem of growing relevance that impacts all facets of society. As a result, many researchers have become interested in studying cybercriminals and online hacker communities in order to develop more effective cyber defenses. In particular, analysis of hacker community contents may reveal existing and emerging threats that pose great risk to individuals, businesses, and government. Thus, we are interested in developing an automated methodology for identifying tangible and verifiable evidence of potential threats within hacker forums, IRC channels, and carding shops. To identify threats, we couple machine learning methodology with information retrieval techniques. Our approach allows us to distill potential threats from the entirety of collected hacker contents. We present several examples of identified threats found through our analysis techniques. Results suggest that hacker communities can be analyzed to aid in cyber threat detection, thus providing promising direction for future work.", "title": "" } ]
[ { "docid": "0994065c757a88373a4d97e5facfee85", "text": "Scholarly literature suggests digital marketing skills gaps in industry, but these skills gaps are not clearly identified. The research aims to specify any digital marketing skills gaps encountered by professionals working in communication industries. In-depth interviews were undertaken with 20 communication industry professionals. A focus group followed, testing the rigour of the data. We find that a lack of specific technical skills; a need for best practice guidance on evaluation metrics, and a lack of intelligent futureproofing for dynamic technological change and development are skills gaps currently challenging the communication industry. However, the challenge of integrating digital marketing approaches with established marketing practice emerges as the key skills gap. Emerging from the key findings, a Digital Marketer Model was developed, highlighting the key competencies and skills needed by an excellent digital marketer. The research concludes that guidance on best practice, focusing upon evaluation metrics, futureproofing and strategic integration, needs to be developed for the communication industry. The Digital Marketing Model should be subject to further testing in industry and academia. Suggestions for further research are discussed.", "title": "" }, { "docid": "c4d8e95a72bbd6e6a35b947b7b1f8548", "text": "Deep Convolutional Neural Networks (DCNN) have proven to be very effective in many pattern recognition applications, such as image classification and speech recognition. Due to their computational complexity, DCNNs demand implementations that utilize custom hardware accelerators to meet performance and energy-efficiency constraints. In this paper we propose an FPGA-based accelerator architecture which leverages all sources of parallelism in DCNNs. We develop analytical feasibility and performance estimation models that take into account various design and platform parameters. We also present a design space exploration algorithm for obtaining the implementation with the highest performance on a given platform. Simulation results with a real-life DCNN demonstrate that our accelerator outperforms other competing approaches, which disregard some sources of parallelism in the application. Most notably, our accelerator runs 1.9× faster than the state-of-the-art DCNN accelerator on the same FPGA device.", "title": "" }, { "docid": "d647fc2b5635a3dfcebf7843fef3434c", "text": "Touch is our primary non-verbal communication channel for conveying intimate emotions and as such essential for our physical and emotional wellbeing. In our digital age, human social interaction is often mediated. However, even though there is increasing evidence that mediated touch affords affective communication, current communication systems (such as videoconferencing) still do not support communication through the sense of touch. As a result, mediated communication does not provide the intense affective experience of co-located communication. The need for ICT mediated or generated touch as an intuitive way of social communication is even further emphasized by the growing interest in the use of touch-enabled agents and robots for healthcare, teaching, and telepresence applications. Here, we review the important role of social touch in our daily life and the available evidence that affective touch can be mediated reliably between humans and between humans and digital agents. 
We base our observations on evidence from psychology, computer science, sociology, and neuroscience with focus on the first two. Our review shows that mediated affective touch can modulate physiological responses, increase trust and affection, help to establish bonds between humans and avatars or robots, and initiate pro-social behavior. We argue that ICT mediated or generated social touch can (a) intensify the perceived social presence of remote communication partners and (b) enable computer systems to more effectively convey affective information. However, this research field on the crossroads of ICT and psychology is still embryonic and we identify several topics that can help to mature the field in the following areas: establishing an overarching theoretical framework, employing better researchmethodologies, developing basic social touch building blocks, and solving specific ICT challenges.", "title": "" }, { "docid": "34f7878d3c4775899bbc189ac192004a", "text": "The Dutch-Belgian Randomized Lung Cancer Screening Trial (Dutch acronym: NELSON study) was designed to investigate whether screening for lung cancer by low-dose multidetector computed tomography (CT) in high-risk subjects will lead to a decrease in 10-year lung cancer mortality of at least 25% compared with a control group without screening. Since the start of the NELSON study in 2003, 7557 participants underwent CT screening, with scan rounds in years 1, 2, 4 and 6. In the current review, the design of the NELSON study including participant selection and the lung nodule management protocol, as well as results on validation of CT screening and first results on lung cancer screening are described.", "title": "" }, { "docid": "f6f6f322118f5240aec5315f183a76ab", "text": "Learning from data sets that contain very few instances of the minority class usually produces biased classifiers that have a higher predictive accuracy over the majority class, but poorer predictive accuracy over the minority class. SMOTE (Synthetic Minority Over-sampling Technique) is specifically designed for learning from imbalanced data sets. This paper presents a modified approach (MSMOTE) for learning from imbalanced data sets, based on the SMOTE algorithm. MSMOTE not only considers the distribution of minority class samples, but also eliminates noise samples by adaptive mediation. The combination of MSMOTE and AdaBoost are applied to several highly and moderately imbalanced data sets. The experimental results show that the prediction performance of MSMOTE is better than SMOTEBoost in the minority class and F-values are also improved.", "title": "" }, { "docid": "02c698f2509f87014539a17d8ad1d487", "text": "Foot-and-mouth disease (FMD) is a highly contagious disease of cloven-hoofed animals. The disease affects many areas of the world, often causing extensive epizootics in livestock, mostly farmed cattle and swine, although sheep, goats and many wild species are also susceptible. In countries where food and farm animals are essential for subsistence agriculture, outbreaks of FMD seriously impact food security and development. In highly industrialized developed nations, FMD endemics cause economic and social devastation mainly due to observance of health measures adopted from the World Organization for Animal Health (OIE). High morbidity, complex host-range and broad genetic diversity make FMD prevention and control exceptionally challenging. 
In this article we review multiple vaccine approaches developed over the years ultimately aimed to successfully control and eradicate this feared disease.", "title": "" }, { "docid": "5974317a06cdd3031308ee9d62f856f8", "text": "A new method is presented for performing rapid and accurate numerical estimation. The method is derived from an area of human cognitive psychology called preattentive processing. Preattentive processing refers to an initial organization of the visual field based on cognitive operations believed to be rapid, automatic, and spatially parallel. Examples of visual features that can be detected in this way include hue, intensity, orientation, size, and motion. We beleive that studies from preattentive vision should be used to assist in the design of visualization tools, especially those for which high-speed target detection, boundary identification, and region detection are important. In our present study, we investigated two known preattentive features (hue and orientation) in the context of a new task (numerical estimation) in order to see whether preattentive estimation was possible. Our experiments tested displays that were designed to visualize data from salmon migration simulations. The results showed that rapid and accurate estimation was indeed possible using either hue or orientation. Furthermore, random variation in one of these features resulted in no interference when subjects estimated the percentage of the other. To test the generality of our results, we varied two important display parameters—display duration and feature difference—and found boundary conditions for each. Implications of our results for application to real-world data and tasks are discussed.", "title": "" }, { "docid": "6d0c4e7f69169b98484e9acc3c3ffdd9", "text": "Motion capture is a prevalent technique for capturing and analyzing human articulations. A common problem encountered in motion capture is that some marker positions are often missing due to occlusions or ambiguities. Most methods for completing missing markers may quickly become ineffective and produce unsatisfactory results when a significant portion of the markers are missing for extended periods of time. We propose a data-driven, piecewise linear modeling approach to missing marker estimation that is especially beneficial in this scenario. We model motion sequences of a training set with a hierarchy of low-dimensional local linear models characterized by the principal components. For a new sequence with missing markers, we use a pre-trained classifier to identify the most appropriate local linear model for each frame and then recover the missing markers by finding the least squares solutions based on the available marker positions and the principal components of the associated model. Our experimental results demonstrate that our method is efficient in recovering the full-body motion and is robust to heterogeneous motion data.", "title": "" }, { "docid": "405cd4bacbcfddc9b4254aee166ee394", "text": "A fundamental problem for the visual perception of 3D shape is that patterns of optical stimulation are inherently ambiguous. Recent mathematical analyses have shown, however, that these ambiguities can be highly constrained, so that many aspects of 3D structure are uniquely specified even though others might be underdetermined. Empirical results with human observers reveal a similar pattern of performance. 
Judgments about 3D shape are often systematically distorted relative to the actual structure of an observed scene, but these distortions are typically constrained to a limited class of transformations. These findings suggest that the perceptual representation of 3D shape involves a relatively abstract data structure that is based primarily on qualitative properties that can be reliably determined from visual information.", "title": "" }, { "docid": "e8d4a806f1515d9cbbe2b7924dfba92e", "text": "How to use, and influence, consumer social communications to improve business performance, reputation, and profit.", "title": "" }, { "docid": "d74299248da9cb4238118ad4533d5d99", "text": "The plug-in hybrid electric vehicles (PHEVs) are specialized hybrid electric vehicles that have the potential to obtain enough energy for average daily commuting from batteries. The PHEV battery would be recharged from the power grid at home or at work and would thus allow for a reduction in the overall fuel consumption. This paper proposes an integrated power electronics interface for PHEVs, which consists of a novel Eight-Switch Inverter (ESI) and an interleaved DC/DC converter, in order to reduce the cost, the mass and the size of the power electronics unit (PEU) with high performance at any operating mode. In the proposed configuration, a novel Eight-Switch Inverter (ESI) is able to function as a bidirectional single-phase AC/DC battery charger/ vehicle to grid (V2G) and to transfer electrical energy between the DC-link (connected to the battery) and the electric traction system as DC/AC inverter. In addition, a bidirectional-interleaved DC/DC converter with dual-loop controller is proposed for interfacing the ESI to a low-voltage battery pack in order to minimize the ripple of the battery current and to improve the efficiency of the DC system with lower inductor size. To validate the performance of the proposed configuration, the indirect field-oriented control (IFOC) based on particle swarm optimization (PSO) is proposed to optimize the efficiency of the AC drive system in PHEVs. The maximum efficiency of the motor is obtained by the evaluation of optimal rotor flux at any operating point, where the PSO is applied to evaluate the optimal flux. Moreover, an improved AC/DC controller based Proportional-Resonant Control (PRC) is proposed in order to reduce the THD of the input current in charger/V2G modes. The proposed configuration is analyzed and its performance is validated using simulated results obtained in MATLAB/ SIMULINK. Furthermore, it is experimentally validated with results obtained from the prototypes that have been developed and built in the laboratory based on TMS320F2808 DSP.", "title": "" }, { "docid": "1c4942dc3cccf7dc2424be450d9be143", "text": "PURPOSE\nTo perform a large-scale systematic comparison of the accuracy of all commonly used perfusion computed tomography (CT) data postprocessing methods in the definition of infarct core and penumbra in acute stroke.\n\n\nMATERIALS AND METHODS\nThe collection of data for this study was approved by the institutional ethics committee, and all patients gave informed consent. Three hundred fourteen patients with hemispheric ischemia underwent perfusion CT within 6 hours of stroke symptom onset and magnetic resonance (MR) imaging at 24 hours. CT perfusion maps were generated by using six different postprocessing methods. 
Pixel-based analysis was used to calculate sensitivity and specificity of different perfusion CT thresholds for the penumbra and infarct core with each postprocessing method, and receiver operator characteristic (ROC) curves were plotted. Area under the ROC curve (AUC) analysis was used to define the optimum threshold.\n\n\nRESULTS\nDelay-corrected singular value deconvolution (SVD) with a delay time of more than 2 seconds most accurately defined the penumbra (AUC = 0.86, P = .046, mean volume difference between acute perfusion CT and 24-hour diffusion-weighted MR imaging = 1.7 mL). A double core threshold with a delay time of more than 2 seconds and cerebral blood flow less than 40% provided the most accurate definition of the infarct core (AUC = 0.86, P = .038). The other SVD measures (block circulant, nondelay corrected) were more accurate than non-SVD methods.\n\n\nCONCLUSION\nThis study has shown that there is marked variability in penumbra and infarct prediction among various deconvolution techniques and highlights the need for standardization of perfusion CT in stroke.\n\n\nSUPPLEMENTAL MATERIAL\nhttp://radiology.rsna.org/lookup/suppl/doi:10.1148/radiol.12120971/-/DC1.", "title": "" }, { "docid": "aa4cf46ab2be3be46de9dc770e7fa2f7", "text": "M C Walter, J A Petersen, R Stucka, D Fischer, R Schröder, M Vorgerd, A Schroers, H Schreiber, C O Hanemann, U Knirsch, A Rosenbohm, A Huebner, N Barisic, R Horvath, S Komoly, P Reilich, W Müller-Felber, D Pongratz, J S Müller, E A Auerswald, H Lochmüller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .", "title": "" }, { "docid": "af4518476ae2cadd264f7288768c99a7", "text": "In multivariate pattern analysis of neuroimaging data, 'second-level' inference is often performed by entering classification accuracies into a t-test vs chance level across subjects. We argue that while the random-effects analysis implemented by the t-test does provide population inference if applied to activation differences, it fails to do so in the case of classification accuracy or other 'information-like' measures, because the true value of such measures can never be below chance level. This constraint changes the meaning of the population-level null hypothesis being tested, which becomes equivalent to the global null hypothesis that there is no effect in any subject in the population. Consequently, rejecting it only allows to infer that there are some subjects in which there is an information effect, but not that it generalizes, rendering it effectively equivalent to fixed-effects analysis. This statement is supported by theoretical arguments as well as simulations. We review possible alternative approaches to population inference for information-based imaging, converging on the idea that it should not target the mean, but the prevalence of the effect in the population. One method to do so, 'permutation-based information prevalence inference using the minimum statistic', is described in detail and applied to empirical data.", "title": "" }, { "docid": "6d285e0e8450791f03f95f58792c8f3c", "text": "Basic psychology research suggests the possibility that confessions-a potent form of incrimination-may taint other evidence, thereby creating an appearance of corroboration. 
To determine if this laboratory-based phenomenon is supported in the high-stakes world of actual cases, we conducted an archival analysis of DNA exoneration cases from the Innocence Project case files. Results were consistent with the corruption hypothesis: Multiple evidence errors were significantly more likely to exist in false-confession cases than in eyewitness cases; in order of frequency, false confessions were accompanied by invalid or improper forensic science, eyewitness identifications, and snitches and informants; and in cases containing multiple errors, confessions were most likely to have been obtained first. We believe that these findings underestimate the problem and have important implications for the law concerning pretrial corroboration requirements and the principle of \"harmless error\" on appeal.", "title": "" }, { "docid": "e541ae262655b7f5affefb32ce9267ee", "text": "Internet of Things (IoT) is a revolutionary technology for the modern society. IoT can connect every surrounding objects for various applications like security, medical fields, monitoring and other industrial applications. This paper considers the application of IoT in the field of medicine. IoT in E-medicine can take the advantage of emerging technologies to provide immediate treatment to the patient as well as monitors and keeps track of health record for healthy person. IoT then performs complex computations on these collected data and can provide health related advice. Though IoT can provide a cost effective medical services to any people of all age groups, there are several key issues that need to be addressed. System security, IoT interoperability, dynamic storage facility and unified access mechanisms are some of the many fundamental issues associated with IoT. This paper proposes a system level design solution for security and flexibility aspect of IoT. In this paper, the functional components are bound in security function group which ensures the management of privacy and secure operation of the system. The security function group comprises of components which offers secure communication using Ciphertext-Policy Attribute-Based Encryption (CP-ABE). Since CP-ABE are delegated to unconstrained devices with the assumption that these devices are trusted, the producer encrypts data using AES and the ABE scheme is protected through symmetric key solutions.", "title": "" }, { "docid": "4646848b959a356bb4d7c0ef14d53c2c", "text": "Consumerization of IT (CoIT) is a key trend affecting society at large, including organizations of all kinds. A consensus about the defining aspects of CoIT has not yet been reached. Some refer to CoIT as employees bringing their own devices and technologies to work, while others highlight different aspects. While the debate about the nature and consequences of CoIT is still ongoing, many definitions have already been proposed. In this paper, we review these definitions and what is known about CoIT thus far. To guide future empirical research in this emerging area, we also review several established theories that have not yet been applied to CoIT but in our opinion have the potential to shed a deeper understanding on CoIT and its consequences. We discuss which elements of the reviewed theories are particularly relevant for understanding CoIT and thereby provide targeted guidance for future empirical research employing these theories. 
Overall, our paper may provide a useful starting point for addressing the lack of theorization in the emerging CoIT literature stream and stimulate discussion about theorizing CoIT.", "title": "" }, { "docid": "444f3d74db2111ef42072448b7e31368", "text": "Deep convolutional neural networks (DCNN) have been widely adopted for research on super resolution recently, however previous work focused mainly on stacking as many layers as possible in their model, in this paper, we present a new perspective regarding to image restoration problems that we can construct the neural network model reflecting the physical significance of the image restoration process, that is, embedding the a priori knowledge of image restoration directly into the structure of our neural network model, we employed a symmetric non-linear colorspace, the sigmoidal transfer, to replace traditional transfers such as, sRGB, Rec.709, which are asymmetric non-linear colorspaces, we also propose a “reuse plus patch” method to deal with super resolution of different scaling factors, our proposed methods and model show generally superior performance over previous work even though our model was only roughly trained and could still be underfitting the training set.", "title": "" }, { "docid": "5a5ae4ab9b802fe6d5481f90a4aa07b7", "text": "High-dimensional pattern classification was applied to baseline and multiple follow-up MRI scans of the Alzheimer's Disease Neuroimaging Initiative (ADNI) participants with mild cognitive impairment (MCI), in order to investigate the potential of predicting short-term conversion to Alzheimer's Disease (AD) on an individual basis. MCI participants that converted to AD (average follow-up 15 months) displayed significantly lower volumes in a number of grey matter (GM) regions, as well as in the white matter (WM). They also displayed more pronounced periventricular small-vessel pathology, as well as an increased rate of increase of such pathology. Individual person analysis was performed using a pattern classifier previously constructed from AD patients and cognitively normal (CN) individuals to yield an abnormality score that is positive for AD-like brains and negative otherwise. The abnormality scores measured from MCI non-converters (MCI-NC) followed a bimodal distribution, reflecting the heterogeneity of this group, whereas they were positive in almost all MCI converters (MCI-C), indicating extensive patterns of AD-like brain atrophy in almost all MCI-C. Both MCI subgroups had similar MMSE scores at baseline. A more specialized classifier constructed to differentiate converters from non-converters based on their baseline scans provided good classification accuracy reaching 81.5%, evaluated via cross-validation. These pattern classification schemes, which distill spatial patterns of atrophy to a single abnormality score, offer promise as biomarkers of AD and as predictors of subsequent clinical progression, on an individual patient basis.", "title": "" }, { "docid": "baad21f223ecee8e2ed07e184e672b3b", "text": "The across the board reception of android devices and their ability to get to critical private and secret data have brought about these devices being focused by malware engineers. Existing android malware analysis techniques categorized into static and dynamic analysis. In this paper, we introduce two machine learning supported methodologies for static analysis of android malware. 
The First approach based on statically analysis, content is found through probability statistics which reduces the uncertainty of information. Feature extraction were proposed based on the analysis of existing dataset. Our both approaches were used to high-dimension data into low-dimensional data so as to reduce the dimension and the uncertainty of the extracted features. In training phase the complexity was reduced 16.7% of the original time and detect the unknown malware families were improved.", "title": "" } ]
scidocsrr
239bef27acccb3a190aeb17c63bd06ed
The Why and How of Nonnegative Matrix Factorization
[ { "docid": "79be4c64b46eca3c64bdcfbec12720a9", "text": "We present several new variations on the theme of nonnegative matrix factorization (NMF). Considering factorizations of the form X = FGT, we focus on algorithms in which G is restricted to containing nonnegative entries, but allowing the data matrix X to have mixed signs, thus extending the applicable range of NMF methods. We also consider algorithms in which the basis vectors of F are constrained to be convex combinations of the data points. This is used for a kernel extension of NMF. We provide algorithms for computing these new factorizations and we provide supporting theoretical analysis. We also analyze the relationships between our algorithms and clustering algorithms, and consider the implications for sparseness of solutions. Finally, we present experimental results that explore the properties of these new methods.", "title": "" }, { "docid": "91d8e79b31a07aff4c1ee16570ae49ad", "text": "Endmember extraction is a process to identify the hidden pure source signals from the mixture. In the past decade, numerous algorithms have been proposed to perform this estimation. One commonly used assumption is the presence of pure pixels in the given image scene, which are detected to serve as endmembers. When such pixels are absent, the image is referred to as the highly mixed data, for which these algorithms at best can only return certain data points that are close to the real endmembers. To overcome this problem, we present a novel method without the pure-pixel assumption, referred to as the minimum volume constrained nonnegative matrix factorization (MVC-NMF), for unsupervised endmember extraction from highly mixed image data. Two important facts are exploited: First, the spectral data are nonnegative; second, the simplex volume determined by the endmembers is the minimum among all possible simplexes that circumscribe the data scatter space. The proposed method takes advantage of the fast convergence of NMF schemes, and at the same time eliminates the pure-pixel assumption. The experimental results based on a set of synthetic mixtures and a real image scene demonstrate that the proposed method outperforms several other advanced endmember detection approaches", "title": "" }, { "docid": "ab01dc16d6f31a423b68fca2aeb8e109", "text": "Matrix factorization techniques have been frequently applied in information retrieval, computer vision, and pattern recognition. Among them, Nonnegative Matrix Factorization (NMF) has received considerable attention due to its psychological and physiological interpretation of naturally occurring data whose representation may be parts based in the human brain. On the other hand, from the geometric perspective, the data is usually sampled from a low-dimensional manifold embedded in a high-dimensional ambient space. One then hopes to find a compact representation,which uncovers the hidden semantics and simultaneously respects the intrinsic geometric structure. In this paper, we propose a novel algorithm, called Graph Regularized Nonnegative Matrix Factorization (GNMF), for this purpose. In GNMF, an affinity graph is constructed to encode the geometrical information and we seek a matrix factorization, which respects the graph structure. Our empirical study shows encouraging results of the proposed algorithm in comparison to the state-of-the-art algorithms on real-world problems.", "title": "" } ]
[ { "docid": "0fa55762a86f658aa2936cd63f2db838", "text": "Mindfulness has received considerable attention as a correlate of psychological well-being and potential mechanism for the success of mindfulness-based interventions (MBIs). Despite a common emphasis of mindfulness, at least in name, among MBIs, mindfulness proves difficult to assess, warranting consideration of other common components. Self-compassion, an important construct that relates to many of the theoretical and practical components of MBIs, may be an important predictor of psychological health. The present study compared ability of the Self-Compassion Scale (SCS) and the Mindful Attention Awareness Scale (MAAS) to predict anxiety, depression, worry, and quality of life in a large community sample seeking self-help for anxious distress (N = 504). Multivariate and univariate analyses showed that self-compassion is a robust predictor of symptom severity and quality of life, accounting for as much as ten times more unique variance in the dependent variables than mindfulness. Of particular predictive utility are the self-judgment and isolation subscales of the SCS. These findings suggest that self-compassion is a robust and important predictor of psychological health that may be an important component of MBIs for anxiety and depression.", "title": "" }, { "docid": "2f92cde5a194a4cabdebebe2c7cc11ba", "text": "The expressive power of neural networks is important for understanding deep learning. Most existing works consider this problem from the view of the depth of a network. In this paper, we study how width affects the expressiveness of neural networks. Classical results state that depth-bounded (e.g. depth-2) networks with suitable activation functions are universal approximators. We show a universal approximation theorem for width-bounded ReLU networks: width-(n+ 4) ReLU networks, where n is the input dimension, are universal approximators. Moreover, except for a measure zero set, all functions cannot be approximated by width-n ReLU networks, which exhibits a phase transition. Several recent works demonstrate the benefits of depth by proving the depth-efficiency of neural networks. That is, there are classes of deep networks which cannot be realized by any shallow network whose size is no more than an exponential bound. Here we pose the dual question on the width-efficiency of ReLU networks: Are there wide networks that cannot be realized by narrow networks whose size is not substantially larger? We show that there exist classes of wide networks which cannot be realized by any narrow network whose depth is no more than a polynomial bound. On the other hand, we demonstrate by extensive experiments that narrow networks whose size exceed the polynomial bound by a constant factor can approximate wide and shallow network with high accuracy. Our results provide more comprehensive evidence that depth may be more effective than width for the expressiveness of ReLU networks.", "title": "" }, { "docid": "973da8a50b1250688fceb94611a4f0f7", "text": "Experts in sport benefit from some cognitive mechanisms and strategies which enables them to reduce response times and increase response accuracy.Reaction time is mediated by different factors including type of sport that athlete is participating in and expertise status. 
The present study aimed to investigate the relationship between CRTs and expertise level in collegiate athletes, as well as evaluating the role of sport and gender differences.44 male and female athletesrecruited from team and individual sports at elite and non-elite levels. The Lafayette multi-choice reaction time was used to collect data.All subjectsperformed a choice reaction time task that required response to visual and auditory stimuli. Results demonstrated a significant overall choice reaction time advantage for maleathletes, as well as faster responses to stimuli in elite participants.Athletes of team sportsdid not showmore accurate performance on the choice reaction time tasks than athletes of individual sports. These findings suggest that there is a relation between choice reaction time and expertise in athletes and this relationship can be mediated by gender differences. Overall, athletes with intrinsic perceptualmotor advantages such as faster reaction times are potentially more equipped for participation in high levels of sport.", "title": "" }, { "docid": "af983aa7ac103dd41dfd914af452758f", "text": "The fast-growing nature of instant messaging applications usage on Android mobile devices brought about a proportional increase on the number of cyber-attack vectors that could be perpetrated on them. Android mobile phones store significant amount of information in the various memory partitions when Instant Messaging (IM) applications (WhatsApp, Skype, and Facebook) are executed on them. As a result of the enormous crimes committed using instant messaging applications, and the amount of electronic based traces of evidence that can be retrieved from the suspect’s device where an investigation could convict or refute a person in the court of law and as such, mobile phones have become a vulnerable ground for digital evidence mining. This paper aims at using forensic tools to extract and analyse left artefacts digital evidence from IM applications on Android phones using android studio as the virtual machine. Digital forensic investigation methodology by Bill Nelson was applied during this research. Some of the key results obtained showed how digital forensic evidence such as call logs, contacts numbers, sent/retrieved messages, and images can be mined from simulated android phones when running these applications. These artefacts can be used in the court of law as evidence during cybercrime investigation.", "title": "" }, { "docid": "7e004a7b6a39ff29176dd19a07c15448", "text": "All humans will become presbyopic as part of the aging process where the eye losses the ability to focus at different depths. Progressive additive lenses (PALs) allow a person to focus on objects located at near versus far by combing lenses of different strengths within the same spectacle. However, it is unknown why some patients easily adapt to wearing these lenses while others struggle and complain of vertigo, swim, and nausea as well as experience difficulties with balance. Sixteen presbyopes (nine who adapted to PALs and seven who had tried but could not adapt) participated in this study. This research investigated vergence dynamics and its adaptation using a short-term motor learning experiment to asses the ability to adapt. Vergence dynamics were on average faster and the ability to change vergence dynamics was also greater for presbyopes who adapted to progressive lenses compared to those who could not. 
Data suggest that vergence dynamics and its adaptation may be used to predict which patients will easily adapt to progressive lenses and discern those who will have difficulty.", "title": "" }, { "docid": "dc6ef4268b98d212392e79441f64c98a", "text": "This paper investigates the framework of encoder-decoder with attention for sequence labelling based spoken language understanding. We introduce Bidirectional Long Short Term Memory - Long Short Term Memory networks (BLSTM-LSTM) as the encoder-decoder model to fully utilize the power of deep learning. In the sequence labelling task, the input and output sequences are aligned word by word, while the attention mechanism cannot provide the exact alignment. To address this limitation, we propose a novel focus mechanism for encoder-decoder framework. Experiments on the standard ATIS dataset showed that BLSTM-LSTM with focus mechanism defined the new state-of-the-art by outperforming standard BLSTM and attention based encoder-decoder. Further experiments also show that the proposed model is more robust to speech recognition errors.", "title": "" }, { "docid": "74a4c7c7282466026c354b3e1a9c859c", "text": "An automatic algorithm capable of segmenting the whole vessel tree and calculate vessel diameter and orientation in a digital ophthalmologic image is presented in this work. The algorithm is based on a parametric model of a vessel that can assume arbitrarily complex shape and a simple measure of match that quantifies how well the vessel model matches a given angiographic image. An automatic vessel tracing algorithm is described that exploits the geometric model and actively seeks vessel bifurcation, without user intervention. The proposed algorithm uses the geometric vessel model to determine the vessel diameter at each detected central axis pixel. For this reason, the algorithm is fine tuned using a subset of ophthalmologic images of the publically available DRIVE database, by maximizing vessel segmentation accuracy. The proposed algorithm is then applied to the remaining ophthalmological images of the DRIVE database. The segmentation results of the proposed algorithm compare favorably in terms of accuracy with six other well established vessel detection techniques, outperforming three of them in the majority of the available ophthalmologic images. The proposed algorithm achieves subpixel root mean square central axis positioning error that outperforms the non-expert based vessel segmentation, whereas the accuracy of vessel diameter estimation is comparable to that of the non-expert based vessel segmentation.", "title": "" }, { "docid": "bd41083b19e2d542b3835c3a008b30e6", "text": "Formalizations are used in systems development to support the description of artifacts and to shape and regulate developer behavior. The limits to applying formalizations in these two ways are discussed based on examples from systems development practice. It is argued that formalizations, for example in the form of methods, are valuable in some situations, but inappropriate in others. The alternative to uncritically using formalizations is that systems developers reflect on the situations in which they find themselves and manage based on a combination of formal and informal approaches.", "title": "" }, { "docid": "4c1dd5cdf03e618f4ac1923c4fbcc251", "text": "With the rapid expansion of computer usage and computer network the security of the computer system has became very important. Every day new kind of attacks are being faced by industries. 
As the threat becomes a serious matter year by year, intrusion detection technologies are indispensable for network and computer security. A variety of intrusion detection approaches be present to resolve this severe issue but the main problem is performance. It is important to increase the detection rates and reduce false alarm rates in the area of intrusion detection. In order to detect the intrusion, various approaches have been developed and proposed over the last decade. In this paper, a detailed survey of intrusion detection based various techniques has been presented. Here, the techniques are classified as follows: i) papers related to Neural network ii) papers related to Support vector machine iii) papers related to K-means classifier iv) papers related to hybrid technique and v) paper related to other detection techniques. For comprehensive analysis, detection rate, time and false alarm rate from various research papers have been taken.", "title": "" }, { "docid": "91c0c976d7344d38f9b6219cd615f85a", "text": "Renewed motives for space exploration have inspired NASA to work toward the goal of establishing a virtual presence in space, through heterogeneous fleets of robotic explorers. Information technology, and Artificial Intelligence in particular, will play a central role in this endeavor by endowing these explorers with a form of computational intelligence that we call remote ayems. In this paper we describe the Remote Agent, a specific autonomous agent architecture based on the principles of model-based programming, on-board deduction and search, and goaldirected closed-loop commanding, that takes a significant step toward enabling this future. This architecture addresses the unique characteristics of the spacecraft domain that require highly reliable autonomous operations over long periods of time with tight deadlines, resource constraints, and concurrent activity among tightly coupled subsystems. The Remote Agent integrates constraintbased temporal planning and scheduling, robust multi-threaded execution, and model-based mode identification and reconfiguration. The demonstration of the integrated system as an on-board controller for Deep Space One, NASA’s first New Millennium mission, is scheduled for a period of a week in mid 1999. The development of the Remote Agent also provided the opportunity to reassess some of AI’s conventional wisdom about the challenges of implementing embedded systems, tractable reasoning, and knowledge representation. We discuss these issues, and our often contrary experiences, throughout the paper.", "title": "" }, { "docid": "4e9d95b3fa34f6165b9315b2384f624c", "text": "Distance metric learning (DML) approaches learn a transformation to a representation space where distance is in correspondence with a predefined notion of similarity. While such models offer a number of compelling benefits, it has been difficult for these to compete with modern classification algorithms in performance and even in feature extraction. In this work, we propose a novel approach explicitly designed to address a number of subtle yet important issues which have stymied earlier DML algorithms. It maintains an explicit model of the distributions of the different classes in representation space. It then employs this knowledge to adaptively assess similarity, and achieve local discrimination by penalizing class distribution overlap. We demonstrate the effectiveness of this idea on several tasks. 
Our approach achieves state-of-the-art classification results on a number of fine-grained visual recognition datasets, surpassing the standard softmax classifier and outperforming triplet loss by a relative margin of 30-40%. In terms of computational performance, it alleviates training inefficiencies in the traditional triplet loss, reaching the same error in 5-30 times fewer iterations. Beyond classification, we further validate the saliency of the learnt representations via their attribute concentration and hierarchy recovery properties, achieving 10-25% relative gains on the softmax classifier and 25-50% on triplet loss in these tasks.", "title": "" }, { "docid": "0bd91a36d282a08759d5e7ad0b2aee97", "text": "We carry out a systematic study of existing visual CAPTCHAs based on distorted characters that are augmented with anti-segmentation techniques. Applying a systematic evaluation methodology to 15 current CAPTCHA schemes from popular web sites, we find that 13 are vulnerable to automated attacks. Based on this evaluation, we identify a series of recommendations for CAPTCHA designers and attackers, and possible future directions for producing more reliable human/computer distinguishers.", "title": "" }, { "docid": "9df40a85b2e8cc8244afd60ebf2cdb35", "text": "UNLABELLED\nCytoscape Web is a web-based network visualization tool-modeled after Cytoscape-which is open source, interactive, customizable and easily integrated into web sites. Multiple file exchange formats can be used to load data into Cytoscape Web, including GraphML, XGMML and SIF.\n\n\nAVAILABILITY AND IMPLEMENTATION\nCytoscape Web is implemented in Flex/ActionScript with a JavaScript API and is freely available at http://cytoscapeweb.cytoscape.org/.", "title": "" }, { "docid": "d170d7cf20b0a848bb0d81c5d163b505", "text": "The organizational and social issues associated with the development, implementation and use of computer-based information systems have increasingly attracted the attention of information systems researchers. Interest in qualitative research methods such as action research, case study research and ethnography, which focus on understanding social phenomena in their natural setting, has consequently grown. Case study research is the most widely used qualitative research method in information systems research, and is well suited to understanding the interactions between information technology-related innovations and organizational contexts. Although case study research is useful as ameans of studying information systems development and use in the field, there can be practical difficulties associated with attempting to undertake case studies as a rigorous and effective method of research. This paper addresses a number of these difficulties and offers some practical guidelines for successfully completing case study research. The paper focuses on the pragmatics of conducting case study research, and draws from the discussion at a panel session conducted by the authors at the 8th Australasian Conference on Information Systems, September 1997 (ACIS 97), from the authors' practical experiences, and from the case study research literature.", "title": "" }, { "docid": "330e0c60c4d491b2f824cb4da8467cc4", "text": "We investigate the usage of convolutional neural networks (CNNs) for the slot filling task in spoken language understanding. 
We propose a novel CNN architecture for sequence labeling which takes into account the previous context words with preserved order information and pays special attention to the current word with its surrounding context. Moreover, it combines the information from the past and the future words for classification. Our proposed CNN architecture outperforms even the previously best ensembling recurrent neural network model and achieves state-of-the-art results with an F1-score of 95.61% on the ATIS benchmark dataset without using any additional linguistic knowledge and resources.", "title": "" }, { "docid": "b39b0b07e6195ae47295e38aea9d6dad", "text": "Simulation theories of social cognition abound in the literature, but it is often unclear what simulation means and how it works. The discovery of mirror neurons, responding both to action execution and observation, suggested an embodied approach to mental simulation. Over the past few years this approach has been hotly debated and alternative accounts have been proposed. We discuss these accounts and argue that they fail to capture the uniqueness of embodied simulation (ES). ES theory provides a unitary account of basic social cognition, demonstrating that people reuse their own mental states or processes represented with a bodily format in functionally attributing them to others.", "title": "" }, { "docid": "e3dc44074fe921f4d42135a7e05bf051", "text": "This paper presents a 60 GHz antenna structure built on glass and flip-chipped on a ceramic module. A single antenna and a two antenna array have been fabricated and demonstrated good performances. The single antenna shows a return loss below −10 dB and a gain of 6–7 dBi over a 7 GHz bandwidth. The array shows a gain of 7–8 dBi over a 3 GHz bandwidth.", "title": "" }, { "docid": "e1f18a123aca51f989ee2e526eb815b2", "text": "Deep Learning has had a transformative impact on Computer Vision, but for all of the success there is also a significant cost. This is that the models and procedures used are so complex and intertwined that it is often impossible to distinguish the impact of the individual design and engineering choices each model embodies. This ambiguity diverts progress in the field, and leads to a situation where developing a state-of-the-art model is as much an art as a science. As a step towards addressing this problem we present a massive exploration of the effects of the myriad architectural and hyperparameter choices that must be made in generating a state-of-the-art model. The model is of particular interest because it won the 2017 Visual Question Answering Challenge. We provide a detailed analysis of the impact of each choice on model performance, in the hope that it will inform others in developing models, but also that it might set a precedent that will accelerate scientific progress in the field.", "title": "" }, { "docid": "b5333ce046458594490b42de142dacb1", "text": "This paper presents a Maximum Current Point Tracking (MCPT) Controller for SIC MOSFET based high power solid state 2 MHz RF inverter for RF driven H- ion source. This RF Inverter is based on a class-D, half-bridge with series resonance LC topology, operating slightly above the resonance frequency (near to 2 MHz). Since plasma systems have a dynamic behavior which affects the RF antenna impedance, hence the RF antenna voltage and current changes, according to change in plasma parameters. 
In order to continuously yield maximum current through an antenna, it has to operate at its maximum current point, despite the inevitable changes in the antenna impedance due to changes in plasma properties. An MCPT controller simulated using LT-spice, wherein the antenna current sensed, tracked to maximum point current in a close loop by varying frequency of the voltage controlled oscillator. Thus, impedance matching network redundancy is established for maximum RF power coupling to the antenna.", "title": "" }, { "docid": "e464859fd25c6bdcf266ceec090af9f2", "text": "AC ◦ MOD2 circuits are AC circuits augmented with a layer of parity gates just above the input layer. We study AC ◦MOD2 circuit lower bounds for computing the Boolean Inner Product functions. Recent works by Servedio and Viola (ECCC TR12-144) and Akavia et al. (ITCS 2014) have highlighted this problem as a frontier problem in circuit complexity that arose both as a first step towards solving natural special cases of the matrix rigidity problem and as a candidate for constructing pseudorandom generators of minimal complexity. We give the first superlinear lower bound for the Boolean Inner Product function against AC ◦MOD2 of depth four or greater. Specifically, we prove a superlinear lower bound for circuits of arbitrary constant depth, and an Ω̃(n) lower bound for the special case of depth-4 AC ◦MOD2. Our proof of the depth-4 lower bound employs a new “moment-matching” inequality for bounded, nonnegative integer-valued random variables that may be of independent interest: we prove an optimal bound on the maximum difference between two discrete distributions’ values at 0, given that their first d moments match. ∗Simons Institute for the Theory of Computing, University of California, Berkeley, CA. Email: [email protected]. Supported by a Qualcomm fellowship. †Department of Computer Science, Purdue University, West Lafayette, IN. Email: [email protected]. ‡Department of Computer Science and Engineering, Washington University, St. Louis, MO. Email: [email protected]. Supported by an AFOSR Young Investigator award. §Department of Mathematics, Duquesne University, Pittsburgh, PA. Email: [email protected]. Supported by NSF award CCF-1117079. ¶SCIS, Florida International University, Miami, FL. Email: [email protected]. Research supported in part by NSF grant 1423034.", "title": "" } ]
scidocsrr
2a68f4b2c5e2e218ab56b30aaa5d2fb1
Crocodiles in the Regulatory Swamp: Navigating The Dangers of Outsourcing, SaaS and Shadow IT
[ { "docid": "88bf67ec7ff0cfa3f1dc6af12140d33b", "text": "Cloud computing is set of resources and services offered through the Internet. Cloud services are delivered from data centers located throughout the world. Cloud computing facilitates its consumers by providing virtual resources via internet. General example of cloud services is Google apps, provided by Google and Microsoft SharePoint. The rapid growth in field of “cloud computing” also increases severe security concerns. Security has remained a constant issue for Open Systems and internet, when we are talking about security cloud really suffers. Lack of security is the only hurdle in wide adoption of cloud computing. Cloud computing is surrounded by many security issues like securing data, and examining the utilization of cloud by the cloud computing vendors. The wide acceptance www has raised security risks along with the uncountable benefits, so is the case with cloud computing. The boom in cloud computing has brought lots of security challenges for the consumers and service providers. How the end users of cloud computing know that their information is not having any availability and security issues? Every one poses, Is their information secure? This study aims to identify the most vulnerable security threats in cloud computing, which will enable both end users and vendors to know about the key security threats associated with cloud computing. Our work will enable researchers and security professionals to know about users and vendors concerns and critical analysis about the different security models and tools proposed.", "title": "" } ]
[ { "docid": "2292c60d69c94f31c2831c2f21c327d8", "text": "With the abundance of raw data generated from various sources, Big Data has become a preeminent approach in acquiring, processing, and analyzing large amounts of heterogeneous data to derive valuable evidences. The size, speed, and formats in which data is generated and processed affect the overall quality of information. Therefore, Quality of Big Data (QBD) has become an important factor to ensure that the quality of data is maintained at all Big data processing phases. This paper addresses the QBD at the pre-processing phase, which includes sub-processes like cleansing, integration, filtering, and normalization. We propose a QBD model incorporating processes to support Data quality profile selection and adaptation. In addition, it tracks and registers on a data provenance repository the effect of every data transformation happened in the pre-processing phase. We evaluate the data quality selection module using large EEG dataset. The obtained results illustrate the importance of addressing QBD at an early phase of Big Data processing lifecycle since it significantly save on costs and perform accurate data analysis.", "title": "" }, { "docid": "a892cb2c4ae93e460ea988a09ffd9089", "text": "The concept of \"Industry 4.0\" considers smart factories as data-driven and knowledge enabled enterprise intelligence. In such kind of factory, manufacturing processes and final products are accompanied by virtual models -- Digital Twins. To support Digital Twins concept, a simulation model for each process or system should be implemented as independent computational service. The only way to implement an orchestration of a set of independent services and provide scalability for simulation is to use a cloud computing platform as a provider of the computing infrastructure. In this paper, we describe a Digital Twin-as-a-Service (DTaaS) model for simulation and prediction of industrial processes using Digital Twins.", "title": "" }, { "docid": "6c0f3240b86677a0850600bf68e21740", "text": "In this article, we revisit two popular convolutional neural networks in person re-identification (re-ID): verification and identification models. The two models have their respective advantages and limitations due to different loss functions. Here, we shed light on how to combine the two models to learn more discriminative pedestrian descriptors. Specifically, we propose a Siamese network that simultaneously computes the identification loss and verification loss. Given a pair of training images, the network predicts the identities of the two input images and whether they belong to the same identity. Our network learns a discriminative embedding and a similarity measurement at the same time, thus taking full usage of the re-ID annotations. Our method can be easily applied on different pretrained networks. Albeit simple, the learned embedding improves the state-of-the-art performance on two public person re-ID benchmarks. Further, we show that our architecture can also be applied to image retrieval. The code is available at https://github.com/layumi/2016_person_re-ID.", "title": "" }, { "docid": "914daf0fd51e135d6d964ecbe89a5b29", "text": "Large-scale parallel programming environments and algorithms require efficient group-communication on computing systems with failing nodes. Existing reliable broadcast algorithms either cannot guarantee that all nodes are reached or are very expensive in terms of the number of messages and latency. 
This paper proposes Corrected-Gossip, a method that combines Monte Carlo style gossiping with a deterministic correction phase, to construct a Las Vegas style reliable broadcast that guarantees reaching all the nodes at low cost. We analyze the performance of this method both analytically and by simulations and show how it reduces the latency and network load compared to existing algorithms. Our method improves the latency by 20% and the network load by 53% compared to the fastest known algorithm on 4,096 nodes. We believe that the principle of corrected-gossip opens an avenue for many other reliable group communication operations.", "title": "" }, { "docid": "b9e8007220be2887b9830c05c283f8a5", "text": "INTRODUCTION\nHealth-care professionals are trained health-care providers who occupy a potential vanguard position in human immunodeficiency virus (HIV)/acquired immune deficiency syndrome (AIDS) prevention programs and the management of AIDS patients. This study was performed to assess HIV/AIDS-related knowledge, attitude, and practice (KAP) and perceptions among health-care professionals at a tertiary health-care institution in Uttarakhand, India, and to identify the target group where more education on HIV is needed.\n\n\nMATERIALS AND METHODS\nA cross-sectional KAP survey was conducted among five groups comprising consultants, residents, medical students, laboratory technicians, and nurses. Probability proportional to size sampling was used for generating random samples. Data analysis was performed using charts and tables in Microsoft Excel 2016, and statistical analysis was performed using the Statistical Package for the Social Science software version 20.0.\n\n\nRESULTS\nMost participants had incomplete knowledge regarding the various aspects of HIV/AIDS. Attitude in all the study groups was receptive toward people living with HIV/AIDS. Practical application of knowledge was best observed in the clinicians as well as medical students. Poor performance by technicians and nurses was observed in prevention and prophylaxis. All groups were well informed about the National AIDS Control Policy except technicians.\n\n\nCONCLUSION\nPoor knowledge about HIV infection, particularly among the young medical students and paramedics, is evidence of the lacunae in the teaching system, which must be kept in mind while formulating teaching programs. As suggested by the respondents, Information Education Communication activities should be improvised making use of print, electronic, and social media along with interactive awareness sessions, regular continuing medical educations, and seminars to ensure good quality of safe modern medical care.", "title": "" }, { "docid": "f9a4cea63b2df8b0a93d1652f17ff095", "text": "The current virtual machine(VM) resources scheduling in cloud computing environment mainly considers the current state of the system but seldom considers system variation and historical data, which always leads to load imbalance of the system. In view of the load balancing problem in VM resources scheduling, this paper presents a scheduling strategy on load balancing of VM resources based on genetic algorithm. According to historical data and current state of the system and through genetic algorithm, this strategy computes ahead the influence it will have on the system after the deployment of the needed VM resources and then chooses the least-affective solution, through which it achieves the best load balancing and reduces or avoids dynamic migration. 
This strategy solves the problem of load imbalance and high migration cost by traditional algorithms after scheduling. Experimental results prove that this method is able to realize load balancing and reasonable resources utilization both when system load is stable and variant.", "title": "" }, { "docid": "a6e2652aa074719ac2ca6e94d12fed03", "text": "■ Lincoln Laboratory led the nation in the development of high-power wideband radar with a unique capability for resolving target scattering centers and producing three-dimensional images of individual targets. The Laboratory fielded the first wideband radar, called ALCOR, in 1970 at Kwajalein Atoll. Since 1970 the Laboratory has developed and fielded several other wideband radars for use in ballistic-missile-defense research and space-object identification. In parallel with these radar systems, the Laboratory has developed high-capacity, high-speed signal and data processing techniques and algorithms that permit generation of target images and derivation of other target features in near real time. It has also pioneered new ways to realize improved resolution and scatterer-feature identification in wideband radars by the development and application of advanced signal processing techniques. Through the analysis of dynamic target images and other wideband observables, we can acquire knowledge of target form, structure, materials, motion, mass distribution, identifying features, and function. Such capability is of great benefit in ballistic missile decoy discrimination and in space-object identification.", "title": "" }, { "docid": "a85803f14639bef7f4539bad631d088c", "text": "5.", "title": "" }, { "docid": "799bc245ecfabf59416432ab62fe9320", "text": "This study examines resolution skills in phishing email detection, defined as the abilities of individuals to discern correct judgments from incorrect judgments in probabilistic decisionmaking. An illustration of the resolution skills is provided. A number of antecedents to resolution skills in phishing email detection, including familiarity with the sender, familiarity with the email, online transaction experience, prior victimization of phishing attack, perceived selfefficacy, time to judgment, and variability of time in judgments, are examined. Implications of the study are further discussed.", "title": "" }, { "docid": "bcea2f75005dc7333ee6660b8803d871", "text": "This paper investigates whether the increased flexibility afforded by e-commerce has allowed firms to increase their tax planning activities. We specifically address the incentives of firms to use exports rather than foreign subsidiary sales. Using proxies for e-commerce activity in both company-level and country-level tests, we find evidence consistent with greater sensitivity to tax incentives when e-commerce measures are high. This research is an important first step in understanding the larger impact of e-commerce on international tax planning behavior.", "title": "" }, { "docid": "72be75e973b6a843de71667566b44929", "text": "We think that hand pose estimation technologies with a camera should be developed for character conversion systems from sign languages with a not so high performance terminal. Fingernail positions can be used for getting finger information which can’t be obtained from outline information. Therefore, we decided to construct a practical fingernail detection system. 
The previous fingernail detection method, using distribution density of strong nail-color pixels, was not good at removing some skin areas having gloss like finger side area. Therefore, we should use additional information to remove them. We thought that previous method didn’t use boundary information and this information would be available. Color continuity information is available for getting it. In this paper, therefore, we propose a new fingernail detection method using not only distribution density but also color continuity to improve accuracy. We investigated the relationship between wrist rotation angles and percentages of correct detection. The number of users was three. As a result, we confirmed that our proposed method raised accuracy compared with previous method and could detect only fingernails with at least 85% probability from -90 to 40 degrees and from 40 to 90 degrees. Therefore, we concluded that our proposed method was effective.", "title": "" }, { "docid": "4de0ef0504e20571df6491a7d487042f", "text": "In this paper, we propose a novel reduced-reference quality assessment metric for image super-resolution (RRIQA-SR) based on the low-resolution (LR) image information. First, we use the Markov Random Field (MRF) to model the pixel correspondence between LR and high-resolution (HR) images. Based on the pixel correspondence, we predict the perceptual similarity between image patches of LR and HR images by two components: the energy change and texture variation. The overall quality of HR images is estimated by the perceptual similarity between local image patches of LR and HR images. Experimental results demonstrate that the proposed method can obtain better performance of quality prediction for HR images than other existing ones, even including some full-reference (FR) metrics.", "title": "" }, { "docid": "a3ae9af5962d5df8a001da8964edfe3b", "text": "The problem of blind demodulation of multiuser information symbols in a high-rate code-division multiple-access (CDMA) network in the presence of both multiple-access interference (MAI) and intersymbol interference (ISI) is considered. The dispersive CDMA channel is first cast into a multipleinput multiple-output (MIMO) signal model framework. By applying the theory of blind MIMO channel identification and equalization, it is then shown that under certain conditions the multiuser information symbols can be recovered without any prior knowledge of the channel or the users’ signature waveforms (including the desired user’s signature waveform), although the algorithmic complexity of such an approach is prohibitively high. However, in practice, the signature waveform of the user of interest is always available at the receiver. It is shown that by incorporating this knowledge, the impulse response of each user’s dispersive channel can be identified using a subspace method. It is further shown that based on the identified signal subspace parameters and the channel response, two linear detectors that are capable of suppressing both MAI and ISI, i.e., a zeroforcing detector and a minimum-mean-square-errror (MMSE) detector, can be constructed in closed form, at almost no extra computational cost. Data detection can then be furnished by applying these linear detectors (obtained blindly) to the received signal. 
The major contribution of this paper is the development of these subspace-based blind techniques for joint suppression of MAI and ISI in the dispersive CDMA channels.", "title": "" }, { "docid": "c5b9053b1b22d56dd827009ef529004d", "text": "An integrated receiver with high sensitivity and low walk error for a military purpose pulsed time-of-flight (TOF) LADAR system is proposed. The proposed receiver adopts a dual-gain capacitive-feedback TIA (C-TIA) instead of widely used resistive-feedback TIA (R-TIA) to increase the sensitivity. In addition, a new walk-error improvement circuit based on a constant-delay detection method is proposed. Implemented in 0.35 μm CMOS technology, the receiver achieves an input-referred noise current of 1.36 pA/√Hz with bandwidth of 140 MHz and minimum detectable signal (MDS) of 10 nW with a 5 ns pulse at SNR=3.3, maximum walk-error of 2.8 ns, and a dynamic range of 1:12,000 over the operating temperature range of -40 °C to +85 °C.", "title": "" }, { "docid": "18e77bde932964655ba7df73b02a3048", "text": "In this paper, we propose a mathematical framework to jointly model related activities with both motion and context information for activity recognition and anomaly detection. This is motivated from observations that activities related in space and time rarely occur independently and can serve as context for each other. The spatial and temporal distribution of different activities provides useful cues for the understanding of these activities. We denote the activities occurring with high frequencies in the database as normal activities. Given training data which contains labeled normal activities, our model aims to automatically capture frequent motion and context patterns for each activity class, as well as each pair of classes, from sets of predefined patterns during the learning process. Then, the learned model is used to generate globally optimum labels for activities in the testing videos. We show how to learn the model parameters via an unconstrained convex optimization problem and how to predict the correct labels for a testing instance consisting of multiple activities. The learned model and generated labels are used to detect anomalies whose motion and context patterns deviate from the learned patterns. We show promising results on the VIRAT Ground Dataset that demonstrates the benefit of joint modeling and recognition of activities in a wide-area scene and the effectiveness of the proposed method in anomaly detection.", "title": "" }, { "docid": "75fa00064a01ee22546622adc206f5a5", "text": "Generative adversarial networks (GANs) have achieved significant success in generating realvalued data. However, the discrete nature of text hinders the application of GAN to textgeneration tasks. Instead of using the standard GAN objective, we propose to improve textgeneration GAN via a novel approach inspired by optimal transport. Specifically, we consider matching the latent feature distributions of real and synthetic sentences using a novel metric, termed the feature-mover’s distance (FMD). This formulation leads to a highly discriminative critic and easy-to-optimize objective, overcoming the mode-collapsing and brittle-training problems in existing methods.", "title": "" }, { "docid": "f3f1ecee3dcf0d0eb1eeddd0ed2e3247", "text": "Automatic semantic annotation of video streams allows both to extract significant clips for production logging and to index video streams for posterity logging. 
Automatic annotation for production logging is particularly demanding, as it is applied to non-edited video streams and must rely only on visual information. Moreover, annotation must be computed in quasi real-time. In this paper, we present a system that performs automatic annotation of the principal highlights in soccer video, suited for both production and posterity logging. The knowledge of the soccer domain is encoded into a set of finite state machines, each of which models a specific highlight. Highlight detection exploits visual cues that are estimated from the video stream, and particularly, ball motion, the currently framed playfield zone, players’ positions and colors of players’ uniforms. The highlight models are checked against the current observations, using a model checking algorithm. The system has been developed within the EU ASSAVID project.", "title": "" }, { "docid": "0c1672cb538bfbc50136c5365f04282b", "text": "We propose DeepMiner, a framework to discover interpretable representations in deep neural networks and to build explanations for medical predictions. By probing convolutional neural networks (CNNs) trained to classify cancer in mammograms, we show that many individual units in the final convolutional layer of a CNN respond strongly to diseased tissue concepts specified by the BI-RADS lexicon. After expert annotation of the interpretable units, our proposed method is able to generate explanations for CNN mammogram classification that are correlated with ground truth radiology reports on the DDSM dataset. We show that DeepMiner not only enables better understanding of the nuances of CNN classification decisions, but also possibly discovers new visual knowledge relevant to medical diagnosis.", "title": "" }, { "docid": "3783324e48c5c46bc961bf82576d9974", "text": "We present a fully Bayesian approach to modeling in functional magnetic resonance imaging (FMRI), incorporating spatio-temporal noise modeling and haemodynamic response function (HRF) modeling. A fully Bayesian approach allows for the uncertainties in the noise and signal modeling to be incorporated together to provide full posterior distributions of the HRF parameters. The noise modeling is achieved via a nonseparable space-time vector autoregressive process. Previous FMRI noise models have either been purely temporal, separable or modeling deterministic trends. The specific form of the noise process is determined using model selection techniques. Notably, this results in the need for a spatially nonstationary and temporally stationary spatial component. Within the same full model, we also investigate the variation of the HRF in different areas of the activation, and for different experimental stimuli. We propose a novel HRF model made up of half-cosines, which allows distinct combinations of parameters to represent characteristics of interest. In addition, to adaptively avoid over-fitting we propose the use of automatic relevance determination priors to force certain parameters in the model to zero with high precision if there is no evidence to support them in the data. 
We apply the model to three datasets and observe matter-type dependence of the spatial and temporal noise, and a negative correlation between activation height and HRF time to main peak (although we suggest that this apparent correlation may be due to a number of different effects).", "title": "" }, { "docid": "e7a6ddeb076fdc3ac3cc8800a8913d9c", "text": "Future weather radar systems will need to provide rapid updates within a flexible multifunctional overall radar network. This naturally leads to the use of electronically scanned phased array antennas. However, the traditional multifaced planar antenna approaches suffer from having radiation patterns that are variant in both beam shape and polarization as a function of electronic scan angle; even with practically challenging angle-dependent polarization correction, this places limitations on how accurately weather can be measured. A cylindrical array with commutated beams, on the other hand, can theoretically provide patterns that are invariant with respect to azimuth scanning with very pure polarizations. This paper summarizes recent measurements of the cylindrical polarimetric phased array radar demonstrator, a system designed to explore the benefits and limitations of a cylindrical array approach to these future weather radar applications.", "title": "" } ]
scidocsrr
82bfbe3a3611faae7c872c8df0e635f8
A hierarchical edge cloud architecture for mobile computing
[ { "docid": "12fe1e2edd640b55a769e5c881822aa6", "text": "In this paper we introduce a runtime system to allow unmodified multi-threaded applications to use multiple machines. The system allows threads to migrate freely between machines depending on the workload. Our prototype, COMET (Code Offload by Migrating Execution Transparently), is a realization of this design built on top of the Dalvik Virtual Machine. COMET leverages the underlying memory model of our runtime to implement distributed shared memory (DSM) with as few interactions between machines as possible. Making use of a new VM-synchronization primitive, COMET imposes little restriction on when migration can occur. Additionally, enough information is maintained so one machine may resume computation after a network failure. We target our efforts towards augmenting smartphones or tablets with machines available in the network. We demonstrate the effectiveness of COMET on several real applications available on Google Play. These applications include image editors, turn-based games, a trip planner, and math tools. Utilizing a server-class machine, COMET can offer significant speed-ups on these real applications when run on a modern smartphone. With WiFi and 3G networks, we observe geometric mean speed-ups of 2.88X and 1.27X relative to the Dalvik interpreter across the set of applications with speed-ups as high as 15X on some applications.", "title": "" }, { "docid": "6a91b8c6fd3d358ce5be3f0096a43549", "text": "Cloud computing services are becoming ubiquitous, and are starting to serve as the primary source of computing power for both enterprises and personal computing applications. We consider a stochastic model of a cloud computing cluster, where jobs arrive according to a stochastic process and request virtual machines (VMs), which are specified in terms of resources such as CPU, memory and storage space. While there are many design issues associated with such systems, here we focus only on resource allocation problems, such as the design of algorithms for load balancing among servers, and algorithms for scheduling VM configurations. Given our model of a cloud, we first define its capacity, i.e., the maximum rates at which jobs can be processed in such a system. Then, we show that the widely-used Best-Fit scheduling algorithm is not throughput-optimal, and present alternatives which achieve any arbitrary fraction of the capacity region of the cloud. We then study the delay performance of these alternative algorithms through simulations.", "title": "" } ]
[ { "docid": "5d1aade84adfefda28203f7d82f80d0f", "text": "Soft robotics is an emerging field that focuses on the development and application of soft robots. Due to their highly deformable features, it is difficult to model and control such robots. In this paper, we proposed a simplified model to simulate a fluidic elastomer actuator (FEA). The model consists of a series of line segments connected by viscoelastic joints. Pneumatic inputs were modeled as active torques acting at each joint. The Lagrangian dynamic equations were derived. An optimization-based method was proposed to identify the unknown model parameters. Experiments were conducted using three-dimensional (3D) printed FEAs. Calibration results of a single FEA showed the repeatability of the pressure actuated bending angles, and the proposed dynamic model can precisely reproduce the deformation behavior of the FEA. Grasping experiments showed that the proposed dynamic model can predict the grasping forces, which was validated by a separate experiment of grasping force measurement. The presented methods can be extended to model other soft robots.", "title": "" }, { "docid": "68aad74ce40e9f44997a078df5e54a23", "text": "A wideband circularly polarized (CP) rectangular dielectric resonator antenna (DRA) based on the concept of traveling-wave excitation is presented. A lumped resistively loaded monofilar-spiral-slot is used to excite the rectangular DRA. The proposed DRA is theoretically and experimentally analyzed, including design concept, design guideline, parameter study, and experimental verification. It is found that by using such an excitation, a wide 3-dB axial-ratio (AR) bandwidth of 18.7% can be achieved.", "title": "" }, { "docid": "cf3048e512d5d4eab62eef01627fe8d7", "text": "In this paper, we present simulation results and analysis of 3-D magnetic flux leakage (MFL) signals due to the occurrence of a surface-breaking defect in a ferromagnetic specimen. The simulations and analysis are based on a magnetic dipole-based analytical model, presented in a previous paper. We exploit the tractability of the model and its amenability to simulation to analyze properties of the model as well as of the MFL fields it predicts, such as scale-invariance, effect of lift-off and defect shape, the utility of the tangential MFL component, and the sensitivity of MFL fields to parameters. The simulations and analysis show that the tangential MFL component is indeed a potentially critical part of MFL testing. It is also shown that the MFL field of a defect varies drastically with lift-off. We also exploit the model to develop a lift-off compensation technique which enables the prediction of the size of the defect for a range of lift-off values.", "title": "" }, { "docid": "cdf4f5074ec86db3948df3497f9896ec", "text": "This paper investigates algorithms to automatically adapt the learning rate of neural networks (NNs). Starting with stochastic gradient descent, a large variety of learning methods has been proposed for the NN setting. However, these methods are usually sensitive to the initial learning rate which has to be chosen by the experimenter. We investigate several features and show how an adaptive controller can adjust the learning rate without prior knowledge of the learning problem at hand. Introduction Due to the recent successes of Neural Networks for tasks such as image classification (Krizhevsky, Sutskever, and Hinton 2012) and speech recognition (Hinton et al. 
2012), the underlying gradient descent methods used for training have gained a renewed interest by the research community. Adding to the well known stochastic gradient descent and RMSprop methods (Tieleman and Hinton 2012), several new gradient based methods such as Adagrad (Duchi, Hazan, and Singer 2011) or Adadelta (Zeiler 2012) have been proposed. However, most of the proposed methods rely heavily on a good choice of an initial learning rate. Compounding this issue is the fact that the range of good learning rates for one problem is often small compared to the range of good learning rates across different problems, i.e., even an experienced experimenter often has to manually search for good problem-specific learning rates. A tempting alternative to manually searching for a good learning rate would be to learn a control policy that automatically adjusts the learning rate without further intervention using, for example, reinforcement learning techniques (Sutton and Barto 1998). Unfortunately, the success of learning such a controller from data is likely to depend heavily on the features made available to the learning algorithm. A wide array of reinforcement learning literature has shown the importance of good features in tasks ranging from Tetris (Thiery and Scherrer 2009) to haptile object identification (Kroemer, Lampert, and Peters 2011). Thus, the first step towards applying RL methods to control learning rates is to find good features. Subsequently, the main contributions of this paper are: • Identifying informative features for the automatic control of the learning rate. • Proposing a learning setup for a controller that automatically adapts the step size of NN training algorithms. • Showing that the resulting controller generalizes across different tasks and architectures. Together, these contributions enable robust and efficient training of NNs without the need of manual step size tuning. Method The goal of this paper is to develop an adaptive controller for the learning rate used in training algorithms such as Stochastic Gradient Descent (SGD) or RMSprop (Tieleman and Hinton 2012). We start with a general statement of the problem we are aiming to solve. Problem Statement We are interested in finding the minimizer ω∗ = arg min_ω F(X; ω) (Eq. 1), where in our case ω represents the weight vector of the NN and X = {x_1, . . . , x_N} is the set of N training examples (e.g., images and labels). The function F(·) sums over the function values induced by the individual inputs such that", "title": "" }, { "docid": "d1b5b74db9e1a9fef2f91d3917940d94", "text": "Relational databases have been providing storage for several decades now. However, for today's interactive web and mobile applications, the importance of flexibility and scalability in the data model cannot be overstated. The term NoSQL broadly covers all non-relational databases that provide a schema-less and scalable model. NoSQL databases, which are also termed Internet-age databases, are currently being used by Google, Amazon, Facebook and many other major organizations operating in the era of Web 2.0. Different classes of NoSQL databases, namely key-value pair, document, column-oriented and graph databases, enable programmers to model the data closer to the format as used in their application. 
In this paper, data modeling and query syntax of relational and some classes of NoSQL databases have been explained with the help of an case study of a news website like Slashdot.", "title": "" }, { "docid": "ed3044439e2ca81cbe57a6d4d2e7707a", "text": "ness. Second, every attribute specified for a concept is shared by more than one instance of the concept. Thus, the information contained in a concept is an abstraction across instances of the concept. The overlapping networks of shared attributes thus formed hold conceptual categories together. In this respect, the family resemblance view is like the classical view: Both maintain that the instances of a concept cohere because they are similar to one another by virtueof sharing certain attributes. Weighted attributes. An object that shares attributes with many members of a category bears greater family resemblance to that category than an object that shares attributes with few members. This suggests that attributes that are shared by many members confer a greater degree of family resemblance than those that are shared by a few. A third characteristic of the family resemblance view is that it assumes that concept attributes are \"weighted\" according to their relevance for conferring family resemblance to the category. In general, that relevance is taken to be a function of the number of category instances (and perhaps noninstances) that share the attribute. Presumably, if the combined relevance weights of the attributes of some novel object exceed a certain level (what might be called the membership threshold or criterion), that object will be 2 Here and throughout, I use relevance to include both relevance and salience as used by Ortony, Vondruska, Foss, and Jones (1985). 504 LLOYD K. KOMATSU considered an instance of the category (Medin, 1983; Rosch & Mervis, 1975; E. E. Smith & Medin, 1981). The greater the degree to which the combined relevance weights exceed the threshold, the more typical an instance it is (see also Shafir, Smith, & Osherson, 1990). By this measure, an object must have a large number of heavily weighted attributes to be judged highly typical of a given category. Because such heavily weighted attributes are probably shared by many category instances and relatively few noninstances, an object highly typical of a category is likely to lie near the central tendencies of the category (see Retention of Central Tendencies, below), and is not likely to be typical of or lie near the central tendencies of any other category. Independence and additive combination of weights: Linear separability. Attribute weights can be combined using a variety of methods (cf. Medin & Schaffer, 1978; Reed, 1972). In the method typically associated with the family resemblance view (adapted from Tversky's, 1977, contrast model of similarity), attribute weights are assumed to be independent and combined by adding (Rosch & Mervis, 1975; E. E. Smith & Medin, 1981). This leads to a fourth characteristic of the (modal) family resemblance view: It predicts that instances and noninstances of a concept can be perfectly partitioned by a linear discriminant function (i.e., if one was to plot a set of objects by the combined weights of their attributes, all instances would fall to one side of a line, and all noninstances would fall on the other side; Medin & SchafFer, 1978; Medin & Schwanenflugel, 1981; Nakamura, 1985; Wattenmaker, Dewey, Murphy, & Medin, 1986). 
Thus the (modal) family resemblance view predicts that concepts are \"linearly separable.\" Retention of central tendencies. The phrase family resemblance is used in two ways. In the sense that I have focused on until now, the family resemblance of an object to a category increases as the similarity between that object and all other members of the category increases and the similarity between that object and all nonmembers of the category decreases. This use of family resemblance (probably the use more reflective of Wittgenstein's, 1953, original ideas) has an extensional emphasis: It describes a relationship among objects and makes no assumptions about how the category of objects is represented mentally (i.e., about the intension of the word or what I have been calling the concept). In the second sense, family resemblance increases as the similarity between an object and the central tendencies of the category increases (Hampton, 1979). This use of family resemblance has an intentional emphasis: It describes a relationship between objects and a mental representation (of the central tendencies of a category). Although these two ways of thinking about family resemblance, average similarity to all instances and similarity to a central tendency, are different (cf. Reed, 1972), Barsalou (1985, 1987) points out that they typically yield roughly the same outcome, much as the average difference between a number and a set of other numbers is roughly the same as the difference between that number and the average of that set of other numbers. (For example, consider the number 2 and the set of numbers 3, 5, and 8. The average difference between 2 and 3, 5, and 8 is 3.33, and the difference between 2 and the average of 3,5, and 8 is 3.33.) Barsalou argues that although for most purposes the two ways of thinking about family resemblance are equivalent (one of the reasons the exemplar and family resemblance views are often difficult to distinguish empirically; see below), computation in terms of central tendencies may be more plausible psychologically (because fewer comparisons are involved in comparing an object with the central tendencies of a concept than with every instance and noninstance of the concept; see also Barresi, Robbins, & Shain, 1975). This suggests a fifth characteristic of the family resemblance view: A concept provides a summary of a category in terms of the central tendencies of the members of that category rather than in terms of the representations of individual instances. Economy, Informativeness, Coherence, and Naturalness Both the classical and the family resemblance views explain conceptual coherence in terms of the attributes shared by the members of a category (i.e., the similarity among the instances of a concept). The critical difference between the two views lies in the constraints placed on the attributes shared. In the classical view, all instances are similar in that they share a set of necessary and sufficient attributes (i.e., the definition). The family resemblance view relaxes this constraint and requires only that every attribute specified by the concept be shared by more than one instance. Although this requirement confers a certain amount of economy to the family resemblance view (every piece of information applies to several instances), removing the definitional constraint allows family resemblance representations to include nondefinitional information. 
In particular, concepts are likely to specify information beyond that true of all instances or beyond that strictly needed to understand what Medin and Smith (1984) call linguistic meaning (the different kinds of relations that hold among words such as synonymy, antynomy, hyponomy, anomaly, and contradiction as usually understood; cf. Katz, 1972; Katz & Fodor, 1963) to include information about how the objects referred to may relate to one another and to the world. It is not clear whether this loss in economy results in a concomitant increase in informativeness: Although in the family resemblance view more information may be associated with a concept than in the classical, not all of that information applies to every instance of the concept. In the family resemblance view, attributes can be inferred to inhere in different instances only with some level of probability. Thus the informativeness of the individual attributes specified is somewhat compromised. With no a priori constraint on the nature (or level) of similar3 There are several different ways to approach the representation of the central tendencies of a category. E. E. Smith and Medin (1981), for example, identified three approaches to what they called the probablistic view: the featural, the dimensional, and the holistic. E. E. Smith and Medin provided ample evidence for rejecting the holistic approach on both empirical and theoretical grounds (see also McNamara & Miller, 1989). They also argued that the similarities between the featural and dimensional approaches suggest that they might profitably be combined into a single position that could be called the \"component\" approach (E. E. Smith & Medin, 1981, p. 164) and concluded that the component approach is the only viable variant. RECENT VIEWS OF CONCEPTS 505 ity shared by the instances of a concept, the family resemblance view has difficulty specifying which similarities count and which do not when it comes to setting the boundaries between concepts. A Great Dane and a Bedlington terrier appear to share few similarities, but they share enough so that both are dogs. But a Bedlington terrier seems to share as many similarities with a lamb as it does with a Great Dane. Why is a Bedlington terrier a dog and not a lamb? Presumably, the family resemblance view would predict that the summed weights of Bedlington terrier attributes lead to its being more similar to other dogs than to lambs and result in its being categorized as a dog rather than a lamb. But to determine those weights, we need to know how common those attributes are among dogs and lambs. This implies that the categorization of Bedlington terriers must be preceded by the partitioning of the world into dog and lamb. Without that prior partitioning, the dog versus lamb weights of Bedlington terrier attributes cannot be determined. To answer the question of what privileges the categorization of a Bedlington terrier with the Great Dane rather than the lamb requires answering what privileges the partitioning of the world into dogs and lambs. Rosch (Rosch, 1978; Rosch & Mervis, 1975) argues that certain partitionings of the world (including, presumably, into dogs and lambs) are privileged, more immediate or direct, and arise naturally from the interaction of our perceptual apparatus and the environment. Thus whereas the classical view", "title": "" }, { "docid": "55b958f0fca9b626b42648c80f72952e", "text": "K etamine has a special position among anesthetic drugs. 
It was introduced into clinical practice >30 yr ago with the hope that it would function as a “monoanesthetic” drug: inducing analgesia, amnesia, loss of consciousness, and immobility. This dream was not fulfilled because significant side effects were soon reported. With the introduction of other IV anesthetic drugs, ketamine’s role diminished rapidly. However, it is still used clinically for indications such as induction of anesthesia in patients in hemodynamic shock; induction of anesthesia in patients with active asthmatic disease; IM sedation of uncooperative patients, particularly children; supplementation of incomplete regional or local anesthesia; sedation in the intensive care setting; and short, painful procedures, such as dressing changes in burn patients. However, recent insights into ketamine’s anesthetic mechanism of action and its neuronal effects, as well as a reevaluation of its profound analgesic properties, offer the potential of expanding this range of indications. In addition, studies with the S(+) ketamine isomer suggest that its use may be associated with fewer side effects than the racemic mixture. In this article, we review the mechanism of action of ketamine anesthesia, the pharmacologic properties of its stereoisomers, and the potential uses of ketamine for preemptive analgesia and neuroprotection. Several aspects discussed herein have been reviewed previously (l-4).", "title": "" }, { "docid": "9d3c3a3fa17f47da408be1e24d2121cc", "text": "In this letter, compact substrate integrated waveguide (SIW) power dividers are presented. Both equal and unequal power divisions are considered. A quarter-wavelength long wedge shape SIW structure is used for the power division. Direct coaxial feed is used for the input port and SIW-tomicrostrip transitions are used for the output ports. Four-way equal, unequal and an eight-way equal division power dividers are presented. The four-way and the eight-way power dividers provide -10 dB input matching bandwidth of 39.3% and 13%, respectively, at the design frequency f0 = 2.4 GHz. The main advantage of the power dividers is their compact sizes. Including the microstrip to SIW transitions, size is reduced by at least 46% compared to other reported miniaturized SIW power dividers.", "title": "" }, { "docid": "d08a0be6ede7394b1e836a4a1a3e7ab6", "text": "In this talk, I will present a novel academic search and mining system, AMiner, the second generation of the ArnetMiner system. Different from traditional academic search systems that focus on document (paper) search, AMiner aims to provide a systematic modeling approach for researchers (authors), ultimately to gain a deep understanding of the big (heterogeneous) network formed by authors, papers they have published, and venues they published those papers. The system extracts researchers' profiles automatically from the Web and integrates the researcher profiles with publication papers after name disambiguation. For now, the system has collected a big scholar data with more than 130,000,000 researcher profiles and 100,000,000 papers from multiple publication databases. We also developed an approach named COSNET to connect AMiner with several professional social networks such as LinkedIn and VideoLectures, which significantly enriches the metadata of the scholarly data. 
Based on the integrated big scholar data, we devise a unified topic modeling approach for modeling the different entities (authors, papers, venues) simultaneously and provide a topic-level expertise search by leveraging the modeling results. In addition, AMiner offers a set of researcher-centered functions including social influence analysis, influence visualization, collaboration recommendation, relationship mining, similarity analysis and community evolution. The system has been put into operation since 2006 and has attracted more than 7,000,000 independent IP accesses from over 200 countries/regions.", "title": "" }, { "docid": "b9634528b2a9eaf7b4d128a261a5e789", "text": "Students choose to use flashcard applications available on the Internet to help memorize word-meaning pairs. This is helpful for tests such as GRE, TOEFL or IELTS, which emphasize on verbal skills. However, monotonous nature of flashcard applications can be diminished with the help of Cognitive Science through Testing Effect. Experimental evidences have shown that memory tests are an important tool for long term retention (Roediger and Karpicke, 2006). Based on these evidences, we developed a novel flashcard application called “V for Vocab” that implements short answer based tests for learning new words. Furthermore, we aid this by implementing our short answer grading algorithm which automatically scores the user’s answer. The algorithm makes use of an alternate thesaurus instead of traditional Wordnet and delivers state-of-theart performance on popular word similarity datasets. We also look to lay the foundation for analysis based on implicit data collected from our application.", "title": "" }, { "docid": "29f820ea99905ad1ee58eb9d534c89ab", "text": "Basic results in the rigorous theory of weighted dynamical zeta functions or dynamically defined generalized Fredholm determinants are presented. Analytic properties of the zeta functions or determinants are related to statistical properties of the dynamics via spectral properties of dynamical transfer operators, acting on Banach spaces of observables.", "title": "" }, { "docid": "b6b49120c2567f844c657cc267dda14f", "text": "Hadoop is a popular system for storing, managing,and processing large volumes of data, but it has bare-bonesinternal support for metadata, as metadata is a bottleneck andless means more scalability. The result is a scalable platform withrudimentary access control that is neither user- nor developer-friendly. Also, metadata services that are built on Hadoop, suchas SQL-on-Hadoop, access control, data provenance, and datagovernance are necessarily implemented as eventually consistentservices, resulting in increased development effort and morebrittle software. In this paper, we present a new project-based multi-tenancymodel for Hadoop, built on a new distribution of Hadoopthat provides a distributed database backend for the HadoopDistributed Filesystem's (HDFS) metadata layer. We extendHadoop's metadata model to introduce projects, datasets, andproject-users as new core concepts that enable a user-friendly, UI-driven Hadoop experience. As our metadata service is backed bya transactional database, developers can easily extend metadataby adding new tables and ensure the strong consistency ofextended metadata using both transactions and foreign keys.", "title": "" }, { "docid": "9a14ce0f7c53978152fd1baeccb13cbc", "text": "Current logo retrieval research focuses on closed set scenarios. 
We argue that the logo domain is too large for this strategy and requires an open set approach. To foster research in this direction, a large-scale logo dataset, called Logos in the Wild, is collected and released to the public. A typical open set logo retrieval application is, for example, assessing the effectiveness of advertisement in sports event broadcasts. Given a query sample in shape of a logo image, the task is to find all further occurrences of this logo in a set of images or videos. Currently, common logo retrieval approaches are unsuitable for this task because of their closed world assumption. Thus, an open set logo retrieval method is proposed in this work which allows searching for previously unseen logos by a single query sample. A two stage concept with separate logo detection and comparison is proposed where both modules are based on task specific Convolutional Neural Networks (CNNs). If trained with the Logos in the Wild data, significant performance improvements are observed, especially compared with state-of-the-art closed set approaches.", "title": "" }, { "docid": "5706b4955db81d04398fd6a64eb70c7c", "text": "The number of applications (or apps) in the Android Market exceeded 450,000 in 2012 with more than 11 billion total downloads. The necessity to fix bugs and add new features leads to frequent app updates. For each update, a full new version of the app is downloaded to the user's smart phone; this generates significant traffic in the network. We propose to use delta encoding algorithms and to download only the difference between two versions of an app. We implement delta encoding for Android using the bsdiff and bspatch tools and evaluate its performance. We show that app update traffic can be reduced by about 50%, this can lead to significant cost and energy savings.", "title": "" }, { "docid": "134d85937dc13e4174e2ddb99197f924", "text": "A compact hybrid-integrated 100 Gb/s (4 lane × 25.78125 Gb/s) transmitter optical sub-assembly (TOSA) has been developed for a 100 Gb/s transceiver for 40-km transmission over a single-mode fiber. The TOSA has a simple configuration in which four electro-absorption modulator-integrated distributed feedback (EADFB) lasers are directly attached to the input waveguide end-face of a silica-based arrayed waveguide grating (AWG) multiplexer without bulk lenses. To achieve a high optical butt coupling efficiency between the EADFB lasers and the AWG multiplexer, we integrated a laterally tapered spot-size converter (SSC) for the EADFB laser and employed a waveguide with a high refractive index difference of 2.0% for the AWG multiplexer. By optimizing the laterally tapered SSC structure, we achieved a butt-coupling loss of less than 3 dB, which is an improvement of around 2 dB compared with a laser without an SSC structure. We also developed an ultracompact AWG multiplexer, which was 6.7 mm × 3.5 mm in size with an insertion loss of less than 1.9 dB. We achieved this by using a Mach-Zehnder interferometer-synchronized configuration to obtain a low loss and wide flat-top transmission filter spectra. The TOSA body size was 19.9 mm (L) × 6.0 mm (W) × 5.8 mm (H). Error-free operation was demonstrated for a 40-km transmission when all the lanes were driven simultaneously with a low EA modulator driving voltage of 1.5 V at an operating temperature of 55 °C.", "title": "" }, { "docid": "be5176e14475ffcdc00aee371ac3ebec", "text": "Multiphase interior permanent magnet (IPM) motors are very good candidates for hybrid electric vehicle applications. 
High torque pulsation is the major disadvantage of most IPM motor configurations. A five-phase IPM motor with low torque pulsation is discussed. The mathematical model of the five-phase motor is given. A control strategy that provides fault tolerance to five-phase permanent-magnet motors is introduced. In this scheme, the five-phase system continues operating safely under loss of up to two phases without any additional hardware connections. This feature is very important in traction and propulsion applications where high reliability is of major importance. The system that is introduced in this paper will guarantee high efficiency, high performance, and high reliability, which are required for automotive applications A prototype four-pole IPM motor with 15 stator slots has been built and is used for experimental verification.", "title": "" }, { "docid": "40252c2047c227fbbeee4d492bee9bc6", "text": "A planar integrated multi-way broadband SIW power divider is proposed. It can be combined by the fundamental modules of T-type or Y-type two-way power dividers and an SIW bend directly. A sixteen way SIW power divider prototype was designed, fabricated and measured. The whole structure is made by various metallic-vias on the same substrate. Hence, it can be easily fabricated and conveniently integrated into microwave and millimeter-wave integrated circuits for mass production with low cost and small size.", "title": "" }, { "docid": "6244a3389b89785427d3ade917983269", "text": "The increased popularity and ubiquitous availability of online social networks and globalised Internet access have affected the way in which people share content. The information that users willingly disclose on these platforms can be used for various purposes, from building consumer models for advertising, to inferring personal, potentially invasive, information. In this work, we use Twitter, Instagram and Foursquare data to convey the idea that the content shared by users, especially when aggregated across platforms, can potentially disclose more information than was originally intended. We perform two case studies: First, we perform user deanonymization by mimicking the scenario of finding the identity of a user making anonymous posts within a group of users. Empirical evaluation on a sample of real-world social network profiles suggests that cross-platform aggregation introduces significant performance gains in user identification. In the second task, we show that it is possible to infer physical location visits of a user on the basis of shared Twitter and Instagram content. We present an informativeness scoring function which estimates the relevance and novelty of a shared piece of information with respect to an inference task. This measure is validated using an active learning framework which chooses the most informative content at each given point in time. Based on a large-scale data sample, we show that by doing this, we can attain an improved inference performance. In some cases this performance exceeds even the use of the user’s full timeline.", "title": "" }, { "docid": "08084de7a702b87bd8ffc1d36dbf67ea", "text": "In recent years, the mobile data traffic is increasing and many more frequency bands have been employed in cellular handsets. A simple π type tunable band elimination filter (BEF) with switching function has been developed using a wideband tunable surface acoustic wave (SAW) resonator circuit. The frequency of BEF is tuned approximately 31% by variable capacitors without spurious. 
In LTE low band, the arrangement of TX and RX frequencies is to be reversed in Band 13, 14 and 20 compared with the other bands. The steep edge slopes of the developed filter can be exchanged according to the resonance condition and switching. With combining the TX and RX tunable BEFs and the small sized broadband circulator, a new tunable duplexer has been fabricated, and its TX-RX isolation is proved to be more than 50dB in LTE low band operations.", "title": "" }, { "docid": "65ecfef85ae09603afddde09a2c65bf4", "text": "We outline a representation for discrete multivariate distributions in terms of interventional potential functions that are globally normalized. This representation can be used to model the effects of interventions, and the independence properties encoded in this model can be represented as a directed graph that allows cycles. In addition to discussing inference and sampling with this representation, we give an exponential family parametrization that allows parameter estimation to be stated as a convex optimization problem; we also give a convex relaxation of the task of simultaneous parameter and structure learning using group `1regularization. The model is evaluated on simulated data and intracellular flow cytometry data.", "title": "" } ]
scidocsrr
4f2fb8061e59c30496282133ffaab027
An overview of vulnerability assessment and penetration testing techniques
[ { "docid": "34461f38c51a270e2f3b0d8703474dfc", "text": "Software vulnerabilities are the root cause of computer security problem. How people can quickly discover vulnerabilities existing in a certain software has always been the focus of information security field. This paper has done research on software vulnerability techniques, including static analysis, Fuzzing, penetration testing. Besides, the authors also take vulnerability discovery models as an example of software vulnerability analysis methods which go hand in hand with vulnerability discovery techniques. The ending part of the paper analyses the advantages and disadvantages of each technique introduced here and talks about the future direction of this field.", "title": "" } ]
[ { "docid": "45a92ab90fabd875a50229921e99dfac", "text": "This paper describes an empirical study of the problems encountered by 32 blind users on the Web. Task-based user evaluations were undertaken on 16 websites, yielding 1383 instances of user problems. The results showed that only 50.4% of the problems encountered by users were covered by Success Criteria in the Web Content Accessibility Guidelines 2.0 (WCAG 2.0). For user problems that were covered by WCAG 2.0, 16.7% of websites implemented techniques recommended in WCAG 2.0 but the techniques did not solve the problems. These results show that few developers are implementing the current version of WCAG, and even when the guidelines are implemented on websites there is little indication that people with disabilities will encounter fewer problems. The paper closes by discussing the implications of this study for future research and practice. In particular, it discusses the need to move away from a problem-based approach towards a design principle approach for web accessibility.", "title": "" }, { "docid": "ed7832f6fbb1777ab3139cc8b5dd2d28", "text": "Tree ensemble models such as random forests and boosted trees are among the most widely used and practically successful predictive models in applied machine learning and business analytics. Although such models have been used to make predictions based on exogenous, uncontrollable independent variables, they are increasingly being used to make predictions where the independent variables are controllable and are also decision variables. In this paper, we study the problem of tree ensemble optimization: given a tree ensemble that predicts some dependent variable using controllable independent variables, how should we set these variables so as to maximize the predicted value? We formulate the problem as a mixed-integer optimization problem. We theoretically examine the strength of our formulation, provide a hierarchy of approximate formulations with bounds on approximation quality and exploit the structure of the problem to develop two large-scale solution methods, one based on Benders decomposition and one based on iteratively generating tree split constraints. We test our methodology on real data sets, including two case studies in drug design and customized pricing, and show that our methodology can efficiently solve large-scale instances to near or full optimality, and outperforms solutions obtained by heuristic approaches. In our drug design case, we show how our approach can identify compounds that efficiently trade-off predicted performance and novelty with respect to existing, known compounds. In our customized pricing case, we show how our approach can efficiently determine optimal store-level prices under a random forest model that delivers excellent predictive accuracy.", "title": "" }, { "docid": "d89d80791ac8157d054652e5f1292ebb", "text": "The Great Gatsby Curve, the observation that for OECD countries, greater crosssectional income inequality is associated with lower mobility, has become a prominent part of scholarly and policy discussions because of its implications for the relationship between inequality of outcomes and inequality of opportunities. We explore this relationship by focusing on evidence and interpretation of an intertemporal Gatsby Curve for the United States. 
We consider inequality/mobility relationships that are derived from nonlinearities in the transmission process of income from parents to children and the relationship that is derived from the effects of inequality of socioeconomic segregation, which then affects children. Empirical evidence for the mechanisms we identify is strong. We find modest reduced form evidence and structural evidence of an intertemporal Gatsby Curve for the US as mediated by social influences.", "title": "" }, { "docid": "2ecd815af00b9961259fa9b2a9185483", "text": "This paper describes the current development status of a mobile robot designed to inspect the outer surface of large oil ship hulls and floating production storage and offloading platforms. These vessels require a detailed inspection program, using several nondestructive testing techniques. A robotic crawler designed to perform such inspections is presented here. Locomotion over the hull is provided through magnetic tracks, and the system is controlled by two networked PCs and a set of custom hardware devices to drive motors, video cameras, ultrasound, inertial platform, and other devices. Navigation algorithm uses an extended-Kalman-filter (EKF) sensor-fusion formulation, integrating odometry and inertial sensors. It was shown that the inertial navigation errors can be decreased by selecting appropriate Q and R matrices in the EKF formulation.", "title": "" }, { "docid": "d7f92d2503d02a76c635c4ab5bce1f1e", "text": "A fundamental feature of learning in animals is the “ability to forget” that allows an organism to perceive, model, and make decisions from disparate streams of information and adapt to changing environments. Against this backdrop, we present a novel unsupervised learning mechanism adaptive synaptic plasticity (ASP) for improved recognition with spiking neural networks (SNNs) for real time online learning in a dynamic environment. We incorporate an adaptive weight decay mechanism with the traditional spike timing dependent plasticity (STDP) learning to model adaptivity in SNNs. The leak rate of the synaptic weights is modulated based on the temporal correlation between the spiking patterns of the pre- and post-synaptic neurons. This mechanism helps in gradual forgetting of insignificant data while retaining significant, yet old, information. ASP, thus, maintains a balance between forgetting and immediate learning to construct a stable-plastic self-adaptive SNN for continuously changing inputs. We demonstrate that the proposed learning methodology addresses catastrophic forgetting, while yielding significantly improved accuracy over the conventional STDP learning method for digit recognition applications.
In addition, we observe that the proposed learning model automatically encodes selective attention toward relevant features in the input data, while eliminating the influence of background noise (or denoising), further improving the robustness of the ASP learning.", "title": "" }, { "docid": "82ca6a400bf287dc287df9fa751ddac2", "text": "Research on ontology is becoming increasingly widespread in the computer science community, and its importance is being recognized in a multiplicity of research fields and application areas, including knowledge engineering, database design and integration, information retrieval and extraction. We shall use the generic term “information systems”, in its broadest sense, to collectively refer to these application perspectives. We argue in this paper that so-called ontologies present their own methodological and architectural peculiarities: on the methodological side, their main peculiarity is the adoption of a highly interdisciplinary approach, while on the architectural side the most interesting aspect is the centrality of the role they can play in an information system, leading to the perspective of ontology-driven information systems.", "title": "" }, { "docid": "f3bed3a3234fd61a168c9653a82b2f04", "text": "Digital libraries such as the NASA Astrophysics Data System (Kurtz et al. 2004) permit the easy accumulation of a new type of bibliometric measure, the number of electronic accesses (\"reads\") of individual articles. We explore various aspects of this new measure. We examine the obsolescence function as measured by actual reads, and show that it can be well fit by the sum of four exponentials with very different time constants. We compare the obsolescence function as measured by readership with the obsolescence function as measured by citations. We find that the citation function is proportional to the sum of two of the components of the readership function. This proves that the normative theory of citation is true in the mean. We further examine in detail the similarities and differences between the citation rate, the readership rate and the total citations for individual articles, and discuss some of the causes. Using the number of reads as a bibliometric measure for individuals, we introduce the read-cite diagram to provide a two-dimensional view of an individual's scientific productivity. We develop a simple model to account for an individual's reads and cites and use it to show that the position of a person in the read-cite diagram is a function of age, innate productivity, and work history. We show the age biases of both reads and cites, and develop two new bibliometric measures which have substantially less age bias than citations: SumProd, a weighted sum of total citations and the readership rate, intended to show the total productivity of an individual; and Read10, the readership rate for papers published in the last ten years, intended to show an individual's current productivity. We also discuss the effect of normalization (dividing by the number of authors on a paper) on these statistics.
We apply SumProd and Read10 using new, non-parametric techniques to rank and compare different astronomical research organizations. Subject headings: digital libraries; bibliometrics; sociology of science; information retrieval", "title": "" }, { "docid": "cec97a91937daebec592085319e0f01e", "text": "Key features of the two dominating standards for the unlicensed bands, IEEE 802.11 and Bluetooth Wireless Technology, are combined to obtain a physical layer (PHY) with several desirable features for the internet of things (IoT). The proposed PHY, which is referred to as Narrow-band WiFi (NB-WiFi), can be supported by an OFDM transceiver available in an IEEE 802.11 access point (AP). In addition, NB-WiFi supports concurrent use of low data rate IoT applications and high data rate broadband using IEEE 802.11ax technology, based on a single IFFT/FFT in the AP. In the sensor node, Bluetooth Low Energy (BLE) hardware can be reused, making it suitable for dual mode implementation of BLE and NB-WiFi. The performance of the proposed PHY is simulated for an AWGN channel, and it achieves about 10 dB improved sensitivity compared to a typical BLE receiver, due to the lower data rate.", "title": "" }, { "docid": "a1757ee58eb48598d3cd6e257b53cd10", "text": "This paper examines the issues of puzzle design in the context of collaborative gaming. The qualitative research approach involves both the conceptual analysis of key terminology and a case study of a collaborative game called eScape. The case study is a design experiment, involving both the process of designing a game environment and an empirical study, where data is collected using multiple methods. The findings and conclusions emerging from the analysis provide insight into the area of multiplayer puzzle design. The analysis and reflections answer questions on how to create meaningful puzzles requiring collaboration and how far game developers can go with collaboration design. The multiplayer puzzle design introduces a new challenge for game designers. Group dynamics, social roles and an increased level of interaction require changes in the traditional conceptual understanding of a single-player puzzle.", "title": "" }, { "docid": "37ac562b07d6d191eabbec94ea344e82", "text": "License plate recognition has been widely studied, and the advance in image capture technology helps enhance or create new methods to achieve this objective. In this work, a method is presented for real-time detection and segmentation of car license plates based on image analysis and processing techniques. The results show that the computational cost and accuracy rate of the proposed approach are acceptable for real-time applications, with an execution time under 1 second. The proposed method was validated using two datasets (A and B). It obtained over 92% detection success for dataset A, 88% digit segmentation success for datasets A and B, and 95% digit classification accuracy for dataset B.", "title": "" }, { "docid": "8f4ce2d2ec650a3923d27c3188f30f38", "text": "Synthetic aperture radar (SAR) interferometry is a modern, efficient technique for reconstructing the height profile of the observed scene. However, apart from the presence of critical nonlinear inversion steps, particularly crucial in abrupt topography scenarios, it does not allow one to separate different scattering mechanisms in the elevation (height) direction within the ground pixel.
Overlay of scattering at different elevations in the same azimuth-range resolution cell can be due either to the penetration of the radiation below the surface or to perspective ambiguities caused by the side-looking geometry. Multibaseline three-dimensional (3-D) SAR focusing allows overcoming such a limitation and has thus raised great interest in the recent research. First results with real data have been only obtained in the laboratory and with airborne systems, or with limited time-span and spatial-coverage spaceborne data. This work presents a novel approach for the tomographic processing of European Remote Sensing satellite (ERS) real data for extended scenes and long time span. Besides facing problems common to the airborne case, such as the nonuniformly spaced passes, this processing requires tackling additional difficulties specific to the spaceborne case, in particular a space-varying phase calibration of the data due to atmospheric variations and possible scene deformations occurring for years-long temporal spans. First results are presented that confirm the capability of ERS multipass tomography to resolve multiple targets within the same azimuth-range cell and to map the 3-D scattering properties of the illuminated scene.", "title": "" }, { "docid": "3e0dd3cf428074f21aaf202342003554", "text": "Despite significant recent work, purely unsupervised techniques for part-of-speech (POS) tagging have not achieved useful accuracies required by many language processing tasks. Use of parallel text between resource-rich and resource-poor languages is one source of weak supervision that significantly improves accuracy. However, parallel text is not always available and techniques for using it require multiple complex algorithmic steps. In this paper we show that we can build POS-taggers exceeding state-of-the-art bilingual methods by using simple hidden Markov models and a freely available and naturally growing resource, the Wiktionary. Across eight languages for which we have labeled data to evaluate results, we achieve accuracy that significantly exceeds best unsupervised and parallel text methods. We achieve highest accuracy reported for several languages and show that our approach yields better out-of-domain taggers than those trained using fully supervised Penn Treebank.", "title": "" }, { "docid": "a1774a08ffefd28785fbf3a8f4fc8830", "text": "Bounds are given for the empirical and expected Rademacher complexity of classes of linear transformations from a Hilbert space H to a finite dimensional space. The results imply generalization guarantees for graph regularization and multi-task subspace learning. 1 Introduction Rademacher averages have been introduced to learning theory as an efficient complexity measure for function classes, motivated by tight, sample or distribution dependent generalization bounds ([10], [2]). Both the definition of Rademacher complexity and the generalization bounds extend easily from real-valued function classes to function classes with values in R, as they are relevant to multi-task learning ([1], [12]). There has been an increasing interest in multi-task learning which has shown to be very effective in experiments ([7], [1]), and there have been some general studies of its generalisation performance ([4], [5]). For a large collection of tasks there are usually more data available than for a single task and these data may be put to a coherent use by some constraint of ’relatedness’.
A practically interesting case is linear multi-task learning, extending linear large margin classifiers to vector valued large-margin classifiers. Different types of constraints have been proposed: Evgeniou et al ([8], [9]) propose graph regularization, where the vectors defining the classifiers of related tasks have to be near each other. They also show that their scheme can be implemented in the framework of kernel machines. Ando and Zhang [1] on the other hand require the classifiers to be members of a common low dimensional subspace. They also give generalization bounds using Rademacher complexity, but these bounds increase with the dimension of the input space. This paper gives dimension free bounds which apply to both approaches. 1.1 Multi-task generalization and Rademacher complexity Suppose we have m classification tasks, represented by m independent random variables (X^l, Y^l) taking values in X × {-1, 1}, where X^l models the random", "title": "" }, { "docid": "f84011e3b4c8b1e80d4e79dee3ccad53", "text": "What is the future of fashion? Tackling this question from a data-driven vision perspective, we propose to forecast visual style trends before they occur. We introduce the first approach to predict the future popularity of styles discovered from fashion images in an unsupervised manner. Using these styles as a basis, we train a forecasting model to represent their trends over time. The resulting model can hypothesize new mixtures of styles that will become popular in the future, discover style dynamics (trendy vs. classic), and name the key visual attributes that will dominate tomorrow’s fashion. We demonstrate our idea applied to three datasets encapsulating 80,000 fashion products sold across six years on Amazon. Results indicate that fashion forecasting benefits greatly from visual analysis, much more than textual or meta-data cues surrounding products.", "title": "" }, { "docid": "7b496aac963284f3415ac98b3abd8165", "text": "Forecasting is an important data analysis technique that aims to study historical data in order to explore and predict its future values. In fact, to forecast, different methods have been tested and applied from regression to neural network models. In this research, we propose an Elman Recurrent Neural Network (ERNN) to forecast the Mackey-Glass time series elements. Experimental results show that our scheme outperforms other state-of-the-art studies.", "title": "" }, { "docid": "456a246b468feb443e0ed576173d6d46", "text": "Automatic person re-identification (re-id) across camera boundaries is a challenging problem. Approaches have to be robust against many factors which influence the visual appearance of a person but are not relevant to the person's identity. Examples of such factors are pose, camera angles, and lighting conditions. Person attributes are semantic high-level information that is invariant across many such influences and contains information which is often highly relevant to a person's identity. In this work we develop a re-id approach which leverages the information contained in automatically detected attributes. We train an attribute classifier on separate data and include its responses into the training process of our person re-id model which is based on convolutional neural networks (CNNs). This allows us to learn a person representation which contains information complementary to that contained within the attributes. Our approach is able to identify attributes which perform most reliably for re-id and focus on them accordingly.
We demonstrate the performance improvement gained through use of the attribute information on multiple large-scale datasets and report insights into which attributes are most relevant for person re-id.", "title": "" }, { "docid": "6975d0200669923b414f1775c208b91b", "text": "Wireless sensor networks (WSNs) have attracted a lot of interest over the last decade in wireless and mobile computing research community. Applications of WSNs are numerous and growing, which range from indoor deployment scenarios in the home and office to outdoor deployment in adversary’s territory in a tactical battleground. However, due to distributed nature and their deployment in remote areas, these networks are vulnerable to numerous security threats that can adversely affect their performance. This problem is more critical if the network is deployed for some mission-critical applications such as in a tactical battlefield. Random failure of nodes is also very likely in real-life deployment scenarios. Due to resource constraints in the sensor nodes, traditional security mechanisms with large overhead of computation and communication are infeasible in WSNs. Design and implementation of secure WSNs is, therefore, a particularly challenging task. This chapter provides a comprehensive discussion on the state of the art in security technologies for WSNs. It identifies various possible attacks at different layers of the communication protocol stack in a typical WSN and presents their possible countermeasures. A brief discussion on the future direction of research in WSN security is also included.", "title": "" }, { "docid": "f2579b9d625018867f4c1738d046ec7a", "text": "Carpenter syndrome, a rare autosomal recessive disorder characterized by a combination of craniosynostosis, polysyndactyly, obesity, and other congenital malformations, is caused by mutations in RAB23, encoding a member of the Rab-family of small GTPases. In 15 out of 16 families previously reported, the disease was caused by homozygosity for truncating mutations, and currently only a single missense mutation has been identified in a compound heterozygote. Here, we describe a further 8 independent families comprising 10 affected individuals with Carpenter syndrome, who were positive for mutations in RAB23. We report the first homozygous missense mutation and in-frame deletion, highlighting key residues for RAB23 function, as well as the first splice-site mutation. Multi-suture craniosynostosis and polysyndactyly have been present in all patients described to date, and abnormal external genitalia have been universal in boys. High birth weight was not evident in the current group of patients, but further evidence for laterality defects is reported. No genotype-phenotype correlations are apparent. We provide experimental evidence that transcripts encoding truncating mutations are subject to nonsense-mediated decay, and that this plays an important role in the pathogenesis of many RAB23 mutations. These observations refine the phenotypic spectrum of Carpenter syndrome and offer new insights into molecular pathogenesis.", "title": "" }, { "docid": "6194a43f6c355c921e5dee3e3a368696", "text": "Inverse reinforcement learning (IRL) is the problem of inferring the underlying reward function from the expert's behavior data. The difficulty in IRL mainly arises in choosing the best reward function since there are typically an infinite number of reward functions that yield the given behavior data as optimal. 
Another difficulty comes from the noisy behavior data due to sub-optimal experts. We propose a hierarchical Bayesian framework, which subsumes most of the previous IRL algorithms as well as models the sub-optimality of the expert's behavior. Using a number of experiments on a synthetic problem, we demonstrate the effectiveness of our approach including the robustness of our hierarchical Bayesian framework to the sub-optimal expert behavior data. Using a real dataset from taxi GPS traces, we additionally show that our approach predicts the driving behavior with a high accuracy.", "title": "" }, { "docid": "536e45f7130aa40625e3119523d2e1de", "text": "We consider the problem of Simultaneous Localization and Mapping (SLAM) from a Bayesian point of view using the Rao-Blackwellised Particle Filter (RBPF). We focus on the class of indoor mobile robots equipped with only a stereo vision sensor. Our goal is to construct dense metric maps of natural 3D point landmarks for large cyclic environments in the absence of accurate landmark position measurements and reliable motion estimates. Landmark estimates are derived from stereo vision and motion estimates are based on visual odometry. We distinguish between landmarks using the Scale Invariant Feature Transform (SIFT). Our work defers from current popular approaches that rely on reliable motion models derived from odometric hardware and accurate landmark measurements obtained with laser sensors. We present results that show that our model is a successful approach for vision-based SLAM, even in large environments. We validate our approach experimentally, producing the largest and most accurate vision-based map to date, while we identify the areas where future research should focus in order to further increase its accuracy and scalability to significantly larger", "title": "" } ]
scidocsrr
d0798ee169ec1636d3ed71d7c39c8233
Sentiment Classification of Drug Reviews Using a Rule-Based Linguistic Approach
[ { "docid": "650fe1308c081bfde0eea6885d6fa256", "text": "MetaMap is a widely available program providing access to the concepts in the unified medical language system (UMLS) Metathesaurus from biomedical text. This study reports on MetaMap's evolution over more than a decade, concentrating on those features arising out of the research needs of the biomedical informatics community both within and outside of the National Library of Medicine. Such features include the detection of author-defined acronyms/abbreviations, the ability to browse the Metathesaurus for concepts even tenuously related to input text, the detection of negation in situations in which the polarity of predications is important, word sense disambiguation (WSD), and various technical and algorithmic features. Near-term plans for MetaMap development include the incorporation of chemical name recognition and enhanced WSD.", "title": "" } ]
[ { "docid": "0ab14a40df6fe28785262d27a4f5b8ce", "text": "State-of-the-art 3D shape classification and retrieval algorithms, hereinafter referred to as shape analysis, are often based on comparing signatures or descriptors that capture the main geometric and topological properties of 3D objects. None of the existing descriptors, however, achieve best performance on all shape classes. In this article, we explore, for the first time, the usage of covariance matrices of descriptors, instead of the descriptors themselves, in 3D shape analysis. Unlike histogram -based techniques, covariance-based 3D shape analysis enables the fusion and encoding of different types of features and modalities into a compact representation. Covariance matrices, however, are elements of the non-linear manifold of symmetric positive definite (SPD) matrices and thus \\BBL2 metrics are not suitable for their comparison and clustering. In this article, we study geodesic distances on the Riemannian manifold of SPD matrices and use them as metrics for 3D shape matching and recognition. We then: (1) introduce the concepts of bag of covariance (BoC) matrices and spatially-sensitive BoC as a generalization to the Riemannian manifold of SPD matrices of the traditional bag of features framework, and (2) generalize the standard kernel methods for supervised classification of 3D shapes to the space of covariance matrices. We evaluate the performance of the proposed BoC matrices framework and covariance -based kernel methods and demonstrate their superiority compared to their descriptor-based counterparts in various 3D shape matching, retrieval, and classification setups.", "title": "" }, { "docid": "41652e67ee31b3e2d745be43e10a7ca9", "text": "We have recently witnessed the real life demonstration of link-flooding attacks—DDoS attacks that target the core of the Internet that can cause significant damage while remaining undetected. Because these attacks use traffic patterns that are indistinguishable from legitimate TCP-like flows, they can be persistent and cause long-term traffic disruption. Existing DDoS defenses that rely on detecting flow deviations from normal TCP traffic patterns cannot work in this case. Given the low cost of launching such attacks and their indistinguishability, we argue that any countermeasure must fundamentally tackle the root cause of the problem: either force attackers to increase their costs, or barring that, force attack traffic to become distinguishable from legitimate traffic. Our key insight is that to tackle this root cause it is sufficient to perform a rate change test, where we temporarily increase the effective bandwidth of the bottlenecked core link and observe the response. Attacks by cost-sensitive adversaries who try to fully utilize the bots’ upstream bandwidth will be detected since they will be unable to demonstrably increase throughput after bandwidth expansion. Alternatively, adversaries are forced to increase costs by having to mimic legitimate clients’ traffic patterns to avoid detection. We design a software-defined network (SDN) based system called SPIFFY that addresses key practical challenges in turning this high-level idea into a concrete defense mechanism, and provide a practical solution to force a tradeoff between cost vs. detectability for linkflooding attacks. We develop fast traffic-engineering algorithms to achieve effective bandwidth expansion and suggest scalable monitoring algorithms for tracking the change in traffic-source behaviors. 
We demonstrate the effectiveness of SPIFFY using a real SDN testbed and large-scale packet-level and flow-level simulations.", "title": "" }, { "docid": "86318b52b1bdf0dcf64a2d067645237b", "text": "Neurons that fire high-frequency bursts of spikes are found in various sensory systems. Although the functional implications of burst firing might differ from system to system, bursts are often thought to represent a distinct mode of neuronal signalling. The firing of bursts in response to sensory input relies on intrinsic cellular mechanisms that work with feedback from higher centres to control the discharge properties of these cells. Recent work sheds light on the information that is conveyed by bursts about sensory stimuli, on the cellular mechanisms that underlie bursting, and on how feedback can control the firing mode of burst-capable neurons, depending on the behavioural context. These results provide strong evidence that bursts have a distinct function in sensory information transmission.", "title": "" }, { "docid": "0d1ff6f8cfc8022138565116f832db03", "text": "Suppose X is a uniformly distributed n-dimensional binary vector and Y is obtained by passing X through a binary symmetric channel with crossover probability α. A recent conjecture by Courtade and Kumar postulates that I(f(X); Y ) ≤ 1 - h(α) for any Boolean function f. So far, the best known upper bound was essentially I(f(X); Y ) ≤ (1 - 2α)2. In this paper, we derive a new upper bound that holds for all balanced functions, and improves upon the best known previous bound for α > 1 over 3.", "title": "" }, { "docid": "b87f7587821f4a8718396a1dd7fa479e", "text": "In the future, robots will be important device widely in our daily lives to achieve complicated tasks. To achieve the tasks, there are some demands for the robots. In this paper, two strong demands of them are taken attention. First one is multiple-degrees of freedom (DOF), and the second one is miniaturization of the robots. Although rotary actuators is necessary to get multiple-DOF, miniaturization is difficult with rotary motors which are usually utilized for multiple-DOF robots. Here, tendon-driven rotary actuator is a candidate to solve the problems of the rotary actuators. The authors proposed a type of tendon-driven rotary actuator using thrust wires. However, big mechanical loss and frictional loss occurred because of the complicated structure of connection points. As the solution for the problems, this paper proposes a tendon-driven rotary actuator for haptics with thrust wires and polyethylene (PE) line. In the proposed rotary actuator, a PE line is used in order to connect the tip points of thrust wires and the end effector. The validity of the proposed rotary actuator is evaluated by experiments.", "title": "" }, { "docid": "d09d9d9f74079981f8f09e829e2af255", "text": "Determination of sensitive and specific markers of very early AD progression is intended to aid researchers and clinicians to develop new treatments and monitor their effectiveness, as well as to lessen the time and cost of clinical trials. Magnetic Resonance (MR)-related biomarkers have been recently identified by the use of machine learning methods for the in vivo differential diagnosis of AD. However, the vast majority of neuroimaging papers investigating this topic are focused on the difference between AD and patients with mild cognitive impairment (MCI), not considering the impact of MCI patients who will (MCIc) or not convert (MCInc) to AD. 
Morphological T1-weighted MRIs of 137 AD, 76 MCIc, 134 MCInc, and 162 healthy controls (CN) selected from the Alzheimer's disease neuroimaging initiative (ADNI) cohort, were used by an optimized machine learning algorithm. Voxels influencing the classification between these AD-related pre-clinical phases involved hippocampus, entorhinal cortex, basal ganglia, gyrus rectus, precuneus, and cerebellum, all critical regions known to be strongly involved in the pathophysiological mechanisms of AD. Classification accuracy was 76% AD vs. CN, 72% MCIc vs. CN, 66% MCIc vs. MCInc (nested 20-fold cross validation). Our data encourage the application of computer-based diagnosis in clinical practice of AD opening new prospective in the early management of AD patients.", "title": "" }, { "docid": "5d827a27d9fb1fe4041e21dde3b8ce44", "text": "Cloud storage systems are becoming increasingly popular. A promising technology that keeps their cost down is deduplication, which stores only a single copy of repeating data. Client-side deduplication attempts to identify deduplication opportunities already at the client and save the bandwidth of uploading copies of existing files to the server. In this work we identify attacks that exploit client-side deduplication, allowing an attacker to gain access to arbitrary-size files of other users based on a very small hash signatures of these files. More specifically, an attacker who knows the hash signature of a file can convince the storage service that it owns that file, hence the server lets the attacker download the entire file. (In parallel to our work, a subset of these attacks were recently introduced in the wild with respect to the Dropbox file synchronization service.) To overcome such attacks, we introduce the notion of proofs-of-ownership (PoWs), which lets a client efficiently prove to a server that that the client holds a file, rather than just some short information about it. We formalize the concept of proof-of-ownership, under rigorous security definitions, and rigorous efficiency requirements of Petabyte scale storage systems. We then present solutions based on Merkle trees and specific encodings, and analyze their security. We implemented one variant of the scheme. Our performance measurements indicate that the scheme incurs only a small overhead compared to naive client-side deduplication.", "title": "" }, { "docid": "11e9bdfbdcc7718878c4a87c894964eb", "text": "Detecting topics from Twitter streams has become an important task as it is used in various fields including natural disaster warning, users opinion assessment, and traffic prediction. In this article, we outline different types of topic detection techniques and evaluate their performance. We categorize the topic detection techniques into five categories which are clustering, frequent pattern mining, Exemplar-based, matrix factorization, and probabilistic models. For clustering techniques, we discuss and evaluate nine different techniques which are sequential k-means, spherical k-means, Kernel k-means, scalable Kernel k-means, incremental batch k-means, DBSCAN, spectral clustering, document pivot clustering, and Bngram. Moreover, for matrix factorization techniques, we analyze five different techniques which are sequential Latent Semantic Indexing (LSI), stochastic LSI, Alternating Least Squares (ALS), Rank-one Downdate (R1D), and Column Subset Selection (CSS). Additionally, we evaluate several other techniques in the frequent pattern mining, Exemplar-based, and probabilistic model categories. 
Results on three Twitter datasets show that Soft Frequent Pattern Mining (SFM) and Bngram achieve the best term precision, while CSS achieves the best term recall and topic recall in most of the cases. Moreover, Exemplar-based topic detection obtains a good balance between the term recall and term precision, while achieving a good topic recall and running time.", "title": "" }, { "docid": "61615273dad80e5a0a95ecbe3002fd72", "text": "Other than serving as building blocks for DNA and RNA, purine metabolites provide a cell with the necessary energy and cofactors to promote cell survival and proliferation. A renewed interest in how purine metabolism may fuel cancer progression has uncovered a new perspective into how a cell regulates purine need. Under cellular conditions of high purine demand, the de novo purine biosynthetic enzymes cluster near mitochondria and microtubules to form dynamic multienzyme complexes referred to as 'purinosomes'. In this review, we highlight the purinosome as a novel level of metabolic organization of enzymes in cells, its consequences for regulation of purine metabolism, and the extent that purine metabolism is being targeted for the treatment of cancers.", "title": "" }, { "docid": "ca7f9128cc946126f284413e8d3bea06", "text": "Urinary tract infections, genital tract infections and sexually transmitted infections are the most prevalent infectious diseases, and the establishment of locally optimized guidelines is critical to provide appropriate treatment. The Urological Association of Asia has planned to develop the Asian guidelines for all urological fields, and the present urinary tract infections, genital tract infections and sexually transmitted infections guideline was the second project of the Urological Association of Asia guideline development, which was carried out by the Asian Association of Urinary Tract Infection and Sexually Transmitted Infection. The members have meticulously reviewed relevant references, retrieved via the PubMed and MEDLINE databases, published between 2009 through 2015. The information identified through the literature review of other resources was supplemented by the author. Levels of evidence and grades of recommendation for each management were made according to the relevant strategy. If the judgment was made on the basis of insufficient or inadequate evidence, the grade of recommendation was determined on the basis of committee discussions and resultant consensus statements. Here, we present a short English version of the original guideline, and overview its key clinical issues.", "title": "" }, { "docid": "1e6583ec7a290488cd8e672ab59158b9", "text": "Evidence-based guidelines for the management of patients with Lyme disease, human granulocytic anaplasmosis (formerly known as human granulocytic ehrlichiosis), and babesiosis were prepared by an expert panel of the Infectious Diseases Society of America. These updated guidelines replace the previous treatment guidelines published in 2000 (Clin Infect Dis 2000; 31[Suppl 1]:1-14). The guidelines are intended for use by health care providers who care for patients who either have these infections or may be at risk for them. For each of these Ixodes tickborne infections, information is provided about prevention, epidemiology, clinical manifestations, diagnosis, and treatment. Tables list the doses and durations of antimicrobial therapy recommended for treatment and prevention of Lyme disease and provide a partial list of therapies to be avoided. 
A definition of post-Lyme disease syndrome is proposed.", "title": "" }, { "docid": "7c53bfd12b0599acd1f5d33d0185d29a", "text": "Advanced warning of potential new opportunities and threats related to biodiversity allows decision-makers to act strategically to maximize benefits or minimize costs. Strategic foresight explores possible futures, their consequences for decisions, and the actions that promote more desirable futures. Foresight tools, such as horizon scanning and scenario planning, are increasingly used by governments and business for long-term strategic planning and capacity building. These tools are now being applied in ecology, although generally not as part of a comprehensive foresight strategy. We highlight several ways foresight could play a more significant role in environmental decisions by: monitoring existing problems, highlighting emerging threats, identifying promising new opportunities, testing the resilience of policies, and defining a research agenda.", "title": "" }, { "docid": "d8be7894234bde528c59e22c208e4473", "text": "-As the workplace has become increasingly diverse, there has been a tension between the promise and the reality of diversity in team process and performance. The optimistic view holds that diversity will lead to an increase in the variety of perspectives and approaches brought to a problem and to opportunities for knowledge sharing, and hence lead to greater creativity and quality of team performance. However, the preponderance of the evidence favors a more pessimistic view: that diversity creates social divisions, which in turn create negative performance outcomes for the group. Why is the reality of diversity less than the promise? Answering this requires understanding a variety of factors, including how diversity is defined and categorized, and the moderating as well as mediating processes that affect the diversity-process-performance linkage. We start with a definition. The word diversity has been used to refer to so many types of differences among people that the most commonly used definition-\"any attribute that another person may use to detect individual differences\" (Williams & O'Reilly, 1998, p. 81)-while accurate, is also quite broad. As a result, various categorization schemes based on factors such as race or gender, or based on proportions such as the size of the minority, have been used to further refine the definition of diversity in teams. The choices researchers have made in using these categorization schemes, however, do lead to particular tradeoffs. Factor approaches, for example, allow an examination of multiple types of diversity and the interactions among them but ignore the sizes of factions and subgroups. Proportional approaches allow the consideration of minority-group size, and hence the study of issues such as tokenism, but also tend to focus on only one type of diversity and thereby overestimate its relevance relative to other types. The underlying effects of diversity, whichever way it is defined and categorized, have typically been understood through three primary theoretical perspectives: the similarity-attraction paradigm, self- and social categorization, and information processing. These approaches also have their biases. The predictions of similarity-attraction theory are straightforward: Similarity on attributes such as attitudes, values, and beliefs will facilitate interpersonal attraction and liking. Empirical research has supported that surface-level similarity tends to predict affiliation and attraction. 
The similarity-attraction paradigm was developed to understand dyadic relationships. Yet, individuals can express preferences for membership in particular groups even when they have had no prior social interaction with members of that group. This is primarily a cognitive process of categorization: Individuals are postulated to have a hierarchical structure of self-categorizations at the personal, group, and superordinate levels. Research has demonstrated that the specific categories on which we tend to focus in categorizing others-such as race, gender, values, or beliefs-are likely to be those that are the most distinctive or salient within the particular social context. The act of social categorization activates differential expectations for in-group and out-group members. This distinction creates the atmosphere for stereotyping, in which out-group members are judged more stereotypically than in-group members are. The self-categorization/social-identity and similarity-attraction approaches both tend to lead to the pessimistic view of diversity in teams. In these paradigms, individuals will be more attracted to similar others and will experience more cohesion and social integration in homogeneous groups. The information-processing approach, by contrast, offers a more optimistic view: that diversity creates an atmosphere for enhancing group performance. The information-processing approach argues that individuals in diverse groups have access to other individuals with different backgrounds, networks, information, and skills. This added information should improve the group outcome even though it might create coordination problems for the group. As we disentangle what researchers have learned from the last 50 years, we can conclude that surface-level social- category differences, such as those of race/ethnicity, gender, or age, tend to be more likely to have negative effects on the ability of groups to function effectively. By contrast, underlying differences, such as differences in functional background, education, or personality, are more often positively related to performance-for example by facilitating creativity or group problem solving-but only when the group process is carefully controlled. The majority of these effects have typically been explained in terms of potential mediators such as social integration, communication, and conflict. However, the actual evidence for the input-process-output linkage is not as strong as one might like. Clarifying the mixed effects of diversity in work groups will only be possible by carefully considering moderators such as context, by broadening our view to include new types of diversity such as emotions and networks, and by focusing more carefully on mediating mechanisms. As we delve into advice for organizational teams to enhance the assets of diversity and manage the liabilities, we focus on the benefits of \"exploring\" as opposed to \"exploiting\" types of tasks, of bridging diversity through values and goals, and of enhancing the power of the minority. Finally, we end with suggestions for how organizations can learn to create incentives for change within the firm.", "title": "" }, { "docid": "19a02cb59a50f247663acc77b768d7ec", "text": "Machine learning is a useful technology for decision support systems and assumes greater importance in research and practice. 
Whilst much of the work focuses technical implementations and the adaption of machine learning algorithms to application domains, the factors of machine learning design affecting the usefulness of decision support are still understudied. To enhance the understanding of machine learning and its use in decision support systems, we report the results of our content analysis of design-oriented research published between 1994 and 2013 in major Information Systems outlets. The findings suggest that the usefulness of machine learning for supporting decision-makers is dependent on the task, the phase of decision-making, and the applied technologies. We also report about the advantages and limitations of prior research, the applied evaluation methods and implications for future decision support research. Our findings suggest that future decision support research should shed more light on organizational and people-related evaluation criteria.", "title": "" }, { "docid": "80cf82caebfb48dac02d001b24163bdf", "text": "This paper presents a new current sensor based on fluxgate principle. The sensor consists of a U-shaped magnetic gathering shell. In the designed sensor, the exciting winding and the secondary winding are arranged orthogonally, so that the magnetic fields produced by the two windings are mutually orthogonal and decoupled. Introducing a magnetic gathering shell into the sensor is to concentrate the detected magnetic field and to reduce the interference of an external stray field. Based on the theoretical analysis and the simulation results, a prototype was designed. Test results show that the proposed sensor can measure currents up to 25 A, and has an accuracy of 0.6% and a remarkable resolution.", "title": "" }, { "docid": "7438a34023cfcff0a5751709d6ca2851", "text": "susceptible to non-communicable diseases; 80% of deaths attributable to them occur in these countries. Many people think that obesity is the root cause of these diseases. But 20% of obese people have normal metabolism and Last September, the United Nations declared that, for the first time in human history, chronic non-communicable diseases such as heart disease, cancer and diabetes pose a greater health burden worldwide than do infectious diseases, contributing to 35 million deaths annually. This is not just a problem of the developed world. Every country that has adopted the Western diet — one dominated by low-cost, highly processed food — has witnessed rising rates of obesity and related diseases. There are now 30% more people who are obese than who are undernourished. Economic development means that the populations of lowand middle-income countries are living longer, and therefore are more will have a normal lifespan. Conversely, up to 40% of normal-weight people develop the diseases that constitute the metabolic syndrome: diabetes, hypertension, lipid problems, cardio vascular disease and non-alcoholic fatty liver disease. Obesity is not the cause; rather, it is a marker for metabolic dysfunction, which is even more prevalent. The UN announcement targets tobacco, alcohol and diet as the central risk factors in non-communicable disease. Two of these three — tobacco and alcohol — are regulated by governments to protect public health, leaving one of the primary culprits behind this worldwide health crisis unchecked. 
Of course, regulating food is more SUMMARY ● Sugar consumption is linked to a rise in non-communicable disease ● Sugar’s effects on the body can be similar to those of alcohol ● Regulation could include tax, limiting sales during school hours and placing age limits on purchase", "title": "" }, { "docid": "e70086a4ba81b7457031e850450601cd", "text": "Some of the features in Discipulus that contribute to its extraordinary performance [3, 4, 5, 6, 9] are: • Discipulus implements a Genetic Programming algorithm. This algorithm determines the appropriate functional form and optimizes the parameters of the function. It is an ideal algorithm for complex, noisy, poorly understood domains. • Discipulus performs Genetic Programming thru direct manipulation of binary machine code. This makes Discipulus about sixty to two-hundred times faster than comparable automated learning approaches [10]. • Discipulus performs multi-run Genetic Programming, intelligently adapting its own parameters to the problem at hand. Each of these capabilities of Discipulus are discussed below.", "title": "" }, { "docid": "93177b2546e8efa1eccad4c81468f9fe", "text": "Online Transaction Processing (OLTP) databases include a suite of features - disk-resident B-trees and heap files, locking-based concurrency control, support for multi-threading - that were optimized for computer technology of the late 1970's. Advances in modern processors, memories, and networks mean that today's computers are vastly different from those of 30 years ago, such that many OLTP databases will now fit in main memory, and most OLTP transactions can be processed in milliseconds or less. Yet database architecture has changed little.\n Based on this observation, we look at some interesting variants of conventional database systems that one might build that exploit recent hardware trends, and speculate on their performance through a detailed instruction-level breakdown of the major components involved in a transaction processing database system (Shore) running a subset of TPC-C. Rather than simply profiling Shore, we progressively modified it so that after every feature removal or optimization, we had a (faster) working system that fully ran our workload. Overall, we identify overheads and optimizations that explain a total difference of about a factor of 20x in raw performance. We also show that there is no single \"high pole in the tent\" in modern (memory resident) database systems, but that substantial time is spent in logging, latching, locking, B-tree, and buffer management operations.", "title": "" }, { "docid": "9d7852606784ecb8501d5b26b1b98f7f", "text": "This work describes a visualization tool and sensor testbed that can be used for assessing the performance of both instruments and human observers in support of port and harbor security. Simulation and modeling of littoral environments must take into account the complex interplay of incident light distributions, spatially correlated boundary interfaces, bottom-type variation, and the three-dimensional structure of objects in and out of the water. A general methodology for a two-pass Monte Carlo solution called Photon Mapping has been adopted and developed in the context of littoral hydrologic optics. The resulting tool is an end-to-end technique for simulating spectral radiative transfer in natural waters. A modular design allows arbitrary distributions of optical properties, geometries, and incident radiance to be modeled effectively.
This tool has been integrated as part of the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model. DIRSIG has an established history in multi and hyperspectral scene simulation of terrain targets ranging from the visible to the thermal infrared (0.380 20.0 microns). This tool extends its capabilities to the domain of hydrologic optics and can be used to simulate and develop active/passive sensors that could be deployed on either aerial or underwater platforms. Applications of this model as a visualization tool for underwater sensors or divers are also demonstrated.", "title": "" }, { "docid": "2a669b7068dfff6b560032d2f99fc5ef", "text": "BACKGROUND\nRoutine resection of cavity shave margins (additional tissue circumferentially around the cavity left by partial mastectomy) may reduce the rates of positive margins (margins positive for tumor) and reexcision among patients undergoing partial mastectomy for breast cancer.\n\n\nMETHODS\nIn this randomized, controlled trial, we assigned, in a 1:1 ratio, 235 patients with breast cancer of stage 0 to III who were undergoing partial mastectomy, with or without resection of selective margins, to have further cavity shave margins resected (shave group) or not to have further cavity shave margins resected (no-shave group). Randomization occurred intraoperatively after surgeons had completed standard partial mastectomy. Positive margins were defined as tumor touching the edge of the specimen that was removed in the case of invasive cancer and tumor that was within 1 mm of the edge of the specimen removed in the case of ductal carcinoma in situ. The rate of positive margins was the primary outcome measure; secondary outcome measures included cosmesis and the volume of tissue resected.\n\n\nRESULTS\nThe median age of the patients was 61 years (range, 33 to 94). On final pathological testing, 54 patients (23%) had invasive cancer, 45 (19%) had ductal carcinoma in situ, and 125 (53%) had both; 11 patients had no further disease. The median size of the tumor in the greatest diameter was 1.1 cm (range, 0 to 6.5) in patients with invasive carcinoma and 1.0 cm (range, 0 to 9.3) in patients with ductal carcinoma in situ. Groups were well matched at baseline with respect to demographic and clinicopathological characteristics. The rate of positive margins after partial mastectomy (before randomization) was similar in the shave group and the no-shave group (36% and 34%, respectively; P=0.69). After randomization, patients in the shave group had a significantly lower rate of positive margins than did those in the no-shave group (19% vs. 34%, P=0.01), as well as a lower rate of second surgery for margin clearance (10% vs. 21%, P=0.02). There was no significant difference in complications between the two groups.\n\n\nCONCLUSIONS\nCavity shaving halved the rates of positive margins and reexcision among patients with partial mastectomy. (Funded by the Yale Cancer Center; ClinicalTrials.gov number, NCT01452399.).", "title": "" } ]
scidocsrr
68c03ef61a5fb85096c27e23afa4eb8f
Conversation with and through computers
[ { "docid": "c85d1b4193f016da93e555bb4227d7cd", "text": "ground in on orderly way. To do this, we argue, they try to establish for each utterance the mutual belief that the addressees hove understood what the speaker meant well enough for current purposes. This is accomplished by the collective actions of the current contributor and his or her partners, and these result in units of conversation called contributions. We present a model of contributions and show how it accounts for o variety of features of everyday conversations.", "title": "" } ]
[ { "docid": "f8d0929721ba18b2412ca516ac356004", "text": "Because of the fact that vehicle crash tests are complex and complicated experiments it is advisable to establish their mathematical models. This paper contains an overview of the kinematic and dynamic relationships of a vehicle in a collision. There is also presented basic mathematical model representing a collision together with its analysis. The main part of this paper is devoted to methods of establishing parameters of the vehicle crash model and to real crash data investigation i.e. – creation of a Kelvin model for a real experiment, its analysis and validation. After model’s parameters extraction a quick assessment of an occupant crash severity is done. Key-Words: Modeling, vehicle crash, Kelvin model, data processing.", "title": "" }, { "docid": "7c9cd59a4bb14f678c57ad438f1add12", "text": "This paper proposes a new ensemble method built upon a deep neural network architecture. We use a set of meteorological models for rain forecast as base predictors. Each meteorological model is provided to a channel of the network and, through a convolution operator, the prediction models are weighted and combined. As a result, the predicted value produced by the ensemble depends on both the spatial neighborhood and the temporal pattern. We conduct some computational experiments in order to compare our approach to other ensemble methods widely used for daily rainfall prediction. The results show that our architecture based on ConvLSTM networks is a strong candidate to solve the problem of combining predictions in a spatiotemporal context.", "title": "" }, { "docid": "4625d09122eb2e42a201503405f7abfa", "text": "OBJECTIVE\nTo summarize 16 years of National Collegiate Athletic Association (NCAA) injury surveillance data for 15 sports and to identify potential modifiable risk factors to target for injury prevention initiatives.\n\n\nBACKGROUND\nIn 1982, the NCAA began collecting standardized injury and exposure data for collegiate sports through its Injury Surveillance System (ISS). This special issue reviews 182 000 injuries and slightly more than 1 million exposure records captured over a 16-year time period (1988-1989 through 2003-2004). Game and practice injuries that required medical attention and resulted in at least 1 day of time loss were included. An exposure was defined as 1 athlete participating in 1 practice or game and is expressed as an athlete-exposure (A-E).\n\n\nMAIN RESULTS\nCombining data for all sports, injury rates were statistically significantly higher in games (13.8 injuries per 1000 A-Es) than in practices (4.0 injuries per 1000 A-Es), and preseason practice injury rates (6.6 injuries per 1000 A-Es) were significantly higher than both in-season (2.3 injuries per 1000 A-Es) and postseason (1.4 injuries per 1000 A-Es) practice rates. No significant change in game or practice injury rates was noted over the 16 years. More than 50% of all injuries were to the lower extremity. Ankle ligament sprains were the most common injury over all sports, accounting for 15% of all reported injuries. Rates of concussions and anterior cruciate ligament injuries increased significantly (average annual increases of 7.0% and 1.3%, respectively) over the sample period. These trends may reflect improvements in identification of these injuries, especially for concussion, over time. 
Football had the highest injury rates for both practices (9.6 injuries per 1000 A-Es) and games (35.9 injuries per 1000 A-Es), whereas men's baseball had the lowest rate in practice (1.9 injuries per 1000 A-Es) and women's softball had the lowest rate in games (4.3 injuries per 1000 A-Es).\n\n\nRECOMMENDATIONS\nIn general, participation in college athletics is safe, but these data indicate modifiable factors that, if addressed through injury prevention initiatives, may contribute to lower injury rates in collegiate sports.", "title": "" }, { "docid": "7f27e541df0632d5fe1eda2cb1ead491", "text": "In recent years, Massively Parallel Processors have increasingly been used to manage and query vast amounts of data. Dramatic performance improvements are achieved through distributed execution of queries across many nodes. Query optimization for such system is a challenging and important problem.\n In this paper we describe the Query Optimizer inside the SQL Server Parallel Data Warehouse product (PDW QO). We leverage existing QO technology in Microsoft SQL Server to implement a cost-based optimizer for distributed query execution. By properly abstracting metadata we can readily reuse existing logic for query simplification, space exploration and cardinality estimation. Unlike earlier approaches that simply parallelize the best serial plan, our optimizer considers a rich space of execution alternatives, and picks one based on a cost-model for the distributed execution environment. The result is a high-quality, effective query optimizer for distributed query processing in an MPP.", "title": "" }, { "docid": "0177729f2d7fc610bd8e55a93a93b03b", "text": "Preference-based recommendation systems have transformed how we consume media. By analyzing usage data, these methods uncover our latent preferences for items (such as articles or movies) and form recommendations based on the behavior of others with similar tastes. But traditional preference-based recommendations do not account for the social aspect of consumption, where a trusted friend might point us to an interesting item that does not match our typical preferences. In this work, we aim to bridge the gap between preference- and social-based recommendations. We develop social Poisson factorization (SPF), a probabilistic model that incorporates social network information into a traditional factorization method; SPF introduces the social aspect to algorithmic recommendation. We develop a scalable algorithm for analyzing data with SPF, and demonstrate that it outperforms competing methods on six real-world datasets; data sources include a social reader and Etsy.", "title": "" }, { "docid": "40ec8caea52ba75a6ad1e100fb08e89a", "text": "Disambiguating concepts and entities in a context sensitive way is a fundamental problem in natural language processing. The comprehensiveness of Wikipedia has made the online encyclopedia an increasingly popular target for disambiguation. Disambiguation to Wikipedia is similar to a traditional Word Sense Disambiguation task, but distinct in that the Wikipedia link structure provides additional information about which disambiguations are compatible. In this work we analyze approaches that utilize this information to arrive at coherent sets of disambiguations for a given document (which we call “global” approaches), and compare them to more traditional (local) approaches. 
We show that previous approaches for global disambiguation can be improved, but even then the local disambiguation provides a baseline which is very hard to beat.", "title": "" }, { "docid": "50ef30738ae020c3f0d2b5751acf132d", "text": "Dispatching automated guided vehicles (AGVs) is the common approach for AGVs scheduling in practice, the information about load arrivals in advance was not used to optimize the performance of the automated guided vehicles system (AGVsS). According to the characteristics of the AGVsS, the mathematical model of AGVs scheduling was established. A heuristic algorithm called Annealing Genetic Algorithm (AGA) was presented to deal with the AGVs scheduling problem, and applied the algorithm dynamically by using it repeatedly under a combined rolling optimization strategy. the performance of the proposed approach for AGVs scheduling was compared with the dispatching rules by simulation. Results showed that the approach performs significantly better than the dispatching rules and proved that it is really effective for AGVsS.", "title": "" }, { "docid": "3d67093ff0885734ca8be9be3b44429c", "text": "The autoencoder algorithm and its deep version as traditional dimensionality reduction methods have achieved great success via the powerful representability of neural networks. However, they just use each instance to reconstruct itself and ignore to explicitly model the data relation so as to discover the underlying effective manifold structure. In this paper, we propose a dimensionality reduction method by manifold learning, which iteratively explores data relation and use the relation to pursue the manifold structure. The method is realized by a so called \"generalized autoencoder\" (GAE), which extends the traditional autoencoder in two aspects: (1) each instance xi is used to reconstruct a set of instances {xj} rather than itself. (2) The reconstruction error of each instance (||xj -- x'i||2) is weighted by a relational function of xi and xj defined on the learned manifold. Hence, the GAE captures the structure of the data space through minimizing the weighted distances between reconstructed instances and the original ones. The generalized autoencoder provides a general neural network framework for dimensionality reduction. In addition, we propose a multilayer architecture of the generalized autoencoder called deep generalized autoencoder to handle highly complex datasets. Finally, to evaluate the proposed methods, we perform extensive experiments on three datasets. The experiments demonstrate that the proposed methods achieve promising performance.", "title": "" }, { "docid": "5c690df3977b078243b9cb61e5e712a6", "text": "Computing indirect illumination is a challenging and complex problem for real-time rendering in 3D applications. We present a global illumination approach that computes indirect lighting in real time using a simplified version of the outgoing radiance and the scene stored in voxels. This approach comprehends two-bounce indirect lighting for diffuse, specular and emissive materials. Our voxel structure is based on a directional hierarchical structure stored in 3D textures with mipmapping, the structure is updated in real time utilizing the GPU which enables us to approximate indirect lighting for dynamic scenes. Our algorithm employs a voxel-light pass which calculates voxel direct and global illumination for the simplified outgoing radiance. 
We perform voxel cone tracing within this voxel structure to approximate different lighting phenomena such as ambient occlusion, soft shadows and indirect lighting. We demonstrate with different tests that our developed approach is capable to compute global illumination of complex scenes on interactive times.", "title": "" }, { "docid": "e289e25a86e743a189fd5fec1d911f74", "text": "Congestion avoidance mechanisms allow a network to operate in the optimal region of low delay and high throughput, thereby, preventing the network from becoming congested. This is different from the traditional congestion control mechanisms that allow the network to recover from the congested state of high delay and low throughput. Both congestion avoidance and congestion control mechanisms are basically resource management problems. They can be formulated as system control problems in which the system senses its state and feeds this back to its users who adjust their controls. The key component of any congestion avoidance scheme is the algorithm (or control function) used by the users to increase or decrease their load (window or rate). We abstractly characterize a wide class of such increase/decrease algorithms and compare them using several different performance metrics. They key metrics are efficiency, fairness, convergence time, and size of oscillations. It is shown that a simple additive increase and multiplicative decrease algorithm satisfies the sufficient conditions for convergence to an efficient and fair state regardless of the starting state of the network. This is the algorithm finally chosen for implementation in the congestion avoidance scheme recommended for Digital Networking Architecture and OSI Transport Class 4 Networks.", "title": "" }, { "docid": "17dce24f26d7cc196e56a889255f92a8", "text": "As known, to finish this book, you may not need to get it at once in a day. Doing the activities along the day may make you feel so bored. If you try to force reading, you may prefer to do other entertaining activities. But, one of concepts we want you to have this book is that it will not make you feel bored. Feeling bored when reading will be only unless you don't like the book. computational principles of mobile robotics really offers what everybody wants.", "title": "" }, { "docid": "937644ac8b97de476653e4b8aaa924ac", "text": "In this paper, a generalized discontinuous pulsewidth modulation (GDPWM) method with superior high modulation operating range performance characteristics is developed. An algorithm which employs the conventional space-vector PWM method in the low modulation range, and the GDPWM method in the high modulation range, is established. As a result, the current waveform quality, switching losses, voltage linearity range, and the overmodulation region performance of a PWM voltage-source inverter (PWM-VSI) drive are on-line optimized, as opposed to conventional modulators with fixed characteristics. Due to its compactness, simplicity, and superior performance, the algorithm is suitable for most high-performance PWM-VSI drive applications. This paper provides detailed performance analysis of the method and compares it to the other methods. The experimental results verify the superiority of this algorithm to the conventional PWM methods.", "title": "" }, { "docid": "8e3f8fca93ca3106b83cf85d20c061ca", "text": "KeeLoq is a 528-round lightweight block cipher which has a 64-bit secret key and a 32-bit block length. 
The cube attack, proposed by Dinur and Shamir, is a new type of attacking method. In this paper, we investigate the security of KeeLoq against iterative side-channel cube attack which is an enhanced attack scheme. Based on the structure of typical block ciphers, we give the model of iterative side-channel cube attack. Using the traditional single-bit leakage model, we assume that the attacker can exactly possess the information of one bit leakage after round 23. The new attack model costs a data complexity of 2^11.00 chosen plaintexts to recover the 23-bit key of KeeLoq. Our attack will reduce the key searching space to 2^41 by considering an error-free bit from internal states.", "title": "" }, { "docid": "569fcd0efaba3c142f8282369af9fff1", "text": "Since fouling-release coating systems do not prevent settlement, various methods to quantify the tenacity of adhesion of fouling organisms on these systems have been offered. One such method is the turbulent channel flow apparatus. The question remains how the results from laboratory scale tests relate to the self-cleaning of a ship coated with a fouling-release surface. This paper relates the detachment strength of low form fouling determined in the laboratory using a turbulent channel flow to the conditions necessary for detachment of these organisms in a turbulent boundary layer at ship scale. A power-law formula, the ITTC-57 formula, and a computational fluid dynamics (CFD) model are used to predict the skin-friction at ship scale. The results from all three methods show good agreement and are illustrated using turbulent channel flow data for sporelings of the green macrofouling alga Enteromorpha growing on a fouling-release coating.", "title": "" }, { "docid": "b78a3aef42a5b8987ac441359b06d780", "text": "Fuel cell stacks and photovoltaic panels generate rather low dc voltages and these voltages need to be boosted before being converted to ac voltage. Therefore, high step-up ratio dc-dc converters are preferred in renewable energy systems. A new Z-source-based topology that can boost the input voltage to desired levels with low duty ratios is proposed in this paper. The topology utilizes a coupled inductor. The leakage inductance energy can efficiently be discharged. Since the device stresses are low in this topology, low-voltage MOSFETs with small RDS(on) values can be selected to reduce the conduction loss. These features improve the converter efficiency. Also, the converter has a galvanic isolation between source and load. The operating principles and steady-state analysis of continuous and discontinuous conduction modes are discussed in detail. Finally, experimental results are given for a prototype converter that converts 25 V dc to 400 V dc at various power levels with over 90% efficiency to verify the effectiveness of the theoretical analysis.", "title": "" }, { "docid": "4d042dfb53c93da464d3cecdc3ba90b2", "text": "Pectus excavatum, the most common congenital chest wall malformation, has a higher incidence among men. Since 1987, when Donald Nuss performed his technique for the first time, the minimally invasive approach has become the most widely used technique for treating pectus excavatum. Few reported studies have focused on the repair of female pectus excavatum. Women with pectus excavatum often present with breast asymmetry that may require breast augmentation, either before or after pectus excavatum repair. To the authors’ knowledge, no reports on the Nuss procedure after breast implant surgery have been published.
This report describes the case of a 26-year-old woman who underwent minimally invasive repair after breast implant surgery. The authors believe that for women with severe pectus excavatum, the Nuss procedure should be the first choice for surgical correction. Moreover, for breast implant patients, this technique is absolutely feasible without major complications.", "title": "" }, { "docid": "a00cc13a716439c75a5b785407b02812", "text": "A novel current feedback programming principle and circuit architecture are presented, compatible with LED displays utilizing the 2T1C pixel structure. The new pixel programming approach is compatible with all TFT backplane technologies and can compensate for non-uniformities in both threshold voltage and carrier mobility of the OLED pixel drive TFT, due to a feedback loop that modulates the gate of the driving transistor according to the OLED current. The circuit can be internal or external to the integrated display data driver. Based on simulations and data gathered through a fabricated prototype driver, a pixel drive current of 20 nA can be programmed within an addressing time ranging from 10 μs to 50 μs.", "title": "" }, { "docid": "99ae659cc30dea0c0add3752bd04506b", "text": "Silicon Physical Unclonable Functions (PUF) are novel circuits that exploit the random variations that exist in CMOS manufacturing process to generate chip-unique random bits. As their applications are security based, it is highly desired to have PUF responses that are very stable against the reliability issues that exist in Integrated Circuits (IC). One of the major sources of unreliability in the technology nodes 90nm and below is device aging. Aging is primarily due to phenomena like Bias Temperature Instability (BTI) and Hot Carrier Injection (HCI). In this work, we study the effect of device aging on the stability of Ring Oscillator PUFs for different PUF circuit-level choices and operating conditions. We observe that most of the PUF's aging instability happens early in its lifetime. Due to the typical differential nature of PUF structures, stability does not change significantly with age. Further, a high correlation has been observed between instability that is caused due to aging and instability that is caused due to temperature. In various RO-PUF setups and operating conditions, we observe that around 4% of the PUF bits are prone to instability due to aging.", "title": "" }, { "docid": "94a2c522f65627eb36e8dd0b56db5461", "text": "Humans have a unique ability to learn more than one language — a skill that is thought to be mediated by functional (rather than structural) plastic changes in the brain. Here we show that learning a second language increases the density of grey matter in the left inferior parietal cortex and that the degree of structural reorganization in this region is modulated by the proficiency attained and the age at acquisition. This relation between grey-matter density and performance may represent a general principle of brain organization.", "title": "" }, { "docid": "58cce77789fc7b5970f3b387ce89e8c4", "text": "We propose a series of recurrent and contextual neural network models for multiple choice visual question answering on the Visual7W dataset. Motivated by divergent trends in model complexities in the literature, we explore the balance between model expressiveness and simplicity by studying incrementally more complex architectures. 
We start with LSTM-encoding of input questions and answers; build on this with context generation by LSTM-encodings of neural image and question representations and attention over images; and evaluate the diversity and predictive power of our models and the ensemble thereof. All models are evaluated against a simple baseline inspired by the current state-of-the-art, consisting of involving simple concatenation of bag-of-words and CNN representations for the text and images, respectively. Generally, we observe marked variation in image-reasoning performance between our models not obvious from their overall performance, as well as evidence of dataset bias. Our standalone models achieve accuracies up to 64.6%, while the ensemble of all models achieves the best accuracy of 66.67%, within 0.5% of the current state-of-the-art for Visual7W.", "title": "" } ]
scidocsrr
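The congestion-avoidance passage above (docid e289e25a86e743a189fd5fec1d911f74) describes the additive-increase/multiplicative-decrease (AIMD) rule only in prose. A minimal Python sketch of how an AIMD-controlled window evolves is given below; the step size, decrease factor, capacity threshold, and trace length are illustrative assumptions, not values taken from that paper or from this dataset.

def aimd_trace(steps=40, capacity=20.0, add_step=1.0, decrease_factor=0.5):
    # Additive increase: grow the window by a fixed step while below the assumed capacity.
    # Multiplicative decrease: shrink the window by decrease_factor once capacity is exceeded.
    window = 1.0
    trace = []
    for _ in range(steps):
        trace.append(window)
        if window > capacity:
            window *= decrease_factor
        else:
            window += add_step
    return trace

if __name__ == "__main__":
    # Prints the characteristic sawtooth pattern produced by AIMD control.
    for step, window in enumerate(aimd_trace()):
        print(f"step {step:2d}: window = {window:.2f}")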
c7578c0fb55247696899b2d11f48e4ce
Microblogging Queries on Graph Databases: An Introspection
[ { "docid": "bd33ed4cde24e8ec16fb94cf543aad8e", "text": "Users' locations are important to many applications such as targeted advertisement and news recommendation. In this paper, we focus on the problem of profiling users' home locations in the context of social network (Twitter). The problem is nontrivial, because signals, which may help to identify a user's location, are scarce and noisy. We propose a unified discriminative influence model, named as UDI, to solve the problem. To overcome the challenge of scarce signals, UDI integrates signals observed from both social network (friends) and user-centric data (tweets) in a unified probabilistic framework. To overcome the challenge of noisy signals, UDI captures how likely a user connects to a signal with respect to 1) the distance between the user and the signal, and 2) the influence scope of the signal. Based on the model, we develop local and global location prediction methods. The experiments on a large scale data set show that our methods improve the state-of-the-art methods by 13%, and achieve the best performance.", "title": "" }, { "docid": "88862d86e43d491ec4368410a61c13fb", "text": "With the proliferation of large, irregular, and sparse relational datasets, new storage and analysis platforms have arisen to fill gaps in performance and capability left by conventional approaches built on traditional database technologies and query languages. Many of these platforms apply graph structures and analysis techniques to enable users to ingest, update, query, and compute on the topological structure of the network represented as sets of edges relating sets of vertices. To store and process Facebook-scale datasets, software and algorithms must be able to support data sources with billions of edges, update rates of millions of updates per second, and complex analysis kernels. These platforms must provide intuitive interfaces that enable graph experts and novice programmers to write implementations of common graph algorithms. In this paper, we conduct a qualitative study and a performance comparison of 12 open source graph databases using four fundamental graph algorithms on networks containing up to 256 million edges.", "title": "" }, { "docid": "164b61b3c8e29e19cd6c7be2abf046db", "text": "In recent years, more and more companies provide services that can not be anymore achieved efficiently using relational databases. As such, these companies are forced to use alternative database models such as XML databases, object-oriented databases, document-oriented databases and, more recently graph databases. Graph databases only exist for a few years. Although there have been some comparison attempts, they are mostly focused on certain aspects only. In this paper, we present a distributed graph database comparison framework and the results we obtained by comparing four important players in the graph databases market: Neo4j, Orient DB, Titan and DEX.", "title": "" } ]
[ { "docid": "86aa04b01d2db65abd5ddd5d62b91645", "text": "Asthma is a serious health problem throughout the world. During the past two decades, many scientific advances have improved our understanding of asthma and ability to manage and control it effectively. However, recommendations for asthma care need to be adapted to local conditions, resources and services. Since it was formed in 1993, the Global Initiative for Asthma, a network of individuals, organisations and public health officials, has played a leading role in disseminating information about the care of patients with asthma based on a process of continuous review of published scientific investigations. A comprehensive workshop report entitled \"A Global Strategy for Asthma Management and Prevention\", first published in 1995, has been widely adopted, translated and reproduced, and forms the basis for many national guidelines. The 2006 report contains important new themes. First, it asserts that \"it is reasonable to expect that in most patients with asthma, control of the disease can and should be achieved and maintained,\" and recommends a change in approach to asthma management, with asthma control, rather than asthma severity, being the focus of treatment decisions. The importance of the patient-care giver partnership and guided self-management, along with setting goals for treatment, are also emphasised.", "title": "" }, { "docid": "5545b3b7d24b9f5b9298f5779166ca01", "text": "In a large variety of situations one would like to have an expressive and accurate model of observed animal or human behavior. While general purpose mathematical models may capture successfully properties of observed behavior, it is desirable to root models in biological facts. Because of ample empirical evidence for reward-based learning in visuomotor tasks, we use a computational model based on the assumption that the observed agent is balancing the costs and benefits of its behavior to meet its goals. This leads to using the framework of reinforcement learning, which additionally provides well-established algorithms for learning of visuomotor task solutions. To quantify the agent’s goals as rewards implicit in the observed behavior, we propose to use inverse reinforcement learning, which quantifies the agent’s goals as rewards implicit in the observed behavior. Based on the assumption of a modular cognitive architecture, we introduce a modular inverse reinforcement learning algorithm that estimates the relative reward contributions of the component tasks in navigation, consisting of following a path while avoiding obstacles and approaching targets. It is shown how to recover the component reward weights for individual tasks and that variability in observed trajectories can be explained succinctly through behavioral goals. It is demonstrated through simulations that good estimates can be obtained already with modest amounts of observation data, which in turn allows the prediction of behavior in novel configurations.", "title": "" }, { "docid": "6e8d1b5c2183ce09aadb09e4ff215241", "text": "The widely used ChestX-ray14 dataset addresses an important medical image classification problem and has the following caveats: 1) many lung pathologies are visually similar, 2) a variant of diseases including lung cancer, tuberculosis, and pneumonia are present in a single scan, i.e. multiple labels and 3) The incidence of healthy images is much larger than diseased samples, creating imbalanced data. These properties are common in medical domain. 
Existing literature uses stateof-the-art DensetNet/Resnet models being transfer learned where output neurons of the networks are trained for individual diseases to cater for multiple diseases labels in each image. However, most of them don’t consider relationship between multiple classes. In this work we have proposed a novel error function, Multi-label Softmax Loss (MSML), to specifically address the properties of multiple labels and imbalanced data. Moreover, we have designed deep network architecture based on fine-grained classification concept that incorporates MSML. We have evaluated our proposed method on various network backbones and showed consistent performance improvements of AUC-ROC scores on the ChestX-ray14 dataset. The proposed error function provides a new method to gain improved performance across wider medical datasets.", "title": "" }, { "docid": "0d996ba5c45d24cbc481ac4cd225f84d", "text": "In this paper, we design and evaluate a routine for the efficient generation of block-Jacobi preconditioners on graphics processing units (GPUs). Concretely, to exploit the architecture of the graphics accelerator, we develop a batched Gauss-Jordan elimination CUDA kernel for matrix inversion that embeds an implicit pivoting technique and handles the entire inversion process in the GPU registers. In addition, we integrate extraction and insertion CUDA kernels to rapidly set up the block-Jacobi preconditioner.\n Our experiments compare the performance of our implementation against a sequence of batched routines from the MAGMA library realizing the inversion via the LU factorization with partial pivoting. Furthermore, we evaluate the costs of different strategies for the block-Jacobi extraction and insertion steps, using a variety of sparse matrices from the SuiteSparse matrix collection. Finally, we assess the efficiency of the complete block-Jacobi preconditioner generation in the context of an iterative solver applied to a set of computational science problems, and quantify its benefits over a scalar Jacobi preconditioner.", "title": "" }, { "docid": "f0bbe4e6d61a808588153c6b5fc843aa", "text": "The development of Information and Communications Technologies (ICT) has affected various fields including the automotive industry. Therefore, vehicle network protocols such as Controller Area Network (CAN), Local Interconnect Network (LIN), and FlexRay have been introduced. Although CAN is the most widely used for vehicle network protocol, its security issue is not properly addressed. In this paper, we propose a security gateway, an improved version of existing CAN gateways, to protect CAN from spoofing and DoS attacks. We analyze sequence of messages based on the driver’s behavior to resist against spoofing attack and utilize a temporary ID and SipHash algorithm to resist against DoS attack. For the verification of our proposed method, OMNeT++ is used. The suggested method shows high detection rate and low increase of traffic. Also, analysis of frame drop rate during DoS attack shows that our suggested method can defend DoS attack.", "title": "" }, { "docid": "b13c9597f8de229fb7fec3e23c0694d1", "text": "Using capture-recapture analysis we estimate the effective size of the active Amazon Mechanical Turk (MTurk) population that a typical laboratory can access to be about 7,300 workers. We also estimate that the time taken for half of the workers to leave the MTurk pool and be replaced is about 7 months. 
Each laboratory has its own population pool which overlaps, often extensively, with the hundreds of other laboratories using MTurk. Our estimate is based on a sample of 114,460 completed sessions from 33,408 unique participants and 689 sessions across seven laboratories in the US, Europe, and Australia from January 2012 to March 2015.", "title": "" }, { "docid": "c890c635dd0f2dcb6827f59707b5dcd4", "text": "We present two families of reflective surfaces that are capable of providing a wide field of view, and yet still approximate a perspective projection to a high degree. These surfaces are derived by considering a plane perpendicular to the axis of a surface of revolution and finding the equations governing the distortion of the image of the plane in this surface. We then view this relation as a differential equation and prescribe the distortion term to be linear. By choosing appropriate initial conditions for the differential equation and solving it numerically, we derive the surface shape and obtain a precise estimate as to what degree the resulting sensor can approximate a perspective projection. Thus these surfaces act as computational sensors, allowing for a wide-angle perspective view of a scene without processing the image in software. The applications of such a sensor should be numerous, including surveillance, robotics and traditional photography. Recently, many researchers in the robotics and vision community have begun to consider visual sensors that are able to obtain wide fields of view. Such devices are the natural solution to various difficulties encountered with conventional imaging systems. The two most common means of obtaining wide fields of view are fish-eye lenses and reflective surfaces, also known as catoptrics. When catoptrics are combined with conventional lens systems, known as dioptrics, the resulting sensors are known as catadioptrics. The possible uses of these systems include applications such as robot control and surveillance. In this paper we will consider only catadioptric based sensors. Often such systems consist of a camera pointing at a convex mirror, as in figure (1). How to interpret and make use of the visual information obtained by such systems, e.g. how they should be used to control robots, is not at all obvious. There are infinitely many different shapes that a mirror can have, and at least two different camera models (perspective and orthographic projection) with which to combine each mirror. Convex mirror", "title": "" }, { "docid": "2876086e4431e8607d5146f14f0c29dc", "text": "Vascular ultrasonography has an important role in the diagnosis and management of venous disease. The venous system, however, is more complex and variable compared to the arterial system due to its frequent anatomical variations. This often becomes quite challenging for sonographers. This paper discusses the anatomy of the long saphenous vein and its anatomical variations accompanied by sonograms and illustrations.", "title": "" }, { "docid": "08731e24a7ea5e8829b03d79ef801384", "text": "A new power-rail ESD clamp circuit designed with PMOS as main ESD clamp device has been proposed and verified in a 65nm 1.2V CMOS process. The new proposed design with adjustable holding voltage controlled by the ESD detection circuit has better immunity against mis-trigger or transient-induced latch-on event.
The layout area and the standby leakage current of this new proposed design are much superior to that of traditional RC-based power-rail ESD clamp circuit with NMOS as main ESD clamp device.", "title": "" }, { "docid": "9b1d851a41e7c253a61fec9cb65ebbfc", "text": "One of Android's main defense mechanisms against malicious apps is a risk communication mechanism which, before a user installs an app, warns the user about the permissions the app requires, trusting that the user will make the right decision. This approach has been shown to be ineffective as it presents the risk information of each app in a “stand-alone” fashion and in a way that requires too much technical knowledge and time to distill useful information. We discuss the desired properties of risk signals and relative risk scores for Android apps in order to generate another metric that users can utilize when choosing apps. We present a wide range of techniques to generate both risk signals and risk scores that are based on heuristics as well as principled machine learning techniques. Experimental results conducted using real-world data sets show that these methods can effectively identify malware as very risky, are simple to understand, and easy to use.", "title": "" }, { "docid": "ae19bd4334434cfb8c5ac015dc8d3bd4", "text": "Soldiers and front-line personnel operating in tactical environments increasingly make use of handheld devices to help with tasks such as face recognition, language translation, decision-making, and mission planning. These resource constrained edge environments are characterized by dynamic context, limited computing resources, high levels of stress, and intermittent network connectivity. Cyber-foraging is the leverage of external resource-rich surrogates to augment the capabilities of resource-limited devices. In cloudlet-based cyber-foraging, resource-intensive computation and data is offloaded to cloudlets. Forward-deployed, discoverable, virtual-machine-based tactical cloudlets can be hosted on vehicles or other platforms to provide infrastructure to offload computation, provide forward data staging for a mission, perform data filtering to remove unnecessary data from streams intended for dismounted users, and serve as collection points for data heading for enterprise repositories. This paper describes tactical cloudlets and presents experimentation results for five different cloudlet provisioning mechanisms. The goal is to demonstrate that cyber-foraging in tactical environments is possible by moving cloud computing concepts and technologies closer to the edge so that tactical cloudlets, even if disconnected from the enterprise, can provide capabilities that can lead to enhanced situational awareness and decision making at the edge.", "title": "" }, { "docid": "7bf64a2dbfa14b52d0ee46d0c61bf8d2", "text": "Mobility prediction allows estimating the stability of paths in a mobile wireless Ad Hoc networks. Identifying stable paths helps to improve routing by reducing the overhead and the number of connection interruptions. In this paper, we introduce a neural network based method for mobility prediction in Ad Hoc networks. This method consists of a multi-layer and recurrent neural network using back propagation through time algorithm for training.", "title": "" }, { "docid": "74aaf19d143d86b52c09e726a70a2ac0", "text": "This paper presents simulation and experimental investigation results of steerable integrated lens antennas (ILAs) operating in the 60 GHz frequency band. 
The feed array of the ILAs is comprised by four switched aperture coupled microstrip antenna (ACMA) elements that allows steering between four different antenna main beam directions in one plane. The dielectric lenses of the designed ILAs are extended hemispherical quartz (ε = 3.8) lenses with the radiuses of 7.5 and 12.5 mm. The extension lengths of the lenses are selected through the electromagnetic optimization in order to achieve the maximum ILAs directivities and also the minimum directivity degradations of the outer antenna elements in the feed array (± 3 mm displacement) relatively to the inner ones (± 1 mm displacement). Simulated maximum directivities of the boresight beam of the designed ILAs are 19.8 dBi and 23.8 dBi that are sufficient for the steerable antennas for the millimeter-wave WLAN/WPAN communication systems. The feed ACMA array together with the waveguide to microstrip transition dedicated for experimental investigations is fabricated on high frequency and low cost Rogers 4003C substrate. Single Pole Double Through (SPDT) switches from Hittite are used in order to steer the ILA prototypes main beam directions. The experimental results of the fabricated electronically steerable quartz ILA prototypes prove the simulation results and show ±35° and ±22° angle sector coverage for the lenses with the 7.5 and 12.5 mm radiuses respectively.", "title": "" }, { "docid": "f5182ad077b1fdaa450d16544d63f01b", "text": "This article paves the knowledge about the next generation Bluetooth Standard-BT 5 that will bring some mesmerizing upgrades including increased range, speed, and broadcast messaging capacity. Further, three relevant queries such as what is better about BT 5, why does that matter, and how will it affect IoT have been explained to gather related information so that developers, practitioners, and naive people could formulate BT 5 into IoT based applications while assimilating the need of short range communication in true sense.", "title": "" }, { "docid": "18c8fcba57c295568942fa40b605c27e", "text": "The Internet of Things (IoT), an emerging global network of uniquely identifiable embedded computing devices within the existing Internet infrastructure, is transforming how we live and work by increasing the connectedness of people and things on a scale that was once unimaginable. In addition to increased communication efficiency between connected objects, the IoT also brings new security and privacy challenges. Comprehensive measures that enable IoT device authentication and secure access control need to be established. Existing hardware, software, and network protection methods, however, are designed against fraction of real security issues and lack the capability to trace the provenance and history information of IoT devices. To mitigate this shortcoming, we propose an RFID-enabled solution that aims at protecting endpoint devices in IoT supply chain. We take advantage of the connection between RFID tag and control chip in an IoT device to enable data transfer from tag memory to centralized database for authentication once deployed. 
Finally, we evaluate the security of our proposed scheme against various attacks.", "title": "" }, { "docid": "557da3544fd738ecfc3edf812b92720b", "text": "OBJECTIVES\nTo describe the sonographic appearance of the structures of the posterior cranial fossa in fetuses at 11 + 3 to 13 + 6 weeks of pregnancy and to determine whether abnormal findings of the brain and spine can be detected by sonography at this time.\n\n\nMETHODS\nThis was a prospective study including 692 fetuses whose mothers attended Innsbruck Medical University Hospital for first-trimester sonography. In 3% (n = 21) of cases, measurement was prevented by fetal position. Of the remaining 671 cases, in 604 there was either a normal anomaly scan at 20 weeks or delivery of a healthy child and in these cases the transcerebellar diameter (TCD) and the anteroposterior diameter of the cisterna magna (CM), measured at 11 + 3 to 13 + 6 weeks, were analyzed. In 502 fetuses, the anteroposterior diameter of the fourth ventricle (4V) was also measured. In 25 fetuses, intra- and interobserver repeatability was calculated.\n\n\nRESULTS\nWe observed a linear correlation between crown-rump length (CRL) and CM (CM = 0.0536 × CRL - 1.4701; R2 = 0.688), TCD (TCD = 0.1482 × CRL - 1.2083; R2 = 0.701) and 4V (4V = 0.0181 × CRL + 0.9186; R2 = 0.118). In three patients with posterior fossa cysts, measurements significantly exceeded the reference values. One fetus with spina bifida had an obliterated CM and the posterior border of the 4V could not be visualized.\n\n\nCONCLUSIONS\nTransabdominal sonographic assessment of the posterior fossa is feasible in the first trimester. Measurements of the 4V, the CM and the TCD performed at this time are reliable. The established reference values assist in detecting fetal anomalies. However, findings must be interpreted carefully, as some supposed malformations might be merely delayed development of brain structures.", "title": "" }, { "docid": "cb55daf6ada8e9caba80aa4f421fc395", "text": "This paper surveys the state of the art on multimodal gesture recognition and introduces the JMLR special topic on gesture recognition 2011-2015. We began right at the start of the KinectT Mrevolution when inexpensive infrared cameras providing image depth recordings became available. We published papers using this technology and other more conventional methods, including regular video cameras, to record data, thus providing a good overview of uses of machine learning and computer vision using multimodal data in this area of application. Notably, we organized a series of challenges and made available several datasets we recorded for that purpose, including tens of thousands of videos, which are available to conduct further research. We also overview recent state of the art works on gesture recognition based on a proposed taxonomy for gesture recognition, discussing challenges and future lines of research.", "title": "" }, { "docid": "ec7590c04dc31b1c6065ef4e15148dfc", "text": "No thesis - no graduation. Academic writing poses manifold challenges to students, instructors and institutions alike. High labor costs, increasing student numbers, and the Bologna Process (which has reduced the period after which undergraduates in Europe submit their first thesis and thus the time available to focus on writing skills) all pose a threat to students’ academic writing abilities. 
This situation gave rise to the practical goal of this study: to determine if, and to what extent, academic writing and its instruction can be scaled (i.e., designed more efficiently) using a technological solution, in this case Thesis Writer (TW), a domain-specific, online learning environment for the scaffolding of student academic writing, combined with an online editor optimized for producing academic text. Compared to existing automated essay scoring and writing evaluation tools, TW is not focusing on feedback but on instruction, planning, and genre mastery. While most US-based tools, particularly those also used in secondary education, are targeting the essay genre, TW is tailored to the needs of theses and research article writing (IMRD scheme). This mixed-methods paper reports data of a test run with a first-year course of 102 business administration students. A technology adoption model served as a frame of reference for the research design. From a student’s perspective, problems posed by the task of writing a research proposal as well as the use, usability, and usefulness of TW were studied through an online survey and focus groups (explanatory sequential design). Results seen were positive to highly positive – TW is being used, and has been deemed supportive by students. In particular, it supports the scaling of writing instruction in group assignment settings.", "title": "" }, { "docid": "fecfd19eaf90b735cf00e727fca768b8", "text": "Real-time detection of irregularities in visual data is invaluable in many prospective applications including surveillance, patient monitoring systems, etc. With the surge of deep learning methods in recent years, researchers have tried a wide spectrum of methods for different applications. However, for the case of irregularity or anomaly detection in videos, training an end-to-end model is still an open challenge, since often irregularity is not well-defined and there are not enough irregular samples to use during training. In this paper, inspired by the success of generative adversarial networks (GANs) for training deep models in unsupervised or self-supervised settings, we propose an end-to-end deep network for detection and fine localization of irregularities in videos (and images). Our proposed architecture is composed of two networks, which are trained in competition with each other while collaborating to find the irregularity. One network works as a pixel-level irregularity Inpainter, and the other works as a patch-level Detector. After an adversarial self-supervised training, in which I tries to fool D into accepting its inpainted output as regular (normal), the two networks collaborate to detect and fine-segment the irregularity in any given testing video. Our results on three different datasets show that our method can outperform the state-of-the-art and fine-segment the irregularity.", "title": "" }, { "docid": "258d98751d5b3cf4f33bf9473a678cf4", "text": "A Blockchain is a public immutable distributed ledger and stores various kinds of transactions. Because there is no central authority that regulates the system and users don't trust each other, a blockchain system needs an algorithm for users to reach consensus on block creation. In this report, we will explore 3 consensus algorithms: Proof-of-Work, Proof-of-Stake and Proof-of-Activity.", "title": "" } ]
scidocsrr
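The capture-recapture passage above (docid b13c9597f8de229fb7fec3e23c0694d1) estimates the effective MTurk worker population from overlapping samples of sessions. As a hedged illustration only, the Python sketch below computes the basic two-sample Lincoln-Petersen estimate; the cited study may use a more refined capture-recapture model, and the sample counts used here are made up for demonstration.

def lincoln_petersen(n1: int, n2: int, overlap: int) -> float:
    # n1: distinct workers seen in the first sample of sessions
    # n2: distinct workers seen in the second sample
    # overlap: workers appearing in both samples
    # Classic estimate of total population size: n1 * n2 / overlap.
    if overlap == 0:
        raise ValueError("No overlap between samples; the estimate is undefined.")
    return n1 * n2 / overlap

if __name__ == "__main__":
    # Hypothetical counts, not figures from the cited study.
    estimate = lincoln_petersen(n1=1200, n2=1500, overlap=250)
    print(f"Estimated effective worker population: {estimate:.0f}")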
e53153e8873ae09042a4787a01906792
Computation Offloading for Service Workflow in Mobile Cloud Computing
[ { "docid": "3d4c02afa38b7693ddfe893a0ffa012c", "text": "Offloading computation from smartphones to remote cloud resources has recently been rediscovered as a technique to enhance the performance of smartphone applications, while reducing the energy usage. In this paper we present the first practical implementation of this idea for Android: the Cuckoo framework, which simplifies the development of smartphone applications that benefit from computation offloading and provides a dynamic runtime system, that can, at runtime, decide whether a part of an application will be executed locally or remotely. We evaluate the framework using two real life applications.", "title": "" } ]
[ { "docid": "7cdbc62cc96f7300e35d71f64634882c", "text": "The dueling bandits problem is an online learning framework for learning from pairwise preference feedback, and is particularly wellsuited for modeling settings that elicit subjective or implicit human feedback. In this paper, we study the problem of multi-dueling bandits with dependent arms, which extends the original dueling bandits setting by simultaneously dueling multiple arms as well as modeling dependencies between arms. These extensions capture key characteristics found in many realworld applications, and allow for the opportunity to develop significantly more efficient algorithms than were possible in the original setting. We propose the SELFSPARRING algorithm, which reduces the multi-dueling bandits problem to a conventional bandit setting that can be solved using a stochastic bandit algorithm such as Thompson Sampling, and can naturally model dependencies using a Gaussian process prior. We present a no-regret analysis for multi-dueling setting, and demonstrate the effectiveness of our algorithm empirically on a wide range of simulation settings.", "title": "" }, { "docid": "350fda22c7dea712813d8b288f9f25cb", "text": "The embedded sensor networks are a promising technology to improve our life with home and industrial automation, health monitoring, and sensing and actuation in agriculture. Fitness trackers, thermostats, door locks are just a few examples of Internet of Things that have already become part of our everyday life. Despite advances in sensors, microcontrollers, signal processing, networking and programming languages, developing an Internet of Things application is a laborious task. Many of these complex distributed systems share a 3-tier architecture consisting of embedded nodes, gateways that connect an embedded network to the wider Internet and data services in servers or the cloud. Yet the IoT applications are developed for each tier separately. Consequently, the developer needs to amalgamate these distinct applications together. This paper proposes a novel approach for programming applications across 3-tiers using a distributed extension of the Model-View-Controller architecture. We add new primitive: a space - that contains properties and implementation of a particular tier. Writing applications in this architecture affords numerous advantages: automatic model synchronization, data transport, and energy efficiency.", "title": "" }, { "docid": "8684054a3aed718333d39ea27a813791", "text": "Article history: Received Accepted Available online 02 Sept. 2013 30 Sept. 2013 07 Oct. 2013 Colour is an inseparable as well as an important aspect of an interior design. The maximum influence in interior comes with the design of colour. So it is very important to study the colour and its effect in interior environment, it may be physiological as well as psychological. For this, articles were reviewed and analyzed from the existing literature, related to use of colour in both residence as well as commercial interior. The three major areas reviewed were (1) Psychological and physiological effect of colour (2) Meaning of Warm, Cool and Neutral Colour (3) Effect of Colour in forms. The results show that colour is important in designing functional spaces. The results of this analysis may benefit to architects, interior designer, and homeowner to use colour effectively in interior environment. © 2013 International Journal of Advanced Research in Science and Technology (IJARST). 
All rights reserved.", "title": "" }, { "docid": "652e544ec32f5fde48d2435de81f5351", "text": "As many as 50% of spontaneous preterm births are infection-associated. Intrauterine infection leads to a maternal and fetal inflammatory cascade, which produces uterine contractions and may also result in long-term adverse outcomes, such as cerebral palsy. This article addresses the prevalence, microbiology, and management of intrauterine infection in the setting of preterm labor with intact membranes. It also outlines antepartum treatment of infections for the purpose of preventing preterm birth.", "title": "" }, { "docid": "97ca52a74f6984cda706b54830c58fd8", "text": "In this paper, we study a novel approach for named entity recognition (NER) and mention detection in natural language processing. Instead of treating NER as a sequence labelling problem, we propose a new local detection approach, which rely on the recent fixed-size ordinally forgetting encoding (FOFE) method to fully encode each sentence fragment and its left/right contexts into a fixed-size representation. Afterwards, a simple feedforward neural network is used to reject or predict entity label for each individual fragment. The proposed method has been evaluated in several popular NER and mention detection tasks, including the CoNLL 2003 NER task and TAC-KBP2015 and TAC-KBP2016 Trilingual Entity Discovery and Linking (EDL) tasks. Our methods have yielded pretty strong performance in all of these examined tasks. This local detection approach has shown many advantages over the traditional sequence labelling methods.", "title": "" }, { "docid": "65be3c4cf41f035e79fe0e968b8b5158", "text": "An efficient and analytical continuous-curvature path-smoothing algorithm, which fits an ordered sequence of waypoints generated by an obstacle-avoidance path planner, is proposed. The algorithm is based upon parametric cubic Bézier curves; thus, it is inherently closed-form in its expression, and the algorithm only requires the maximum curvature to be defined. The algorithm is, thus, computational efficient and easy to implement. Results show the effectiveness of the analytical algorithm in generating a continuous-curvature path, which satisfies an upper bound-curvature constraint, and that the path generated requires less control effort to track and minimizes control-input variability.", "title": "" }, { "docid": "534fd7868826681596586f00f47cd819", "text": "Locally weighted projection regression is a new algorithm that achieves nonlinear function approximation in high dimensional spaces with redundant and irrelevant input dimensions. At its core, it uses locally linear models, spanned by a small number of univariate regressions in selected directions in input space. This paper evaluates different methods of projection regression and derives a nonlinear function approximator based on them. This nonparametric local learning system i) learns rapidly with second order learning methods based on incremental training, ii) uses statistically sound stochastic cross validation to learn iii) adjusts its weighting kernels based on local information only, iv) has a computational complexity that is linear in the number of inputs, and v) can deal with a large number of possibly redundant inputs, as shown in evaluations with up to 50 dimensional data sets. 
To our knowledge, this is the first truly incremental spatially localized learning method to combine all these properties.", "title": "" }, { "docid": "1afd292e5f6632b8a4be79671429e9b3", "text": "In this work, we automatically detect and remove distracting shadows from photographs of documents and other text-based items. Documents typically have a constant colored background; based on this observation, we propose a technique to estimate background and text color in local image blocks. We match these local background color estimates to a global reference to generate a shadow map. Correcting the image with this shadow map produces the final unshadowed output. We demonstrate that our algorithm is robust and produces high-quality results, qualitatively and quantitatively, in both controlled and real-world settings containing large regions of significant shadow.", "title": "" }, { "docid": "d7a85bedea94e2e70f9ad52c6247f8d3", "text": "Little is known about the perception of artificial spatial hearing by hearing-impaired subjects. The purpose of this study was to investigate how listeners with hearing disorders perceived the effect of a spatialization feature designed for wireless microphone systems. Forty listeners took part in the experiments. They were arranged in four groups: normal-hearing, moderate, severe, and profound hearing loss. Their performance in terms of speech understanding and speaker localization was assessed with diotic and binaural stimuli. The results of the speech intelligibility experiment revealed that the subjects presenting a moderate or severe hearing impairment better understood speech with the spatialization feature. Thus, it was demonstrated that the conventional diotic binaural summation operated by current wireless systems can be transformed to reproduce the spatial cues required to localize the speaker, without any loss of intelligibility. The speaker localization experiment showed that a majority of the hearing-impaired listeners had similar performance with natural and artificial spatial hearing, contrary to the normal-hearing listeners. This suggests that certain subjects with hearing impairment preserve their localization abilities with approximated generic head-related transfer functions in the frontal horizontal plane.", "title": "" }, { "docid": "7c2d0b382685ac7e85c978ece31251d7", "text": "Given an edge-weighted graph G with a set $$Q$$ Q of k terminals, a mimicking network is a graph with the same set of terminals that exactly preserves the size of minimum cut between any partition of the terminals. A natural question in the area of graph compression is to provide as small mimicking networks as possible for input graph G being either an arbitrary graph or coming from a specific graph class. We show an exponential lower bound for cut mimicking networks in planar graphs: there are edge-weighted planar graphs with k terminals that require $$2^{k-2}$$ 2k-2 edges in any mimicking network. 
This nearly matches an upper bound of $$\\mathcal {O}(k 2^{2k})$$ O(k22k) of Krauthgamer and Rika (in: Khanna (ed) Proceedings of the twenty-fourth annual ACM-SIAM symposium on discrete algorithms, SODA 2013, New Orleans, 2013) and is in sharp contrast with the upper bounds of $$\\mathcal {O}(k^2)$$ O(k2) and $$\\mathcal {O}(k^4)$$ O(k4) under the assumption that all terminals lie on a single face (Goranci et al., in: Pruhs and Sohler (eds) 25th Annual European symposium on algorithms (ESA 2017), 2017, arXiv:1702.01136; Krauthgamer and Rika in Refined vertex sparsifiers of planar graphs, 2017, arXiv:1702.05951). As a side result we show a tight example for double-exponential upper bounds given by Hagerup et al. (J Comput Syst Sci 57(3):366–375, 1998), Khan and Raghavendra (Inf Process Lett 114(7):365–371, 2014), and Chambers and Eppstein (J Gr Algorithms Appl 17(3):201–220, 2013).", "title": "" }, { "docid": "12a34678fa46825e11944f317fdd4977", "text": "The purpose of a distributed file system (DFS) is to allow users of physically distributed computers to share data and storage resources by using a common file system. A typical configuration for a DFS is a collection of workstations and mainframes connected by a local area network (LAN). A DFS is implemented as part of the operating system of each of the connected computers. This paper establishes a viewpoint that emphasizes the dispersed structure and decentralization of both data and control in the design of such systems. It defines the concepts of transparency, fault tolerance, and scalability and discusses them in the context of DFSs. The paper claims that the principle of distributed operation is fundamental for a fault tolerant and scalable DFS design. It also presents alternatives for the semantics of sharing and methods for providing access to remote files. A survey of contemporary UNIX-based systems, namely, UNIX United, Locus, Sprite, Sun's Network File System, and ITC's Andrew, illustrates the concepts and demonstrates various implementations and design alternatives. Based on the assessment of these systems, the paper makes the point that a departure from the extending centralized file systems over a communication network is necessary to accomplish sound distributed file system design.", "title": "" }, { "docid": "3c6ced0f3778c2d3c123a1752c50d276", "text": "Business intelligence (BI) has been referred to as the process of making better decisions through the use of people, processes, data and related tools and methodologies. Data mining is the extraction of hidden stating information from large databases. It is a powerful new technology with large potential to help the company's to focus on the most necessary information in the data warehouse. This study gives us an idea of how data mining is applied in exhibiting business intelligence thereby helping the organizations to make better decisions. Keywords-Business intelligence, data mining, database, information technology, management information system ——————————  ——————————", "title": "" }, { "docid": "70eccf91a9d1c74044677b425fbe51f3", "text": "Wearable technology presents a wealth of new HCI issues. In particular, this paper addresses the impact of the physical interaction between the user's body and the device's physical form on the user's mental representation of self and cognitive abilities, a blend of HCI and ergonomics that is unique to wearable computing. 
We explore the human sensory mechanisms that facilitate perception of worn objects and the elements of sensation that influence the comfort of worn objects, and discuss the psychological elements that may cause worn objects to be forgotten or detected, wearable or not. We discuss the implications of un-wearability on attention and cognitive capability.", "title": "" }, { "docid": "268a7147cc4ae486bf4b9184787b9492", "text": "Autonomous vehicles will need to decide on a course of action when presented with multiple less-than-ideal outcomes.", "title": "" }, { "docid": "5a416fb88c3f5980989f7556fb19755c", "text": "Cloud computing helps to share data and provide many resources to users. Users pay only for those resources as much they used. Cloud computing stores the data and distributed resources in the open environment. The amount of data storage increases quickly in open environment. So, load balancing is a main challenge in cloud environment. Load balancing is helped to distribute the dynamic workload across multiple nodes to ensure that no single node is overloaded. It helps in proper utilization of resources .It also improve the performance of the system. Many existing algorithms provide load balancing and better resource utilization. There are various types load are possible in cloud computing like memory, CPU and network load. Load balancing is the process of finding overloaded nodes and then transferring the extra load to other nodes.", "title": "" }, { "docid": "b06deb6b5b8a1729d1b386bed06789c4", "text": "Identifying regions of interest in an image has long been of great importance in a wide range of tasks, including place recognition. In this letter, we propose a novel attention mechanism with flexible context, which can be incorporated into existing feedforward network architecture to learn image representations for long-term place recognition. In particular, in order to focus on regions that contribute positively to place recognition, we introduce a multiscale context-flexible network to estimate the importance of each spatial region in the feature map. Our model is trained end-to-end for place recognition and can detect regions of interest of arbitrary shape. Extensive experiments have been conducted to verify the effectiveness of our approach and the results demonstrate that our model can achieve consistently better performance than the state of the art on standard benchmark datasets. Finally, we visualize the learned attention maps to generate insights into what attention the network has learned.", "title": "" }, { "docid": "18c30c601e5f52d5117c04c85f95105b", "text": "Crohn's disease is a relapsing systemic inflammatory disease, mainly affecting the gastrointestinal tract with extraintestinal manifestations and associated immune disorders. Genome wide association studies identified susceptibility loci that--triggered by environmental factors--result in a disturbed innate (ie, disturbed intestinal barrier, Paneth cell dysfunction, endoplasmic reticulum stress, defective unfolded protein response and autophagy, impaired recognition of microbes by pattern recognition receptors, such as nucleotide binding domain and Toll like receptors on dendritic cells and macrophages) and adaptive (ie, imbalance of effector and regulatory T cells and cytokines, migration and retention of leukocytes) immune response towards a diminished diversity of commensal microbiota. 
We discuss the epidemiology, immunobiology, and natural history of Crohn's disease; describe new treatment goals and risk stratification of patients; and provide an evidence based rational approach to diagnosis (ie, work-up algorithm, new imaging methods [ie, enhanced endoscopy, ultrasound, MRI and CT] and biomarkers), management, evolving therapeutic targets (ie, integrins, chemokine receptors, cell-based and stem-cell-based therapies), prevention, and surveillance.", "title": "" }, { "docid": "1d9b50bf7fa39c11cca4e864bbec5cf3", "text": "FPGA-based embedded soft vector processors can exceed the performance and energy-efficiency of embedded GPUs and DSPs for lightweight deep learning applications. For low complexity deep neural networks targeting resource constrained platforms, we develop optimized Caffe-compatible deep learning library routines that target a range of embedded accelerator-based systems between 4 -- 8 W power budgets such as the Xilinx Zedboard (with MXP soft vector processor), NVIDIA Jetson TK1 (GPU), InForce 6410 (DSP), TI EVM5432 (DSP) as well as the Adapteva Parallella board (custom multi-core with NoC). For MNIST (28×28 images) and CIFAR10 (32×32 images), the deep layer structure is amenable to MXP-enhanced FPGA mappings to deliver 1.4 -- 5× higher energy efficiency than all other platforms. Not surprisingly, embedded GPU works better for complex networks with large image resolutions.", "title": "" }, { "docid": "ed0641e59cc1b081c894bc6f49182971", "text": "Researchers display confirmation bias when they persevere by revising procedures until obtaining a theory-predicted result. This strategy produces findings that are overgeneralized in avoidable ways, and this in turn hinders successful applications. (The 40-year history of an attitude-change phenomenon, the sleeper effect, stands as a case in point.) Confirmation bias is an expectable product of theory-centered research strategies, including both the puzzle-solving activity of T. S. Kuhn's \"normal science\" and, more surprisingly, K. R. Popper's recommended method of falsification seeking. The alternative strategies of condition seeking (identifying limiting conditions for a known finding) and design (discovering conditions that can produce a previously unobtained result) are result centered; they are directed at producing specified patterns of data rather than at the logically impossible goals of establishing either the truth or falsity of a theory. Result-centered methods are by no means atheoretical. Rather, they oblige resourcefulness in using existing theory and can stimulate novel development of theory.", "title": "" }, { "docid": "a085131dda55d95a52fa0d4653f77410", "text": "Numerous studies show that happy individuals are successful across multiple life domains, including marriage, friendship, income, work performance, and health. The authors suggest a conceptual model to account for these findings, arguing that the happiness-success link exists not only because success makes people happy, but also because positive affect engenders success. Three classes of evidence--cross-sectional, longitudinal, and experimental--are documented to test their model. Relevant studies are described and their effect sizes combined meta-analytically. The results reveal that happiness is associated with and precedes numerous successful outcomes, as well as behaviors paralleling success. 
Furthermore, the evidence suggests that positive affect--the hallmark of well-being--may be the cause of many of the desirable characteristics, resources, and successes correlated with happiness. Limitations, empirical issues, and important future research questions are discussed.", "title": "" } ]
scidocsrr
acf405c82d24dd2057cbd064e2898867
CoFiSet: Collaborative Filtering via Learning Pairwise Preferences over Item-sets
[ { "docid": "91f718a69532c4193d5e06bf1ea19fd3", "text": "Factorization approaches provide high accuracy in several important prediction problems, for example, recommender systems. However, applying factorization approaches to a new prediction problem is a nontrivial task and requires a lot of expert knowledge. Typically, a new model is developed, a learning algorithm is derived, and the approach has to be implemented.\n Factorization machines (FM) are a generic approach since they can mimic most factorization models just by feature engineering. This way, factorization machines combine the generality of feature engineering with the superiority of factorization models in estimating interactions between categorical variables of large domain. libFM is a software implementation for factorization machines that features stochastic gradient descent (SGD) and alternating least-squares (ALS) optimization, as well as Bayesian inference using Markov Chain Monto Carlo (MCMC). This article summarizes the recent research on factorization machines both in terms of modeling and learning, provides extensions for the ALS and MCMC algorithms, and describes the software tool libFM.", "title": "" }, { "docid": "2d7d20d578573dab8af8aff960010fea", "text": "Two flavors of the recommendation problem are the explicit and the implicit feedback settings. In the explicit feedback case, users rate items and the user-item preference relationship can be modelled on the basis of the ratings. In the harder but more common implicit feedback case, the system has to infer user preferences from indirect information: presence or absence of events, such as a user viewed an item. One approach for handling implicit feedback is to minimize a ranking objective function instead of the conventional prediction mean squared error. The naive minimization of a ranking objective function is typically expensive. This difficulty is usually overcome by a trade-off: sacrificing the accuracy to some extent for computational efficiency by sampling the objective function. In this paper, we present a computationally effective approach for the direct minimization of a ranking objective function, without sampling. We demonstrate by experiments on the Y!Music and Netflix data sets that the proposed method outperforms other implicit feedback recommenders in many cases in terms of the ErrorRate, ARP and Recall evaluation metrics.", "title": "" } ]
[ { "docid": "736a454a8aa08edf645312cecc7925c3", "text": "This paper describes an <i>analogy ontology</i>, a formal representation of some key ideas in analogical processing, that supports the integration of analogical processing with first-principles reasoners. The ontology is based on Gentner's <i>structure-mapping</i> theory, a psychological account of analogy and similarity. The semantics of the ontology are enforced via procedural attachment, using cognitive simulations of structure-mapping to provide analogical processing services. Queries that include analogical operations can be formulated in the same way as standard logical inference, and analogical processing systems in turn can call on the services of first-principles reasoners for creating cases and validating their conjectures. We illustrate the utility of the analogy ontology by demonstrating how it has been used in three systems: A crisis management analogical reasoner that answers questions about international incidents, a course of action analogical critiquer that provides feedback about military plans, and a comparison question-answering system for knowledge capture. These systems rely on large, general-purpose knowledge bases created by other research groups, thus demonstrating the generality and utility of these ideas.", "title": "" }, { "docid": "ca683d498e690198ca433050c3d91fd0", "text": "Cross-graph Relational Learning (CGRL) refers to the problem of predicting the strengths or labels of multi-relational tuples of heterogeneous object types, through the joint inference over multiple graphs which specify the internal connections among each type of objects. CGRL is an open challenge in machine learning due to the daunting number of all possible tuples to deal with when the numbers of nodes in multiple graphs are large, and because the labeled training instances are extremely sparse as typical. Existing methods such as tensor factorization or tensor-kernel machines do not work well because of the lack of convex formulation for the optimization of CGRL models, the poor scalability of the algorithms in handling combinatorial numbers of tuples, and/or the non-transductive nature of the learning methods which limits their ability to leverage unlabeled data in training. This paper proposes a novel framework which formulates CGRL as a convex optimization problem, enables transductive learning using both labeled and unlabeled tuples, and offers a scalable algorithm that guarantees the optimal solution and enjoys a linear time complexity with respect to the sizes of input graphs. In our experiments with a subset of DBLP publication records and an Enzyme multi-source dataset, the proposed method successfully scaled to the large cross-graph inference problem, and outperformed other representative approaches significantly.", "title": "" }, { "docid": "7d08501a0123d773f9fe755f1612e57e", "text": "Language-music comparative studies have highlighted the potential for shared resources or neural overlap in auditory short-term memory. However, there is a lack of behavioral methodologies for comparing verbal and musical serial recall. We developed a visual grid response that allowed both musicians and nonmusicians to perform serial recall of letter and tone sequences. The new method was used to compare the phonological similarity effect with the impact of an operationalized musical equivalent-pitch proximity. 
Over the course of three experiments, we found that short-term memory for tones had several similarities to verbal memory, including limited capacity and a significant effect of pitch proximity in nonmusicians. Despite being vulnerable to phonological similarity when recalling letters, however, musicians showed no effect of pitch proximity, a result that we suggest might reflect strategy differences. Overall, the findings support a limited degree of correspondence in the way that verbal and musical sounds are processed in auditory short-term memory.", "title": "" }, { "docid": "f24bfd745d9f28a96de1d3a897bf91e6", "text": "In this paper, autoregressive parameter estimation for Kalman filtering speech enhancement is studied. In conventional Kalman filtering speech enhancement, spectral subtraction is usually used for speech autoregressive (AR) parameter estimation. We propose log spectral amplitude (LSA) minimum mean-square error (MMSE) instead of spectral subtraction for the estimation of speech AR parameters. Based on an observation that full-band Kalman filtering speech enhancement often causes an unbalanced noise reduction between speech and non-speech segments, a spectral solution is proposed to overcome the unbalanced reduction of noise. This is done by shaping the spectral envelopes of the noise through likelihood ratio. Our simulation results show the effectiveness of the proposed method.", "title": "" }, { "docid": "ad526a01f76956af87be7287c5cdb964", "text": "Model-based reinforcement learning is a powerful paradigm for learning tasks in robotics. However, in-depth exploration is usually required and the actions have to be known in advance. Thus, we propose a novel algorithm that integrates the option of requesting teacher demonstrations to learn new domains with fewer action executions and no previous knowledge. Demonstrations allow new actions to be learned and they greatly reduce the amount of exploration required, but they are only requested when they are expected to yield a significant improvement because the teacher’s time is considered to be more valuable than the robot’s time. Moreover, selecting the appropriate action to demonstrate is not an easy task, and thus some guidance is provided to the teacher. The rule-based model is analyzed to determine the parts of the state that may be incomplete, and to provide the teacher with a set of possible problems for which a demonstration is needed. Rule analysis is also used to find better alternative models and to complete subgoals before requesting help, thereby minimizing the number of requested demonstrations. These improvements were demonstrated in a set of experiments, which included domains from the international planning competition and a robotic task. Adding teacher demonstrations and rule analysis reduced the amount of exploration required by up to 60% in some domains, and improved the success ratio by 35% in other domains.", "title": "" }, { "docid": "701fb71923bb8a2fc90df725074f576b", "text": "Quantum computing poses challenges to public key signatures as we know them today. LMS and XMSS are two hash based signature schemes that have been proposed in the IETF as quantum secure. Both schemes are based on well-studied hash trees, but their similarities and differences have not yet been discussed. In this work, we attempt to compare the two standards. We compare their security assumptions and quantify their signature and public key sizes. We also address the computation overhead they introduce. 
Our goal is to provide a clear understanding of the schemes’ similarities and differences for implementers and protocol designers to be able to make a decision as to which standard to choose.", "title": "" }, { "docid": "98c9adda989991cc2d2ddbe27988a2cd", "text": "Multi-user, touch-sensing input devices create opportunities for the use of cooperative gestures -- multi-user gestural interactions for single display groupware. Cooperative gestures are interactions where the system interprets the gestures of more than one user as contributing to a single, combined command. Cooperative gestures can be used to enhance users' sense of teamwork, increase awareness of important system events, facilitate reachability and access control on large, shared displays, or add a unique touch to an entertainment-oriented activity. This paper discusses motivating scenarios for the use of cooperative gesturing and describes some initial experiences with CollabDraw, a system for collaborative art and photo manipulation. We identify design issues relevant to cooperative gesturing interfaces, and present a preliminary design framework. We conclude by identifying directions for future research on cooperative gesturing interaction techniques.", "title": "" }, { "docid": "38be1070365c2c8c2214ff1aafccd8c3", "text": "We investigate the problem of transforming an input sequence into a high-dimensional output sequence in order to transcribe polyphonic audio music into symbolic notation. We introduce a probabilistic model based on a recurrent neural network that is able to learn realistic output distributions given the input and we devise an efficient algorithm to search for the global mode of that distribution. The resulting method produces musically plausible transcriptions even under high levels of noise and drastically outperforms previous state-of-the-art approaches on five datasets of synthesized sounds and real recordings, approximately halving the test error rate.", "title": "" }, { "docid": "cee1d7d199f6122871391112a8ba1c81", "text": "Plagiarism of digital documents seems a serious problem in today's era. Plagiarism refers to the use of someone's data, language and writing without proper acknowledgment of the original source. Plagiarism of another author's original work is one of the biggest problems in publishing, science, and education. Plagiarism can be of different types. This paper presents a different approach for measuring semantic similarity between words and their meanings. Existing systems are based on the traditional approach. For detecting plagiarism, traditional methods focus on text matching according to keywords but fail to detect intelligent plagiarism using semantic web. We have suggested new strategies for detecting the plagiarism in the user document using the semantic web. In paper we have proposed architecture and algorithms to better detection of copy case using semantic search, it can improve the performance of copy case detection system. It analyzes the user document. After the implementation of this technique, the accuracy of plagiarism detection system will surely increase.", "title": "" }, { "docid": "a1d061eb47e1404d2160c5e830229dc1", "text": "Recommendation techniques are very important in the fields of E-commerce and other web-based services. One of the main difficulties is dynamically providing high-quality recommendation on sparse data. 
In this paper, a novel dynamic personalized recommendation algorithm is proposed, in which information contained in both ratings and profile contents are utilized by exploring latent relations between ratings, a set of dynamic features are designed to describe user preferences in multiple phases, and finally, a recommendation is made by adaptively weighting the features. Experimental results on public data sets show that the proposed algorithm has satisfying performance.", "title": "" }, { "docid": "0bd7c453279c97333e7ac6c52f7127d8", "text": "Among various biometric modalities, signature verification remains one of the most widely used methods to authenticate the identity of an individual. Signature verification, the most important component of behavioral biometrics, has attracted significant research attention over the last three decades. Despite extensive research, the problem still remains open to research due to the variety of challenges it offers. The high intra-class variations in signatures resulting from different physical or mental states of the signer, the differences that appear with aging and the visual similarity in case of skilled forgeries etc. are only a few of the challenges to name. This paper is intended to provide a review of the recent advancements in offline signature verification with a discussion on different types of forgeries, the features that have been investigated for this problem and the classifiers employed. The pros and cons of notable recent contributions to this problem have also been presented along with a discussion of potential future research directions on this subject.", "title": "" }, { "docid": "7e873e837ccc1696eb78639e03d02cae", "text": "Steering is an integral component of adaptive locomotor behavior. Along with reorientation of gaze and body in the direction of intended travel, body center of mass must be controlled in the mediolateral plane. In this study we examine how these subtasks are sequenced when steering is planned early or initiated under time constraints. Whole body kinematics were monitored as individuals were required to change their direction of travel by varying amounts when visually cued either at the beginning of the walk or one stride before. The analyses focused on the transition stride from one travel direction to another. Timing of changes (with respect to first right foot contact) in trunk roll angle, head and trunk yaw angle, and right foot displacement in the mediolateral plane were analyzed. The magnitude of these measures along with right and left foot placement at the beginning and right foot placement at the end of the transition stride were also analyzed. The results show the CNS uses two mechanisms, foot placement and trunk roll motion (piking action about the hip joint in the frontal plane), to move the center of mass towards the new direction of travel in the transition stride, preferring to use the first option when planning can be done early. Control of body center of mass precedes all other changes and is followed by initiation of head reorientation. Only then is the rest of the body reorientation initiated.", "title": "" }, { "docid": "dd0cc729ce33906c31fa48fbc31b23c1", "text": "Firstborn children's reactions to mother-infant and father-infant interaction after a sibling's birth were examined in an investigation of 224 families. Triadic observations of parent-infant-sibling interaction were conducted at 1 month after the birth. 
Parents reported on children's problem behaviors at 1 and 4 months after the birth and completed the Attachment Q-sort before the birth. Latent profile analysis (LPA) identified 4 latent classes (behavioral profiles) for mother-infant and father-infant interactions: regulated-exploration, disruptive-dysregulated, approach-avoidant, and anxious-clingy. A fifth class, attention-seeking, was found with fathers. The regulated-exploration class was the normative pattern (60%), with few children in the disruptive class (2.7%). Approach-avoidant children had more behavior problems at 4 months than any other class, with the exception of the disruptive children, who were higher on aggression and attention problems. Before the birth, anxious-clingy children had less secure attachments to their fathers than approach avoidant children but more secure attachments to their mothers. Results underscore individual differences in firstborns' behavioral responses to parent-infant interaction and the importance of a person-centered approach for understanding children's jealousy.", "title": "" }, { "docid": "070ffb09caeb20625ca6cea201801b20", "text": "KDD-Cup 2011 challenged the community to identify user tastes in music by leveraging Yahoo! Music user ratings. The competition hosted two tracks, which were based on two datasets sampled from the raw data, including hundreds of millions of ratings. The underlying ratings were given to four types of musical items: tracks, albums, artists, and genres, forming a four level hierarchical taxonomy. The challenge started on March 15, 2011 and ended on June 30, 2011 attracting 2389 participants, 2100 of which were active by the end of the competition. The popularity of the challenge is related to the fact that learning a large scale recommender systems is a generic problem, highly relevant to the industry. In addition, the contest drew interest by introducing a number of scientific and technical challenges including dataset size, hierarchical structure of items, high resolution timestamps of ratings, and a non-conventional ranking-based task. This paper provides the organizers’ account of the contest, including: a detailed analysis of the datasets, discussion of the contest goals and actual conduct, and lessons learned throughout the contest.", "title": "" }, { "docid": "7c82645a48119c4fcfee83ae80caa80e", "text": "For the past few decades, automatic accident detection, especially using video analysis, has become a very important subject. It is important not only for traffic management but also, for Intelligent Transportation Systems (ITS) through its contribution to avoid the escalation of accidents especially on highways. In this paper a novel vision-based road accident detection algorithm on highways and expressways is proposed. This algorithm is based on an adaptive traffic motion flow modeling technique, using Farneback Optical Flow for motions detection and a statistic heuristic method for accident detection. The algorithm was applied on a set of collected videos of traffic and accidents on highways. The results prove the efficiency and practicability of the proposed algorithm using only 240 frames for traffic motion modeling. This method avoids to utilization of a large database while adequate and common accidents videos benchmarks do not exist.", "title": "" }, { "docid": "66f47f612c332ac9e3eee7a4f4024a17", "text": "The welfare of both women and men constitutes the human welfare. 
At the turn of the century amidst the glory of unprecedented growth in national income, India is experiencing the spread of rural distress. It is mainly due to the collapse of agricultural economy. Structural adjustments and competition from large-scale enterprises result in loss of rural livelihoods. Poor delivery of public services and safety nets, deepen the distress. The adverse impact is more on women than on men. This review examines the adverse impact of the events in terms of endowments, livelihood opportunities and nutritional outcomes on women in detail with the help of chosen indicators at two time-periods roughly representing mid nineties and early 2000. The gender equality index computed and the major indicators of welfare show that the gender gap is increasing in many aspects. All the aspects of livelihoods, such as literacy, unemployment and wages now have larger gender gaps than before. Survival indicators such as juvenile sex ratio, infant mortality, child labour have deteriorated for women, compared to men, though there has been a narrowing of gender gaps in life expectancy and literacy. The overall gender gap has widened due to larger gaps in some indicators, which are not compensated by the smaller narrowing in other indicators both in the rural and urban context.", "title": "" }, { "docid": "9e6c54018ca4d2907aa3a069252c6c53", "text": "Chronic pelvic pain is a frustrating symptom for patients with endometriosis and is frequently refractory to hormonal and surgical management. While these therapies target ectopic endometrial lesions, they do not directly address pain due to central sensitization of the nervous system and myofascial dysfunction, which can continue to generate pain from myofascial trigger points even after traditional treatments are optimized. This article provides a background for understanding how endometriosis facilitates remodeling of neural networks, contributing to sensitization and generation of myofascial trigger points. A framework for evaluating such sensitization and myofascial trigger points in a clinical setting is presented. Treatments that specifically address myofascial pain secondary to spontaneously painful myofascial trigger points and their putative mechanisms of action are also reviewed, including physical therapy, dry needling, anesthetic injections, and botulinum toxin injections.", "title": "" }, { "docid": "cb6d3b025e0047a78c9641d5f10ecf07", "text": "Surgical robotics is an evolving field with great advances having been made over the last decade. The origin of robotics was in the science-fiction literature and from there industrial applications, and more recently commercially available, surgical robotic devices have been realized. In this review, we examine the field of robotics from its roots in literature to its development for clinical surgical use. Surgical mills and telerobotic devices are discussed, as are potential future developments.", "title": "" }, { "docid": "0847b2b9270bc39a1273edfdfa022345", "text": "This paper presents the analysis, design and measurement of novel, low-profile, small-footprint folded monopoles employing planar metamaterial phase-shifting lines. These lines are composed of fully-printed spiral elements, that are inductively coupled and hence exhibit an effective high- mu property. An equivalent circuit for the proposed structure is presented, validating the operating principles of the antenna and the metamaterial line. 
The impact of the antenna profile and the ground plane size on the antenna performance is investigated using accurate full-wave simulations. A lambda/9 antenna prototype, designed to operate at 2.36 GHz, is fabricated and tested on both electrically large and small ground planes, achieving on average 80% radiation efficiency, 5% (110 MHz) and 2.5% (55 MHz) -10 dB measured bandwidths, respectively, and fully omnidirectional, vertically polarized, monopole-type radiation patterns.", "title": "" }, { "docid": "ef657884e6a7af08ca237cc97a2dfb19", "text": "Bruxism is defined as the repetitive jaw muscle activity characterized by the clenching or grinding of teeth. It can be categorized into awake and sleep bruxism (SB). Frequent SB occurs in about 13% of adults. The exact etiology of SB is still unknown and probably multifactorial in nature. Current literature suggests that SB is regulated centrally (pathophysiological and psychosocial factors) and not peripherally (morphological factors). Cited consequences of SB include temporomandibular disorders, headaches, tooth wear/fracture, implant, and other restoration failure. Chairside recognition of SB involves the use of subjective reports, clinical examinations, and trial oral splints. Definitive diagnosis of SB can only be achieved using electrophysiological tools. Pharmacological, psychological, and dental strategies had been employed to manage SB. There is at present, no effective treatment that \"cures\" or \"stops\" SB permanently. Management is usually directed toward tooth/restoration protection, reduction of bruxism activity, and pain relief.", "title": "" } ]
scidocsrr
8d1905824ed77dc9ef1f8bb5a471d1d4
Predicting flight delays with artificial neural networks: Case study of an airport
[ { "docid": "192b0b494719184be8d40ff9ad28aecc", "text": "The primary goal of the model proposed in this paper is to predict airline delays caused by inclement weather conditions using data mining and supervised machine learning algorithms. US domestic flight data and the weather data from 2005 to 2015 were extracted and used to train the model. To overcome the effects of imbalanced training data, sampling techniques are applied. Decision trees, random forest, the AdaBoost and the k-Nearest-Neighbors were implemented to build models which can predict delays of individual flights. Then, each of the algorithms' prediction accuracy and the receiver operating characteristic (ROC) curve were compared. In the prediction step, flight schedule and weather forecast were gathered and fed into the model. Using those data, the trained model performed a binary classification to predicted whether a scheduled flight will be delayed or on-time.", "title": "" } ]
[ { "docid": "42045752f292585bf20ad960f2b30469", "text": "eveloping software systems that meet stakeholders' needs and expectations is the ultimate goal of any software provider seeking a competitive edge. To achieve this, you must effectively and accurately manage your stakeholders' system requirements: the features, functions , and attributes they need in their software system. 1 Once you agree on these requirements, you can use them as a focal point for the development process and produce a software system that meets the expectations of both customers and users. However, in real-world software development, there are usually more requirements than you can implement given stakeholders' time and resource constraints. Thus, project managers face a dilemma: How do you select a subset of the customers' requirements and still produce a system that meets their needs? Deciding which requirements really matter is a difficult task and one increasingly demanded because of time and budget constraints. The authors developed a cost–value approach for prioritizing requirements and applied it to two commercial projects.", "title": "" }, { "docid": "657f020ce1977882fc80ba9b6c0db4b3", "text": "BACKGROUND\nThe delivery of effective, high-quality patient care is a complex activity. It demands health and social care professionals collaborate in an effective manner. Research continues to suggest that collaboration between these professionals can be problematic. Interprofessional education (IPE) offers a possible way to improve interprofessional collaboration and patient care.\n\n\nOBJECTIVES\nTo assess the effectiveness of IPE interventions compared to separate, profession-specific education interventions; and to assess the effectiveness of IPE interventions compared to no education intervention.\n\n\nSEARCH METHODS\nFor this update we searched the Cochrane Effective Practice and Organisation of Care Group specialised register, MEDLINE and CINAHL, for the years 2006 to 2011. We also handsearched the Journal of Interprofessional Care (2006 to 2011), reference lists of all included studies, the proceedings of leading IPE conferences, and websites of IPE organisations.\n\n\nSELECTION CRITERIA\nRandomised controlled trials (RCTs), controlled before and after (CBA) studies and interrupted time series (ITS) studies of IPE interventions that reported objectively measured or self reported (validated instrument) patient/client or healthcare process outcomes.\n\n\nDATA COLLECTION AND ANALYSIS\nAt least two review authors independently assessed the eligibility of potentially relevant studies. For included studies, at least two review authors extracted data and assessed study quality. A meta-analysis of study outcomes was not possible due to heterogeneity in study designs and outcome measures. Consequently, the results are presented in a narrative format.\n\n\nMAIN RESULTS\nThis update located nine new studies, which were added to the six studies from our last update in 2008. This review now includes 15 studies (eight RCTs, five CBA and two ITS studies). All of these studies measured the effectiveness of IPE interventions compared to no educational intervention. 
Seven studies indicated that IPE produced positive outcomes in the following areas: diabetes care, emergency department culture and patient satisfaction; collaborative team behaviour and reduction of clinical error rates for emergency department teams; collaborative team behaviour in operating rooms; management of care delivered in cases of domestic violence; and mental health practitioner competencies related to the delivery of patient care. In addition, four of the studies reported mixed outcomes (positive and neutral) and four studies reported that the IPE interventions had no impact on either professional practice or patient care.\n\n\nAUTHORS' CONCLUSIONS\nThis updated review reports on 15 studies that met the inclusion criteria (nine studies from this update and six studies from the 2008 update). Although these studies reported some positive outcomes, due to the small number of studies and the heterogeneity of interventions and outcome measures, it is not possible to draw generalisable inferences about the key elements of IPE and its effectiveness. To improve the quality of evidence relating to IPE and patient outcomes or healthcare process outcomes, the following three gaps will need to be filled: first, studies that assess the effectiveness of IPE interventions compared to separate, profession-specific interventions; second, RCT, CBA or ITS studies with qualitative strands examining processes relating to the IPE and practice changes; third, cost-benefit analyses.", "title": "" }, { "docid": "835b74c546ba60dfbb62e804daec8521", "text": "The goal of Open Information Extraction (OIE) is to extract surface relations and their arguments from natural-language text in an unsupervised, domain-independent manner. In this paper, we propose MinIE, an OIE system that aims to provide useful, compact extractions with high precision and recall. MinIE approaches these goals by (1) representing information about polarity, modality, attribution, and quantities with semantic annotations instead of in the actual extraction, and (2) identifying and removing parts that are considered overly specific. We conducted an experimental study with several real-world datasets and found that MinIE achieves competitive or higher precision and recall than most prior systems, while at the same time producing shorter, semantically enriched extractions.", "title": "" }, { "docid": "f6a1d7b206ca2796d4e91f3e8aceeed8", "text": "Objective To develop a classifier that tackles the problem of determining the risk of a patient of suffering from a cardiovascular disease within the next ten years. The system has to provide both a diagnosis and an interpretable model explaining the decision. In this way, doctors are able to analyse the usefulness of the information given by the system. Methods Linguistic fuzzy rule-based classification systems are used, since they provide a good classification rate and a highly interpretable model. More specifically, a new methodology to combine fuzzy rule-based classification systems with interval-valued fuzzy sets is proposed, which is composed of three steps: 1) the modelling of the linguistic labels of the classifier using interval-valued fuzzy sets; 2) the use of the Kα operator in the inference process and 3) the application of a genetic tuning to find the best ignorance degree that each interval-valued fuzzy set represents as well as the best value for the parameter α of the Kα operator in each rule.", "title": "" },
{ "docid": "719589d72fb9f7cbc64582a51eeea008", "text": "Objective To investigate the clinical symptoms, the physical and neurological findings, and the clinical course of neurological complications in eosinophilic granulomatosis with polyangiitis (EGPA). Methods A retrospective chart review of EGPA cases managed by two referral hospitals was performed, with a focus on the neurological findings. The study analyzed the symptoms at the onset of EGPA and investigated their chronological relationship. The patient delay (the delay between the onset of symptoms and the initial consultation), and the physician delay (the delay from consultation to the initiation of therapy) were determined and compared. The involved nerves were identified through a neurological examination. The cases with central nervous system (CNS) involvement were described. Results The average duration of symptoms prior to the initiating of therapy for sensory disturbances, motor deficits, rash, edema, and fever was 23, 5, 21, 18, and 24 days, respectively. Among the EGPA-specific symptoms, sensory disturbance was often the first symptom (63%), and was usually followed by the appearance of rash within four days (63%). The average physician delay (32.9±38.3 days) was significantly longer than the average patient delay (7.9±7.8 days; p=0.010). Reduced touch sensation in the superficial peroneal area, and weakness of dorsal flexion of the first toe secondary to deep peroneal nerve involvement, were highly sensitive for identifying the presence of peripheral nerve involvement in our series of patients with EGPA. Two cases, with CNS involvement, had multiple skin lesions over their hands and feet (Janeway lesions). Conclusion Japanese physicians are not always familiar with EGPA. It is important for us to consider this disease, when an asthmatic patient complains about the new onset of an abnormal sensation in the distal lower extremities, which is followed several days later by rash.", "title": "" }, { "docid": "4d6559e3216836c475b4b069aa924a88", "text": "Asteroids. From Observations to Models. D. 
Hestroffer, Paolo Tanga", "title": "" }, { "docid": "fc036d58e966b72fc9f0c9a4c156b5a7", "text": "OBJECTIVE\nWe sought to estimate the prevalence of pelvic organ prolapse in older women using the Pelvic Organ Prolapse Quantification examination and to identify factors associated with prolapse.\n\n\nMETHODS\nWomen with a uterus enrolled at one site of the Women's Health Initiative Hormone Replacement Therapy randomized clinical trial were eligible for this ancillary cross-sectional study. Subjects underwent a Pelvic Organ Prolapse Quantification examination during a maximal Valsalva maneuver and in addition completed a questionnaire. Logistic regression was used to identify independent risk factors for each of 2 definitions of prolapse: 1) Pelvic Organ Prolapse Quantification stage II or greater and 2) the leading edge of prolapse measured at the hymen or below.\n\n\nRESULTS\nIn 270 participants, age (mean +/- SD) was 68.3 +/- 5.6 years, body mass index was 30.4 +/- 6.2 kg/m(2), and vaginal parity (median [range]) was 3 (0-12). The proportions of Pelvic Organ Prolapse Quantification stages (95% confidence intervals [CIs]) were stage 0, 2.3% (95% CI 0.8-4.8%); stage I, 33.0% (95% CI 27.4-39.0%); stage II, 62.9% (95% CI 56.8-68.7%); and stage III, 1.9% (95% CI 0.6-4.3%). In 25.2% (95% CI 20.1-30.8%), the leading edge of prolapse was at the hymen or below. Hormone therapy was not associated with prolapse (P =.9). On multivariable analysis, less education (odds ratio [OR] 2.16, 95% CI 1.10-4.24) and higher vaginal parity (OR 1.61, 95% CI 1.03-2.50) were associated with prolapse when defined as stage II or greater. For prolapse defined by the leading edge at or below the hymen, older age had a decreased risk (OR 0.50, 95% CI 0.27-0.92) and less education, and larger babies had an increased risk (OR 2.38, 95% CI 1.31-4.32 and OR 1.97, 95% CI 1.07-3.64, respectively).\n\n\nCONCLUSION\nSome degree of prolapse is nearly ubiquitous in older women, which should be considered in the development of clinically relevant definitions of prolapse. Risk factors for prolapse differed depending on the definition of prolapse used.", "title": "" }, { "docid": "8f967b0a46e3dad8f39476b2efea48b7", "text": "Today’s rapid changing world highlights the influence and impact of technology in all aspects of learning life. Higher Education institutions in developed Western countries believe that these developments offer rich opportunities to embed technological innovations within the learning environment. This places developing countries, striving to be equally competitive in international markets, under tremendous pressure to similarly embed appropriate blends of technologies within their learning and curriculum approaches, and consequently enhance and innovate their learning experiences. Although many universities across the world have incorporated internet-based learning systems, the success of their implementation requires an extensive understanding of the end user acceptance process. Learning using technology has become a popular approach within higher education institutions due to the continuous growth of Internet innovations and technologies. Therefore, this paper focuses on the investigation of students, who attempt to successfully adopt e-learning systems at universities in Jordan. The conceptual research framework of e-learning adoption, which is used in the analysis, is based on the technology acceptance model. 
The study also provides an indicator of students’ acceptance of e-learning as well as identifying the important factors that would contribute to its successful use. The outcomes will enrich the understanding of students’ acceptance of e-learning and will assist in its continuing implementation at Jordanian Universities.", "title": "" }, { "docid": "23e08b1f6886d8171fe2f46c88ea6ee2", "text": "In recent years, there has been a significant interest in integrating probability theory with first order logic and relational representations [see De Raedt and Kersting, 2003, for an overview]. Muggleton [1996] and Cussens [1999] have upgraded stochastic grammars towards Stochastic Logic Programs, Sato and Kameya [2001] have introduced Probabilistic Distributional Semantics for logic programs, and Domingos and Richardson [2004] have upgraded Markov networks towards Markov Logic Networks. Another research stream including Poole’s Independent Choice Logic [1993], Ngo and Haddawy’s Probabilistic-Logic Programs [1997], Jäger’s Relational Bayesian Networks [1997], and Pfeffer’s Probabilistic Relational Models [2000] concentrates on first order logical and relational extensions of Bayesian networks.", "title": "" }, { "docid": "c4495ba4cf80e3550161a99e6cfa2694", "text": "The pervasive and prevalent use of touch-screen mobile phones in both work and daily life has generated more and more private and sensitive information on those devices. Accordingly, there is an ever-increasing need to improve the security of mobile phones. Recent advances in mobile user authentication technologies mainly focus on entry-point authentication. Although post-login continuous authentication has attracted increasing attention from researchers, none of the previous studies addressed mobile user authentication at both stages simultaneously. In addition, extant authentication systems are subject to the common trade-off between security and usability. To address the above limitations, we propose Harmonized Authentication based on Thumb Stroke dynamics (HATS) that supports both entry-point and post-login mobile user authentication. HATS integrates password, gesture, keystroke, and touch dynamics based authentication methods to address the vulnerabilities of individual methods to certain security attacks. Moreover, HATS supports one-handed thumb stroke based interaction with touch screen mobile phones to improve the usability of authentication systems. We empirically evaluated HATS through controlled lab experiments. The results provide strong evidence that HATS improved both security and usability of mobile user authentication compared with keystroke dynamics based user authentication.", "title": "" }, { "docid": "7d07ce7ef1f35968a537f498abb62e8b", "text": "Prenatal stress (PS) can cause early and long-term developmental effects resulting in part from altered maternal and/or fetal glucocorticoid exposure. The aim of the present study was to assess the impact of chronic restraint stress during late gestation on feto-placental unit physiology and function in embryonic (E) day 21 male rat fetuses. Chronic stress decreased body weight gain and food intake of the dams and increased their adrenal weight. In the placenta of PS rats, the expression of glucose transporter type 1 (GLUT1) was decreased, whereas GLUT3 and GLUT4 were slightly increased. Moreover, placental expression and activity of the glucocorticoid \"barrier\" enzyme 11beta-hydroxysteroid dehydrogenase type 2 was strongly reduced. 
At E21, PS fetuses exhibited decreased body, adrenal pancreas, and testis weights. These alterations were associated with reduced pancreatic beta-cell mass, plasma levels of glucose, growth hormone, and ACTH, whereas corticosterone, insulin, IGF-1, and CBG levels were unaffected. These data emphasize the impact of PS on both fetal growth and endocrine function as well as on placental physiology, suggesting that PS could program processes implied in adult biology and pathophysiology.", "title": "" }, { "docid": "91eda0e2f9ef0e2ed87c5135c0061dfd", "text": "We detail the design, implementation, and an initial evaluation of a virtual reality education and entertainment (edutainment) application called Virtual Environment Interactions (VEnvI). VEnvI is an application in which students learn computer science concepts through the process of choreographing movement for a virtual character using a fun and intuitive interface. In this exploratory study, 54 participants as part of a summer camp geared towards promoting participation of women in science and engineering programmatically crafted a dance performance for a virtual human. A subset of those students participated in an immersive embodied interaction metaphor in VEnvI. In creating this metaphor that provides first-person, embodied experiences using self-avatars, we seek to facilitate engagement, excitement and interest in computational thinking. We qualitatively and quantitatively evaluated the extent to which the activities of the summer camp, programming the dance moves, and the embodied interaction within VEnvI facilitated students' edutainment, presence, interest, excitement, and engagement in computing, and the potential to alter their perceptions of computing and computer scientists. Results indicate that students enjoyed the experience and successfully engaged the virtual character in the immersive embodied interaction, thus exhibiting high telepresence and social presence. Students also showed increased interest and excitement regarding the computing field at the end of their summer camp experience using VEnvI.", "title": "" }, { "docid": "b6286076ec2585f24dc33e775ab0fe70", "text": "Trajectory tracking control for quadrotors is important for applications ranging from surveying and inspection, to film making. However, designing and tuning classical controllers, such as proportional-integral-derivative (PID) controllers, to achieve high tracking precision can be time-consuming and difficult, due to hidden dynamics and other non-idealities. The Deep Neural Network (DNN), with its superior capability of approximating abstract, nonlinear functions, proposes a novel approach for enhancing trajectory tracking control. This paper presents a DNN-based algorithm as an add-on module that improves the tracking performance of a classical feedback controller. Given a desired trajectory, the DNNs provide a tailored reference input to the controller based on their gained experience. The input aims to achieve a unity map between the desired and the output trajectory. The motivation for this work is an interactive “fly-as-you-draw” application, in which a user draws a trajectory on a mobile device, and a quadrotor instantly flies that trajectory with the DNN-enhanced control system. Experimental results demonstrate that the proposed approach improves the tracking precision for user-drawn trajectories after the DNNs are trained on selected periodic trajectories, suggesting the method's potential in real-world applications. 
Tracking errors are reduced by around 40–50% for both training and testing trajectories from users, highlighting the DNNs' capability of generalizing knowledge.", "title": "" }, { "docid": "95784da517f2b001e0c83400a877119f", "text": "Code-switching is a common phenomenon that bilinguals engage in, including bilingual children. While many researchers have analyzed code-switching behaviors to better understand more about the language processes in bilingual children, few have examined how code-switching behavior affects a child’s linguistic competence. This study thus sought to examine the relationship between code-switching and linguistic competency in bilingual children. Fifty-five English–Mandarin bilingual children aged 5 to 6 years were observed during classroom activities over five days (three hours each day). A number of different word roots and mean length of utterance for both languages, and a number of code-switched utterances for each child, were computed. English receptive vocabulary scores were also obtained. Additionally, teachers rated children’s English and Mandarin language competencies approximately six months later. Correlational and hierarchical regression analyses support the argument that code-switching does not indicate linguistic incompetence. Instead, bilingual children’s code-switching strongly suggests that it is a marker of linguistic competence.", "title": "" }, { "docid": "6dc1ebefb6fc3b904c803e61d931cfac", "text": "Traffic classification is the first step for network anomaly detection or network based intrusion detection system and plays an important role in network security domain. In this paper we first presented a new taxonomy of traffic classification from an artificial intelligence perspective, and then proposed a malware traffic classification method using convolutional neural network by taking traffic data as images. This method needed no hand-designed features but directly took raw traffic as input data of classifier. To the best of our knowledge this interesting attempt is the first time of applying representation learning approach to malware traffic classification using raw traffic data. We determined that the best type of traffic representation is session with all layers through eight experiments. The method is validated in two scenarios including three types of classifiers and the experiment results show that our proposed method can satisfy the accuracy requirement of practical application.", "title": "" }, { "docid": "1b9806a90e813b9cd452a223b81aa411", "text": "This communication presents a compact substrate-integrated waveguide (SIW) H-plane horn antenna fed by a novel elevated coplanar waveguide (ECPW) structure. First, the wideband characteristic of the SIW horn antenna is achieved through loading a dielectric slab with gradually decreasing dielectric constants, which can be realized through simply perforating different air vias on the extended slab. Second, in order to sustain an efficient feeding for the relatively thick substrate (0.27λ<sub>0</sub>), an additional metal ground is inserted in the middle of the grounded coplanar waveguide (GCPW). Moreover, a triangular-shaped transition structure is placed at the end of the ECPW to smoothly transmit the energy from the thin ECPW to the thick SIW horn antenna. Finally, a prototype is fabricated to validate the proposed concept. Measured results indicate that the proposed horn antenna operates from 17.4 to 24 GHz. Stable radiation patterns can be observed in the whole operating band. 
The measured results show good accordance with the simulated ones. Above all, the proposed antenna occupies an area of 22 × 56.5 × 4 mm<sup>3</sup> (1.47λ<sub>0</sub> × 3.77λ<sub>0</sub> × 0.27λ<sub>0</sub>), which is much more compact than the previous rectangular waveguide-fed horn antenna (2.33λ<sub>0</sub> × 9.21λ<sub>0</sub> × 0.31λ<sub>0</sub>) (where λ0 is the wavelength at 20 GHz in the free space).", "title": "" }, { "docid": "d99fdf7b559d5609bec3c179dee3cd58", "text": "This study aimed to describe dietary habits of Syrian adolescents attending secondary schools in Damascus and the surrounding areas. A descriptive, cross-sectional study was carried out on 3507 students in 2001. A stratified, 2-stage random cluster sample was used to sample the students. The consumption pattern of food items during the previous week was described. More than 50% of the students said that they had not consumed green vegetables and more than 35% had not consumed meat. More than 35% said that they consumed cheese and milk at least once a day. Only 11.8% consumed fruit 3 times or more daily. Potential determinants of the pattern of food consumption were arialysed. Weight control practices and other eating habits were also described.", "title": "" }, { "docid": "5158b5da8a561799402cb1ef3baa3390", "text": "We study the segmental recurrent neural network for end-to-end acoustic modelling. This model connects the segmental conditional random field (CRF) with a recurrent neural network (RNN) used for feature extraction. Compared to most previous CRF-based acoustic models, it does not rely on an external system to provide features or segmentation boundaries. Instead, this model marginalises out all the possible segmentations, and features are extracted from the RNN trained together with the segmental CRF. In essence, this model is self-contained and can be trained end-to-end. In this paper, we discuss practical training and decoding issues as well as the method to speed up the training in the context of speech recognition. We performed experiments on the TIMIT dataset. We achieved 17.3% phone error rate (PER) from the first-pass decoding — the best reported result using CRFs, despite the fact that we only used a zeroth-order CRF and without using any language model.", "title": "" }, { "docid": "6d323f8dbfd7d2883a4926b80097727c", "text": "This work presents a novel geospatial mapping service, based on OpenStreetMap, which has been designed and developed in order to provide personalized path to users with special needs. This system gathers data related to barriers and facilities of the urban environment via crowd sourcing and sensing done by users. It also considers open data provided by bus operating companies to identify the actual accessibility feature and the real time of arrival at the stops of the buses. The resulting service supports citizens with reduced mobility (users with disabilities and/or elderly people) suggesting urban paths accessible to them and providing information related to travelling time, which are tailored to their abilities to move and to the bus arrival time. The manuscript demonstrates the effectiveness of the approach by means of a case study focusing on the differences between the solutions provided by our system and the ones computed by main stream geospatial mapping services.", "title": "" }, { "docid": "b2a2fdf56a79c1cb82b8b3a55b9d841d", "text": "This paper describes the architecture and implementation of a shortest path processor, both in reconfigurable hardware and VLSI. 
This processor is based on the principles of a recurrent spatiotemporal neural network. The processor’s operation is similar to Dijkstra’s algorithm, and it can be used for network routing calculations. The objective of the processor is to find the least-cost path in a weighted graph between a given node and one or more destinations. The digital implementation exhibits a regular interconnect structure and uses simple processing elements, which makes it well suited for VLSI implementation and reconfigurable hardware.", "title": "" } ]
scidocsrr
2db849b783016f10cae3a9252a323af7
An architecture for aggregating information from distributed data nodes for industrial internet of things
[ { "docid": "13f43cf82f6322c2659f08b009c75076", "text": "The revolution of Internet-of-Things (IoT) is reshaping the modern food supply chains with promising business prospects. To be successful in practice, the IoT solutions should create “income-centric” values beyond the conventional “traceability-centric” values. To accomplish what we promised to users, sensor portfolios and information fusion must correspond to the new requirements introduced by this income-centric value creation. In this paper, we propose a value-centric business-technology joint design framework. Based on it the income-centric added-values including shelf life prediction, sales premium, precision agriculture, and reduction of assurance cost are identified and assessed. Then corresponding sensor portfolios are developed and implemented. Three-tier information fusion architecture is proposed as well as examples about acceleration data processing, self-learning shelf life prediction and real-time supply chain re-planning. The feasibilities of the proposed design framework and solution have been confirmed by the field trials and an implemented prototype system.", "title": "" } ]
[ { "docid": "dbb9db490ae3c1bb91d22ecd8d679270", "text": "The growing computational and storage needs of several scientific applications mandate the deployment of extreme-scale parallel machines, such as IBM's BlueGene/L, which can accommodate as many as 128K processors. In this paper, we present our experiences in collecting and filtering error event logs from a 8192 processor BlueGene/L prototype at IBM Rochester, which is currently ranked #8 in the Top-500 list. We analyze the logs collected from this machine over a period of 84 days starting from August 26, 2004. We perform a three-step filtering algorithm on these logs: extracting and categorizing failure events; temporal filtering to remove duplicate reports from the same location; and finally coalescing failure reports of the same error across different locations. Using this approach, we can substantially compress these logs, removing over 99.96% of the 828,387 original entries, and more accurately portray the failure occurrences on this system.", "title": "" }, { "docid": "4f44ef570fc3cb67a9aea670d019bea3", "text": "In this paper, a license plate detection system is implemented. For this purpose, we improve the contrast at possible locations where there might be a license plate, we propose a filtering method called ''region-based'' in order to smooth the uniform and background areas of an image, we apply the Sobel operator and morphological filtering to extract the vertical edges and the candidate regions respectively, and finally, we segment the plate region by considering some geometrical features. In fact the novelty and the strength of our license plate detection system is in applying the region-based filtering that decreases the run time and increases the accuracy in the two final stages: morphological filtering and using the geometrical features. The experimental results show that our proposed method achieves appropriate performance in different scenarios. We should mention that our system is reliable because the accuracy is above 92% in average for different scenarios and it is also practical because of the low computational cost. An intelligent transportation system (ITS) is an important tool for analyzing and controlling the moving vehicles in cities and highways [1] and in recent years much research on ITS has been carried out. Nowadays automatic vehicle license plate (VLP) recognition is a key ingredient for any ITS such as security control of restricted areas, traffic law enforcements, automatic payment of tolls on highways and parking management systems. In these examples, a camera captures the vehicle images and a computer processes the captured images, detects the car license plate from the input image and then reads the information on the license plate by applying various image processing and optical character recognition techniques. Generally , an automatic VLP recognition system is made up of four modules: image pre-processing, license plate detection, character segmentation and character recognition (Fig. 1). Among these four modules in Fig. 1, license plate detection is the most important and the most difficult task in any VLP recognition system because of images with low contrast, blurring and dirty plates. 
The most common solutions for VLP detection include analyzing the texture [1,2], edge extraction [3–6], morphological filtering [7,8], color feature [7,9], combining of edge statistics and color feature [10], combining of edge and morphology operations [11], Hough transform [12], neural networks [6], learning-based approaches [13], Gabor filtering [14] and Wavelet analysis [15]. An edge approach is normally simple and fast. Texture [1,2] and edge based methods [3–6] are widely used …", "title": "" }, { "docid": "0f2d6a8ce07258658f24fb4eec006a02", "text": "Dynamic bandwidth allocation in passive optical networks presents a key issue for providing efficient and fair utilization of the PON upstream bandwidth while supporting the QoS requirements of different traffic classes. In this article we compare the typical characteristics of DBA, such as bandwidth utilization, delay, and jitter at different traffic loads, within the two major standards for PONs, Ethernet PON and gigabit PON. A particular PON standard sets the framework for the operation of DBA and the limitations it faces. We illustrate these differences between EPON and GPON by means of simulations for the two standards. Moreover, we consider the evolution of both standards to their next-generation counterparts with the bit rate of 10 Gb/s and the implications to the DBA. A new simple GPON DBA algorithm is used to illustrate GPON performance. It is shown that the length of the polling cycle plays a crucial but different role for the operation of the DBA within the two standards. Moreover, only minor differences regarding DBA for current and next-generation PONs were found.", "title": "" }, { "docid": "40dfe4f55e2afe289bfe8a540356ef89", "text": "We explore the Tully-Fisher relation over five decades in stellar mass in galaxies with circular velocities ranging over 30 ≲ Vc ≲ 300 km s⁻¹. We find a clear break in the optical Tully-Fisher relation: field galaxies with Vc ≲ 90 km s⁻¹ fall below the relation defined by brighter galaxies. These faint galaxies, however, are very rich in gas; adding in the gas mass and plotting the baryonic disk mass Md = M* + Mgas in place of luminosity restores the single linear relation. The Tully-Fisher relation thus appears fundamentally to be a relation between rotation velocity and total baryonic mass of the form Md ~ Vc⁴.", "title": "" }, { "docid": "2e088ce4f7e5b3633fa904eab7563875", "text": "Large numbers of websites have started to markup their content using standards such as Microdata, Microformats, and RDFa. The marked-up content elements comprise descriptions of people, organizations, places, events, products, ratings, and reviews. This development has accelerated in last years as major search engines such as Google, Bing and Yahoo! use the markup to improve their search results. Embedding semantic markup facilitates identifying content elements on webpages. However, the markup is mostly not as fine-grained as desirable for applications that aim to integrate data from large numbers of websites. This paper discusses the challenges that arise in the task of integrating descriptions of electronic products from several thousand e-shops that offer Microdata markup. We present a solution for each step of the data integration process including Microdata extraction, product classification, product feature extraction, identity resolution, and data fusion. 
We evaluate our processing pipeline using 1.9 million product offers from 9240 e-shops which we extracted from the Common Crawl 2012, a large public Web corpus.", "title": "" }, { "docid": "1feaf48291b7ea83d173b70c23a3b7c0", "text": "Machine learning plays a critical role in extracting meaningful information out of the zetabytes of sensor data collected every day. For some applications, the goal is to analyze and understand the data to identify trends (e.g., surveillance, portable/wearable electronics); in other applications, the goal is to take immediate action based the data (e.g., robotics/drones, self-driving cars, smart Internet of Things). For many of these applications, local embedded processing near the sensor is preferred over the cloud due to privacy or latency concerns, or limitations in the communication bandwidth. However, at the sensor there are often stringent constraints on energy consumption and cost in addition to throughput and accuracy requirements. Furthermore, flexibility is often required such that the processing can be adapted for different applications or environments (e.g., update the weights and model in the classifier). In many applications, machine learning often involves transforming the input data into a higher dimensional space, which, along with programmable weights, increases data movement and consequently energy consumption. In this paper, we will discuss how these challenges can be addressed at various levels of hardware design ranging from architecture, hardware-friendly algorithms, mixed-signal circuits, and advanced technologies (including memories and sensors).", "title": "" }, { "docid": "a06c9d681bb8a8b89a8ee64a53e3b344", "text": "This paper introduces CIEL, a universal execution engine for distributed data-flow programs. Like previous execution engines, CIEL masks the complexity of distributed programming. Unlike those systems, a CIEL job can make data-dependent control-flow decisions, which enables it to compute iterative and recursive algorithms. We have also developed Skywriting, a Turingcomplete scripting language that runs directly on CIEL. The execution engine provides transparent fault tolerance and distribution to Skywriting scripts and highperformance code written in other programming languages. We have deployed CIEL on a cloud computing platform, and demonstrate that it achieves scalable performance for both iterative and non-iterative algorithms.", "title": "" }, { "docid": "7a06c1b73662a377875da0ea2526c610", "text": "a Earthquake Engineering and Structural Dynamics Laboratory (EESD), School of Architecture, Civil and Environmental Engineering (ENAC), École Polytechnique Fédérale de Lausanne (EPFL), EPFL ENAC IIC EESD, GC B2 515, Station 18, CH – 1015 Lausanne, Switzerland b Earthquake Engineering and Structural Dynamics Laboratory (EESD), School of Architecture, Civil and Environmental Engineering (ENAC), École Polytechnique Fédérale de Lausanne (EPFL), EPFL ENAC IIC EESD, GC B2 504, Station 18, CH – 1015 Lausanne, Switzerland", "title": "" }, { "docid": "fe79931e10ab26d1abb3a6cf07cecc85", "text": "Recent advancements in Ku-band high throughput satellites (HTS) will allow commercial Ku-band aeronautical mobile satellite systems (AMSS) to equal or exceed commercial Ka-band AMSS systems on cost and performance. 
The AMSS market is currently dominated by Ku-band solutions, both in the commercial sector (eXConnect, Row44, Yonder) and the government sector (Tachyon, Boeing Broadband Satellite Network (formerly Connexion), various UAV and ISR systems). All of these systems use conventional continental-scale wide beams that are leased from Fixed Satellite Service (FSS) providers such as Intelsat and Eutelsat. In the next several years the dominance of Ku-band AMSS will be challenged by Ka-band systems such as Inmarsat-5, which use multiple spot beams to offer enhanced performance. Previous work has suggested that these systems may offer better performance and better economics than conventional Ku-band systems [1]. The key insight of this paper is that the performance advantage of spot beam Ka-band systems comes from their smaller beam size rather than their frequency of operation, meaning that a Ku-band spot beam satellite can also be built with similar sized spot beams to Ka-band systems and can achieve competitive cost and performance as compared to a Ka-band spot beam systems. High throughput spot beam Ku-band systems, such as Intelsat's EpicNG system, are now in development and will be fielded in the same timeframe as Inmarsat-5. This result has critical implications for existing users and operators of AMSS systems: - Currently installed Ku-band terminals will be able to take advantage of dramatic improvements in performance when high throughput Ku-band becomes available - Current Ku-band will not have to undergo costly Ka-band retrofits to maintain competitive performance - Operators can continue to invest in Ku-band terminals today without fear of obsolescence in the near future - The AMSS market will continue to be diverse and competitive for years to come.", "title": "" }, { "docid": "ff7c2ec1a09923262123035a72922215", "text": "The repetitive structure of genomic DNA holds many secrets to be discovered. A systematic study of repetitive DNA on a genomic or inter-genomic scale requires extensive algorithmic support. The REPuter program described herein was designed to serve as a fundamental tool in such studies. Efficient and complete detection of various types of repeats is provided together with an evaluation of significance and interactive visualization. This article circumscribes the wide scope of repeat analysis using applications in five different areas of sequence analysis: checking fragment assemblies, searching for low copy repeats, finding unique sequences, comparing gene structures and mapping of cDNA/EST sequences.", "title": "" }, { "docid": "9bca50add40d1acbcce647df2f4b6940", "text": "Hex is a beautiful game with simple rules and a strategic complexity comparable to that of Chess and Go. The massive game-tree search techniques developed mostly for Chess and successfully used for Checkers and a number of other games, become less useful for games with large branching factors like Hex and Go. In this paper, we describe deduction rules, which are used to calculate values of complex Hex positions recursively starting from the simplest ones. We explain how this approach is implemented in HEXY—the strongest Hex-playing computer program, the Gold medallist of the 5th Computer Olympiad in London, August 2000.  2001 Elsevier Science B.V. 
All rights reserved.", "title": "" }, { "docid": "0cf1f63fd39c8c74465fad866958dac6", "text": "Software development organizations that have been employing capability maturity models, such as SW-CMM or CMMI for improving their processes are now increasingly interested in the possibility of adopting agile development methods. In the context of project management, what can we say about Scrum’s alignment with CMMI? The aim of our paper is to present the mapping between CMMI and the agile method Scrum, showing major gaps between them and identifying how organizations are adopting complementary practices in their projects to make these two approaches more compliant. This is useful for organizations that have a plan-driven process based on the CMMI model and are planning to improve the agility of processes or to help organizations to define a new project management framework based on both CMMI and Scrum practices.", "title": "" }, { "docid": "ee99824c45841124d83c41c2d4468e71", "text": "Long short-tem memory (LSTM) recurrent neural networks have been shown to give state-of-the-art performance on many speech recognition tasks. To achieve a further performance improvement, in this paper, maxout units are proposed to be integrated with the LSTM cells, considering those units have brought significant improvements to deep feed-forward neural networks. A novel architecture was constructed by replacing the input activation units (generally tanh) in the LSTM networks with maxout units. We implemented the LSTM network training on multi-GPU devices with truncated BPTT, and empirically evaluated the proposed designs on a large vocabulary Mandarin conversational telephone speech recognition task. The experimental results support our claim that the performance of LSTM based acoustic models can be further improved using the maxout units.", "title": "" }, { "docid": "d6d2e1c4da299fcc8dc1cff9c9999b1c", "text": "Purpose – The purpose of this paper is to describe the successful use of a knowledge management (KM) model in a public sector organization. Design/methodology/approach – Building on the theoretical foundation of others, this case study demonstrates the value of KM modeling in a science-based initiative in the Canadian public service. Findings – The Inukshuk KM model, which comprises the five elements of technology, leadership, culture, measurement, and process, provides a holistic approach in public service KM. Practical implications – The proposed model can be employed by other initiatives to facilitate KM planning and implementation. Originality/value – This the first project to consider how KM models may be implemented in a Canadian public service environment.", "title": "" }, { "docid": "186b616c56df44ad55cb39ee63ebe906", "text": "RIPEMD-160 is a fast cryptographic hash function that is tuned towards software implementations on 32-bit architectures. It has evolved from the 256-bit extension of MD4, which was introduced in 1990 by Ron Rivest [20, 21]. Its main design feature are two different and independent parallel chains, the result of which are combined at the end of every application of the compression function. As suggested by its name, RIPEMD-160 offers a 160-bit result. It is intended to provide a high security level for the next 10 years or more. RIPEMD-128 is a faster variant of RIPEMD-160, which provides a 128-bit result. Together with SHA-1, RIPEMD-160 and RIPEMD-128 have been included in the International Standard ISO/IEC 10118-3, the publication of which is expected for late 1997 [17]. 
The goal of this article is to motivate the existence of RIPEMD160, to explain the main design features and to provide a concise description of the algorithm.", "title": "" }, { "docid": "35abee3cab543f8dea757a424db4bcfc", "text": "Real-time abnormal driving behaviors monitoring is a corner stone to improving driving safety. Existing works on driving behaviors monitoring using smartphones only provide a coarsegrained result, i.e. distinguishing abnormal driving behaviors from normal ones. To improve drivers' awareness of their driving habits so as to prevent potential car accidents, we need to consider a finegrained monitoring approach, which not only detects abnormal driving behaviors but also identifies specific types of abnormal driving behaviors, i.e. Weaving, Swerving, Sideslipping, Fast U-turn, Turning with a wide radius and Sudden braking. Through empirical studies of the 6-month driving traces collected from real driving environments, we find that all of the six types of driving behaviors have their unique patterns on acceleration and orientation. Recognizing this observation, we further propose a finegrained abnormal Driving behavior Detection and iDentification system, D3, to perform real-time high-accurate abnormal driving behaviors monitoring using smartphone sensors. By extracting unique features from readings of smartphones' accelerometer and orientation sensor, we first identify sixteen representative features to capture the patterns of driving behaviors. Then, a machine learning method, Support Vector Machine (SVM), is employed to train the features and output a classifier model which conducts fine-grained identification. From results of extensive experiments with 20 volunteers driving for another 4 months in real driving environments, we show that D3 achieves an average total accuracy of 95.36%.", "title": "" }, { "docid": "5be55ce7d8f97689bf54028063ba63d7", "text": "Early diagnosis, playing an important role in preventing progress and treating the Alzheimer's disease (AD), is based on classification of features extracted from brain images. The features have to accurately capture main AD-related variations of anatomical brain structures, such as, e.g., ventricles size, hippocampus shape, cortical thickness, and brain volume. This paper proposed to predict the AD with a deep 3D convolutional neural network (3D-CNN), which can learn generic features capturing AD biomarkers and adapt to different domain datasets. The 3D-CNN is built upon a 3D convolutional autoencoder, which is pre-trained to capture anatomical shape variations in structural brain MRI scans. Fully connected upper layers of the 3D-CNN are then fine-tuned for each task-specific AD classification. Experiments on the CADDementia MRI dataset with no skull-stripping preprocessing have shown our 3D-CNN outperforms several conventional classifiers by accuracy. Abilities of the 3D-CNN to generalize the features learnt and adapt to other domains have been validated on the ADNI dataset.", "title": "" }, { "docid": "9e9ca921df1a2a8b8ddb37d1ca7be41d", "text": "The quantity of data transmitted in the network intensified rapidly with the increased dependency on social media applications, sensors for data acquisitions and smartphones utilizations. Typically, such data is unstructured and originates from multiple sources in different format. 
Consequently, the abstraction of data for rendering is difficult, which has led to the development of a computing system that is able to store data in unstructured format and support distributed parallel computing. To date, there exist approaches to handle big data using NoSQL. This paper provides a review and a comparison between NoSQL and the Relational Database Management System (RDBMS). By reviewing each approach, the mechanics of NoSQL systems can be clearly distinguished from the RDBMS. Basically, such systems rely on multiple factors, which include the query language, architecture, data model and consumer API. This paper also defines the application that matches each system and is subsequently able to accurately correlate it to a specific NoSQL system.", "title": "" }, { "docid": "ed0444685c9a629c7d1fda7c4912fd55", "text": "Citrus fruits have potential health-promoting properties and their essential oils have long been used in several applications. Due to biological effects described to some citrus species, in this study our objectives were to analyze and compare the phytochemical composition and evaluate the anti-inflammatory effect of essential oils (EO) obtained from four different Citrus species. Mice were treated with EO obtained from C. limon, C. latifolia, C. aurantifolia or C. limonia (10 to 100 mg/kg, p.o.) and their anti-inflammatory effects were evaluated in chemical induced inflammation (formalin-induced licking response) and carrageenan-induced inflammation in the subcutaneous air pouch model. A possible antinociceptive effect was evaluated in the hot plate model. Phytochemical analyses indicated the presence of geranial, limonene, γ-terpinene and others. EOs from C. limon, C. aurantifolia and C. limonia exhibited anti-inflammatory effects by reducing cell migration, cytokine production and protein extravasation induced by carrageenan. These effects were also obtained with similar amounts of pure limonene. It was also observed that C. aurantifolia induced myelotoxicity in mice. Anti-inflammatory effect of C. limon and C. limonia is probably due to their large quantities of limonene, while the myelotoxicity observed with C. aurantifolia is most likely due to the high concentration of citral. Our results indicate that these EOs from C. limon, C. aurantifolia and C. limonia have a significant anti-inflammatory effect; however, care should be taken with C. aurantifolia.", "title": "" }, { "docid": "2a89fb135d7c53bda9b1e3b8598663a5", "text": "We propose a new equilibrium enforcing method paired with a loss derived from the Wasserstein distance for training auto-encoder based Generative Adversarial Networks. This method balances the generator and discriminator during training. Additionally, it provides a new approximate convergence measure, fast and stable training and high visual quality. We also derive a way of controlling the trade-off between image diversity and visual quality. We focus on the image generation task, setting a new milestone in visual quality, even at higher resolutions. This is achieved while using a relatively simple model architecture and a standard training procedure.", "title": "" } ]
scidocsrr
69406f0a12e86a7bf45a983547c151cc
A Lazy Man's Approach to Benchmarking: Semisupervised Classifier Evaluation and Recalibration
[ { "docid": "65901a189e87983dfd01db0161106a86", "text": "The presence of bias in existing object recognition datasets is now well-known in the computer vision community. While it remains in question whether creating an unbiased dataset is possible given limited resources, in this work we propose a discriminative framework that directly exploits dataset bias during training. In particular, our model learns two sets of weights: (1) bias vectors associated with each individual dataset, and (2) visual world weights that are common to all datasets, which are learned by undoing the associated bias from each dataset. We demonstrate the effectiveness of our model by applying the learned weights to a novel, unseen dataset. We find that it is beneficial to explicitly account for bias when combining multiple datasets. (For more details refer to [3] and http://undoingbias.csail.mit.edu)", "title": "" }, { "docid": "3ac2f2916614a4e8f6afa1c31d9f704d", "text": "This paper shows that the accuracy of learned text classifiers can be improved by augmenting a small number of labeled training documents with a large pool of unlabeled documents. This is important because in many text classification problems obtaining training labels is expensive, while large quantities of unlabeled documents are readily available. We introduce an algorithm for learning from labeled and unlabeled documents based on the combination of Expectation-Maximization (EM) and a naive Bayes classifier. The algorithm first trains a classifier using the available labeled documents, and probabilistically labels the unlabeled documents. It then trains a new classifier using the labels for all the documents, and iterates to convergence. This basic EM procedure works well when the data conform to the generative assumptions of the model. However these assumptions are often violated in practice, and poor performance can result. We present two extensions to the algorithm that improve classification accuracy under these conditions: (1) a weighting factor to modulate the contribution of the unlabeled data, and (2) the use of multiple mixture components per class. Experimental results, obtained using text from three different real-world tasks, show that the use of unlabeled data reduces classification error by up to 30%.", "title": "" }, { "docid": "c9b6f91a7b69890db88b929140f674ec", "text": "Pedestrian detection is a key problem in computer vision, with several applications that have the potential to positively impact quality of life. In recent years, the number of approaches to detecting pedestrians in monocular images has grown steadily. However, multiple data sets and widely varying evaluation protocols are used, making direct comparisons difficult. To address these shortcomings, we perform an extensive evaluation of the state of the art in a unified framework. We make three primary contributions: 1) We put together a large, well-annotated, and realistic monocular pedestrian detection data set and study the statistics of the size, position, and occlusion patterns of pedestrians in urban scenes, 2) we propose a refined per-frame evaluation methodology that allows us to carry out probing and informative comparisons, including measuring performance in relation to scale and occlusion, and 3) we evaluate the performance of sixteen pretrained state-of-the-art detectors across six data sets. Our study allows us to assess the state of the art and provides a framework for gauging future efforts. 
Our experiments show that despite significant progress, performance still has much room for improvement. In particular, detection is disappointing at low resolutions and for partially occluded pedestrians.", "title": "" }, { "docid": "20deb56f6d004a8e33d1e1a4f579c1ba", "text": "Hamiltonian dynamics can be used to produce distant proposals for the Metropolis algorithm, thereby avoiding the slow exploration of the state space that results from the diffusive behaviour of simple random-walk proposals. Though originating in physics, Hamiltonian dynamics can be applied to most problems with continuous state spaces by simply introducing fictitious “momentum” variables. A key to its usefulness is that Hamiltonian dynamics preserves volume, and its trajectories can thus be used to define complex mappings without the need to account for a hard-to-compute Jacobian factor — a property that can be exactly maintained even when the dynamics is approximated by discretizing time. In this review, I discuss theoretical and practical aspects of Hamiltonian Monte Carlo, and present some of its variations, including using windows of states for deciding on acceptance or rejection, computing trajectories using fast approximations, tempering during the course of a trajectory to handle isolated modes, and short-cut methods that prevent useless trajectories from taking much computation time.", "title": "" } ]
[ { "docid": "d3c83c600637d9aedd293f2d1b20caaa", "text": "We introduce an end-to-end private deep learning framework, applied to the task of predicting 30-day readmission from electronic health records. By using differential privacy during training and homomorphic encryption during inference, we demonstrate that our proposed pipeline could maintain high performance while providing robust privacy guarantees against information leak from data transmission or attacks against the model. We also explore several techniques to address the privacy-utility trade-off in deploying neural networks with privacy mechanisms, improving the accuracy of differentially-private training and the computation cost of encrypted operations using ideas from both machine learning and cryptography.", "title": "" }, { "docid": "98fb03e0e590551fa9e7c82b827c78ed", "text": "This article describes on-going developments of the VENUS European Project (Virtual ExploratioN of Underwater Sites, http://www.venus-project.eu) concerning the first mission to sea in Pianosa Island, Italy in October 2006. The VENUS project aims at providing scientific methodologies and technological tools for the virtual exploration of deep underwater archaeological sites. The VENUS project will improve the accessibility of underwater sites by generating thorough and exhaustive 3D records for virtual exploration. In this paper we focus on the underwater photogrammetric approach used to survey the archaeological site of Pianosa. After a brief presentation of the archaeological context we shall see the calibration process in such a context. The next part of this paper is dedicated to the survey: it is divided into two parts: a DTM of the site (combining acoustic bathymetry and photogrammetry) and a specific artefact plotting dedicated to the amphorae present on the site. * Corresponding author. This is useful to know for communication with the appropriate person in cases with more than one author. ** http://cordis.europa.eu/ist/digicult/venus.htm or the project web site : http://www.venus-project.eu 1. VENUS, VIRTUAL EXPLORATION OF UNDERWATER SITES The VENUS project is funded by European Commission, Information Society Technologies (IST) programme of the 6th FP for RTD . It aims at providing scientific methodologies and technological tools for the virtual exploration of deep underwater archaeological sites. (Chapman et alii, 2006). Underwater archaeological sites, for example shipwrecks, offer extraordinary opportunities for archaeologists due to factors such as darkness, low temperatures and a low oxygen rate which are favourable to preservation. On the other hand, these sites can not be experienced first hand and today are continuously jeopardised by activities such as deep trawling that destroy their surface layer. The VENUS project will improve the accessibility of underwater sites by generating thorough and exhaustive 3D records for virtual exploration. The project team plans to survey shipwrecks at various depths and to explore advanced methods and techniques of data acquisition through autonomous or remotely operated unmanned vehicles with innovative sonar and photogrammetry equipment. Research will also cover aspects such as data processing and storage, plotting of archaeological artefacts and information system management. This work will result in a series of best practices and procedures for collecting and storing data. 
Further, VENUS will develop virtual reality and augmented reality tools for the visualisation of an immersive interaction with a digital model of an underwater site. The model will be made accessible online, both as an example of digital preservation and for demonstrating new facilities of exploration in a safe, cost-effective and pedagogical environment. The virtual underwater site will provide archaeologists with an improved insight into the data and the general public with simulated dives to the site. The VENUS consortium, composed of eleven partners, is pooling expertise in various disciplines: archaeology and underwater exploration, knowledge representation and photogrammetry, virtual reality and digital data preservation. This paper focuses on the first experimentation in Pianosa Island, Tuscany, Italy. The document is structured as follows. A short description of the archaeological context, then the next section explains the survey method: calibration, collecting photographs using ROV and divers, photographs orientation and a particular way to measure amphorae with photogrammetry using archaeological knowledge. A section shows 3D results in VRML and finally we present the future planned work. 2. THE UNDERWATER ARCHAEOLOGICAL SITE OF PIANOSA ISLAND The underwater archaeological site of Pianosa, discovered in 1989 by volunteer divers (Giuseppe Adriani, Paolo Vaccari), is located at a depth of 35 m, close to the Scoglio della Scola, in XXI International CIPA Symposium, 01-06 October, Athens, Greece", "title": "" }, { "docid": "5a5c71b56cf4aa6edff8ecc57298a337", "text": "The learning process of a multilayer perceptron requires the optimization of an error function E(y,t) comparing the predicted output, y, and the observed target, t. We review some usual error functions, analyze their mathematical properties for data classification purposes, and introduce a new one, E(Exp), inspired by the Z-EDM algorithm that we have recently proposed. An important property of E(Exp) is its ability to emulate the behavior of other error functions by the sole adjustment of a real-valued parameter. In other words, E(Exp) is a sort of generalized error function embodying complementary features of other functions. The experimental results show that the flexibility of the new, generalized, error function allows one to obtain the best results achievable with the other functions with a performance improvement in some cases.", "title": "" }, { "docid": "9af928f8d620630cfd2938905adeb930", "text": "This paper describes the application of a pedagogical model called \\learning as a research activity\" [D. Gil-P erez and J. Carrascosa-Alis, Science Education 78 (1994) 301{315] to the design and implementation of a two-semester course on compiler design for Computer Engineering students. In the new model, the classical pattern of classroom activity based mainly on one-way knowledge transmission/reception of pre-elaborated concepts is replaced by an active working environment that resembles that of a group of novice researchers under the supervision of an expert. The new model, rooted in the now commonly-accepted constructivist postulates, strives for meaningful acquisition of fundamental concepts through problem solving |in close parallelism to the construction of scienti c knowledge through history.", "title": "" }, { "docid": "db5dcaddaa38f472afaa84b61e4ea650", "text": "The dynamics of load, especially induction motors, are the driving force for short-term voltage stability (STVS) problems. 
In this paper, the equivalent rotation speed of motors is identified online and its recovery time is estimated next to realize an emergency-demand-response (EDR) based under speed load shedding (USLS) scheme to improve STVS. The proposed scheme consists of an EDR program and two regular stages (RSs). In the EDR program, contracted load is used as a fast-response resource rather than the last defense. The estimated recovery time (ERT) is used as the triggering signal for the EDR program. In the RSs, the amount of load to be shed at each bus is determined according to the assigned weights based on ERTs. Case studies on a practical power system in China Southern Power Grid have validated the performance of the proposed USLS scheme under various contingency scenarios. The utilization of EDR resources and the adaptive distribution of shedding amount in RSs guarantee faster voltage recovery. Therefore, USLS offers a new and more effective approach compared with existing under voltage load shedding to improve STVS.", "title": "" }, { "docid": "139cd2b11e4126bfaa2522fdc812e066", "text": "We consider aspects pertinent to evaluating creativity to b e input, output and the process by which the output is achieved. These issues may be further divided, and we highlight associated justifications and controversies. Appropriate meth ods of measuring these aspects are suggested and discussed.", "title": "" }, { "docid": "84499d49c5e2d7ed9f30b754329d5175", "text": "The evolution of natural ecosystems is controled by a high level of biodiversity, In sharp contrast, intensive agricultural systems involve monocultures associated with high input of chemical fertilisers and pesticides. Intensive agricultural systems have clearly negative impacts on soil and water quality and on biodiversity conservation. Alternatively, cropping systems based on carefully designed species mixtures reveal many potential advantages under various conditions, both in temperate and tropical agriculture. This article reviews those potential advantages by addressing the reasons for mixing plant species; the concepts and tools required for understanding and designing cropping systems with mixed species; and the ways of simulating multispecies cropping systems with models. Multispecies systems are diverse and may include annual and perennial crops on a gradient of complexity from 2 to n species. A literature survey shows potential advantages such as (1) higher overall productivity, (2) better control of pests and diseases, (3) enhanced ecological services and (4) greater economic profitability. Agronomic and ecological conceptual frameworks are examined for a clearer understanding of cropping systems, including the concepts of competition and facilitation, above- and belowground interactions and the types of biological interactions between species that enable better pest management in the system. After a review of existing models, future directions in modelling plant mixtures are proposed. We conclude on the need to enhance agricultural research on these multispecies systems, combining both agronomic and ecological concepts and tools.", "title": "" }, { "docid": "f62e97d9780af541e3df0adb276751e4", "text": "Microtask crowdsouring has emerged as an excellent means to acquire human input on demand, and has found widespread application in solving a variety of problems. Popular examples include surveys, content creation, acquisition of image annotations, etc. 
However, there are a number of challenges that need to be overcome to realize the true potential of this paradigm. With an aim to improve the effectiveness of microtask crowdsourcing, we identify three main challenges to address within the scope of this thesis. The first challenge is the limited understanding of crowdsourced tasks and crowd worker characteristics. Understanding the dynamics of tasks that are crowdsourced and the behavior of workers contributing to tasks, can play a vital role in effective task design. Secondly, current worker pre-selection mechanisms are simplistic and inadequate. It is challenging to recruit workers with desirable skills in the absence of historical knowledge regarding the quality of work produced. There is a need for stronger indicators of worker competence and more effective pre-selection mechanisms. Finally, there has been an incomplete consideration of factors that influence and shape the quality of work produced in crowdsourced microtasks. It is important to fully understand different aspects that influence crowd work. In this dissertation, we tackle the aforementioned challenges and propose novel methods to overcome existing problems in each case. Our main contributions are described below. • Advancing the Current Understanding of Task Types, Worker Behavior and Quality Control — Our findings from an extensive study of 1,000 workers on CrowdFlower advance the current understanding of crowdsourced microtasks and corresponding worker behavior. We propose a two-level categorization scheme for microtasks and revealed insights into the task affinity of workers, effort exerted to complete tasks of various types, and worker satisfaction with the monetary incentives. We analyze the prevalent malicious activity on crowdsourcing platforms and study the behavior exhibited by trustworthy and untrustworthy workers, particularly on crowdsourced surveys. To improve the overall quality of results, we propose behavioral metrics that can be used to measure and counter undesirable or potentially malicious activity in crowdsourced tasks. Considering these aspects, we prescribe guidelines for the effective design of crowdsourced tasks. • Novel Mechanisms for Worker Pre-selection — We propose two distinct and novel methods for worker pre-selection that outperform state-ofthe-art approaches across different types of tasks. First, we define a data-driven worker typology. By relying on low-level behavioral traces of workers, we propose behavioral features for worker modeling and preselection. Our findings bear important implications for crowdsourcing systems where a worker’s behavioral type is unknown prior to participation. Next, we make a case for competence-based pre-selection in crowdsourcing marketplaces. We show the implications of flawed selfassessments on real-world microtasks, and propose a novel worker preselection method that considers accuracy of worker self-assessments. Our results confirm that requesters in crowdsourcing platforms can greatly benefit by additionally considering the accuracy of worker selfassessments in the pre-screening phase. • Revealing Hidden Factors that Affect Crowd Work: The Cases of Task Clarity and Work Environments — Workers of microtask crowdsourcing marketplaces strive to find a balance between the need for monetary income and the need for high reputation. Such balance is often threatened by poorly formulated tasks, as workers attempt their execution despite a sub-optimal understanding of the work to be done. 
We unearth the role of task clarity as a characterising property of tasks in crowdsourcing, and propose a novel model for task clarity based on the goal and role clarity constructs. We reveal that task clarity is coherently perceived by crowd workers, and is affected by the type of the task. We then propose a set of features to capture task clarity, and used the acquired labels to train and validate a supervised machine learning model for task clarity prediction. Another aspect that has remained largely invisible in microtask crowdsourcing is that of work environments ; defined as the hardware and software affordances at the disposal of crowd workers which are used to complete microtasks on crowdsourcing platforms. Through multiple studies, we reveal the significant role of work environments in the shaping of crowd work. Our findings indicate that crowd workers are embedded in a variety of work environments which influence the quality of work produced. Depending on the design of UI elements in microtasks, we found that some work environments support crowd workers more than others.", "title": "" }, { "docid": "10f3cafc05b3fb3b235df34aebbe0e23", "text": "To cope with monolithic controller replicas and the current unbalance situation in multiphase converters, a pseudo-ramp current balance technique is proposed to achieve time-multiplexing current balance in voltage-mode multiphase DC-DC buck converter. With only one modulation controller, silicon area and power consumption caused by the replicas of controller can be reduced significantly. Current balance accuracy can be further enhanced since the mismatches between different controllers caused by process, voltage, and temperature variations are removed. Moreover, the offset cancellation control embedded in the current matching unit is used to eliminate intrinsic offset voltage existing at the operational transconductance amplifier for improved current balance. An explicit model, which contains both voltage and current balance loops with non-ideal effects, is derived for analyzing system stability. Experimental results show that current difference between each phase can be decreased by over 83% under both heavy and light load conditions.", "title": "" }, { "docid": "a412f5facafdb2479521996c05143622", "text": "A temperature and supply independent on-chip reference relaxation oscillator for low voltage design is described. The frequency of oscillation is mainly a function of a PVT robust biasing current. The comparator for the relaxation oscillator is replaced with a high speed common-source stage to eliminate the temperature dependency of the comparator delay. The current sources and voltages are biased by a PVT robust references derived from a bandgap circuitry. This oscillator is designed in TSMC 65 nm CMOS process to operate with a minimum supply voltage of 1.4 V and consumes 100 μW at 157 MHz frequency of oscillation. The oscillator exhibits frequency variation of 1.6% for supply changes from 1.4 V to 1.9 V, and ±1.2% for temperature changes from 20°C to 120°C.", "title": "" }, { "docid": "90f188c1f021c16ad7c8515f1244c08a", "text": "Minimally invasive principles should be the driving force behind rehabilitating young individuals affected by severe dental erosion. The maxillary anterior teeth of a patient, class ACE IV, has been treated following the most conservatory approach, the Sandwich Approach. These teeth, if restored by conventional dentistry (eg, crowns) would have required elective endodontic therapy and crown lengthening. 
To preserve the pulp vitality, six palatal resin composite veneers and four facial ceramic veneers were delivered instead with minimal, if any, removal of tooth structure. In this article, the details about the treatment are described.", "title": "" }, { "docid": "961c4da65983926a8bc06189f873b006", "text": "By studying two well known hypotheses in economics, this paper illustrates how emergent properties can be shown in an agent-based artificial stock market. The two hypotheses considered are the efficient market hypothesis and the rational expectations hypothesis. We inquire whether the macrobehavior depicted by these two hypotheses is consistent with our understanding of the microbehavior. In this agent-based model, genetic programming is applied to evolving a population of traders learning over time. We first apply a series of econometric tests to show that the EMH and the REH can be satisfied with some portions of the artificial time series. Then, by analyzing traders’ behavior, we show that these aggregate results cannot be interpreted as a simple scaling-up of individual behavior. A conjecture based on sunspot-like signals is proposed to explain why macrobehavior can be very different from microbehavior. We assert that the huge search space attributable to genetic programming can induce sunspot-like signals, and we use simulated evolved complexity of forecasting rules and Granger causality tests to examine this assertion. © 2002 Elsevier Science B.V. All rights reserved. JEL classification: G12: asset pricing; G14: information and market efficiency; D83: search, learning, and information", "title": "" }, { "docid": "e19b68314e61f96dea0d7d98f80ca19b", "text": "With growing interest in adversarial machine learning, it is important for practitioners and users of machine learning to understand how their models may be attacked. We present a web-based visualization tool, ADVERSARIALPLAYGROUND, to demonstrate the efficacy of common adversarial methods against a convolutional neural network. ADVERSARIAL-PLAYGROUND provides users an efficient and effective experience in exploring algorithms for generating adversarial examples — samples crafted by an adversary to fool a machine learning system. To enable fast and accurate responses to users, our webapp employs two key features: (1) We split the visualization and evasive sample generation duties between client and server while minimizing the transferred data. (2) We introduce a variant of the Jacobian Saliency Map Approach that is faster and yet maintains a comparable evasion rate 1.", "title": "" }, { "docid": "bc2bc8b2d9db3eb14e126c627248a66a", "text": "With the growing complexity of today's software applications injunction with the increasing competitive pressure has pushed the quality assurance of developed software towards new heights. Software testing is an inevitable part of the Software Development Lifecycle, and keeping in line with its criticality in the pre and post development process makes it something that should be catered with enhanced and efficient methodologies and techniques. This paper aims to discuss the existing as well as improved testing techniques for the better quality assurance purposes.", "title": "" }, { "docid": "c78922c2c3ee5425701da2ecb67da14d", "text": "We present an analysis of security vulnerabilities in the domain name system (DNS) and the DNS security extensions (DNSSEC). DNS data that is provided by name servers lacks support for data origin authentication and data integrity. 
This makes DNS vulnerable to man in the middle (MITM) attacks, as well as a range of other attacks. To make DNS more robust, DNSSEC was proposed by the Internet Engineering Task Force (IETF). DNSSEC provides data origin authentication and integrity by using digital signatures. Although DNSSEC provides security for DNS data, it suffers from serious security and operational flaws. We discuss the DNS and DNSSEC architectures, and consider the associated security vulnerabilities", "title": "" }, { "docid": "af3b81357bcb908c290e78412940e2ea", "text": "Ambient occlusion and directional (spherical harmonic) occlusion have become a staple of production rendering because they capture many visually important qualities of global illumination while being reusable across multiple artistic lighting iterations. However, ray-traced solutions for hemispherical occlusion require many rays per shading point (typically 256-1024) due to the full hemispherical angular domain. Moreover, each ray can be expensive in scenes with moderate to high geometric complexity. However, many nearby rays sample similar areas, and the final occlusion result is often low frequency. We give a frequency analysis of shadow light fields using distant illumination with a general BRDF and normal mapping, allowing us to share ray information even among complex receivers. We also present a new rotationally-invariant filter that easily handles samples spread over a large angular domain. Our method can deliver 4x speed up for scenes that are computationally bound by ray tracing costs.", "title": "" }, { "docid": "75a53d2e1f13de6241742b71cf5fdbc4", "text": "Encoders were video recorded giving either truthful or deceptive descriptions of video footage designed to generate either emotional or unemotional responses. Decoders were asked to indicate the truthfulness of each item, what cues they used in making their judgements, and then to complete both the Micro Expression Training Tool (METT) and Subtle Expression Training Tool (SETT). Although overall performance on the deception detection task was no better than chance, performance for emotional lie detection was significantly above chance, while that for unemotional lie detection was significantly below chance. Emotional lie detection accuracy was also significantly positively correlated with reported use of facial expressions and with performance on the SETT, but not on the METT. The study highlights the importance of taking the type of lie into account when assessing skill in deception detection.", "title": "" }, { "docid": "ba5f6d151fea9e8715991ac37448c43e", "text": "In this paper we present an analysis of the effect of large scale video data augmentation for semantic segmentation in driving scenarios. Our work is motivated by a strong correlation between the high performance of most recent deep learning based methods and the availability of large volumes of ground truth labels. To generate additional labelled data, we make use of an occlusion-aware and uncertainty-enabled label propagation algorithm [8]. As a result we increase the availability of high-resolution labelled frames by a factor of 20, yielding in a 6.8% to 10.8% rise in average classification accuracy and/or IoU scores for several semantic segmentation networks. 
Our key contributions include: (a) augmented CityScapes and CamVid datasets providing 56.2K and 6.5K additional labelled frames of object classes respectively, (b) detailed empirical analysis of the effect of the use of augmented data as well as (c) extension of proposed framework to instance segmentation.", "title": "" }, { "docid": "3e4d937d38a61a94bb8647d3f7b02802", "text": "Most classification algorithms deal with datasets which have a set of input features, the variables to be used as predictors, and only one output class, the variable to be predicted. However, in late years many scenarios in which the classifier has to work with several outputs have come to life. Automatic labeling of text documents, image annotation or protein classification are among them. Multilabel datasets are the product of these new needs, and they have many specific traits. The mldr package allows the user to load datasets of this kind, obtain their characteristics, produce specialized plots, and manipulate them. The goal is to provide the exploratory tools needed to analyze multilabel datasets, as well as the transformation and manipulation functions that will make possible to apply binary and multiclass classification models to this data or the development of new multilabel classifiers. Thanks to its integrated user interface, the exploratory functions will be available even to non-specialized R users.", "title": "" }, { "docid": "3800853b95bad046a25f76ede85ba51c", "text": "Tendon driven mechanisms have been considered in robotic design for several decades. They provide lightweight end effectors with high dynamics. Using remote actuators it is possible to free more space for mechanics or electronics. Nevertheless, lightweight mechanism are fragile and unfortunately their control software can not protect them during the very first instant of an impact. Compliant mechanisms address this issue, providing a mechanical low pass filter, increasing the time available before the controller reacts. Using adjustable stiffness elements and an antagonistic architecture, the joint stiffness can be adjusted by variation of the tendon pre-tension. In this paper, the fundamental equations of m antagonistic tendon driven mechanisms are reviewed. Due to limited tendon forces the maximum torque and the maximum acheivable stiffness are dependent. This implies, that not only the torque workspace, or the stiffness workspace must be considered but also their interactions. Since the results are of high dimensionality, quality measures are necessary to provide a synthetic view. Two quality measures, similar to those used in grasp planning, are presented. They both provide the designer with a more precise insight into the mechanism.", "title": "" } ]
scidocsrr
3bd867174d73b8ae0d7387a7fcd29149
WTF: the who to follow service at Twitter
[ { "docid": "576aa36956f37b491382b0bdd91f4bea", "text": "The generation of RDF data has accelerated to the point where many data sets need to be partitioned across multiple machines in order to achieve reasonable performance when querying the data. Although tremendous progress has been made in the Semantic Web community for achieving high performance data management on a single node, current solutions that allow the data to be partitioned across multiple machines are highly inefficient. In this paper, we introduce a scalable RDF data management system that is up to three orders of magnitude more efficient than popular multi-node RDF data management systems. In so doing, we introduce techniques for (1) leveraging state-of-the-art single node RDF-store technology (2) partitioning the data across nodes in a manner that helps accelerate query processing through locality optimizations and (3) decomposing SPARQL queries into high performance fragments that take advantage of how data is partitioned in a cluster.", "title": "" } ]
[ { "docid": "4c410bb0390cc4611da4df489c89fca0", "text": "In this work, we propose a generalized product of experts (gPoE) framework for combining the predictions of multiple probabilistic models. We identify four desirable properties that are important for scalability, expressiveness and robustness, when learning and inferring with a combination of multiple models. Through analysis and experiments, we show that gPoE of Gaussian processes (GP) have these qualities, while no other existing combination schemes satisfy all of them at the same time. The resulting GP-gPoE is highly scalable as individual GP experts can be independently learned in parallel; very expressive as the way experts are combined depends on the input rather than fixed; the combined prediction is still a valid probabilistic model with natural interpretation; and finally robust to unreliable predictions from individual experts.", "title": "" }, { "docid": "2adf5e06cfc7e6d8cf580bdada485a23", "text": "This paper describes the comprehensive Terrorism Knowledge Base TM (TKB TM) which will ultimately contain all relevant knowledge about terrorist groups, their members, leaders, affiliations , etc., and full descriptions of specific terrorist events. Led by world-class experts in terrorism , knowledge enterers have, with simple tools, been building the TKB at the rate of up to 100 assertions per person-hour. The knowledge is stored in a manner suitable for computer understanding and reasoning. The TKB also utilizes its reasoning modules to integrate data and correlate observations, generate scenarios, answer questions and compose explanations.", "title": "" }, { "docid": "5a97d79641f7006d7b5d0decd3a7ad3e", "text": "We present a cognitive model of inducing verb selectional preferences from individual verb usages. The selectional preferences for each verb argument are represented as a probability distribution over the set of semantic properties that the argument can possess—asemantic profile . The semantic profiles yield verb-specific conceptualizations of the arguments associated with a syntactic position. The proposed model can learn appropriate verb profiles from a small set of noisy training data, and can use them in simulating human plausibility judgments and analyzing implicit object alternation.", "title": "" }, { "docid": "73877d224b5bbbde7ea8185284da3c2d", "text": "With the advancement of web technology and its growth, there is a huge volume of data present in the web for internet users and a lot of data is generated too. Internet has become a platform for online learning, exchanging ideas and sharing opinions. Social networking sites like Twitter, Facebook, Google+ are rapidly gaining popularity as they allow people to share and express their views about topics, have discussion with different communities, or post messages across the world. There has been lot of work in the field of sentiment analysis of twitter data. This survey focuses mainly on sentiment analysis of twitter data which is helpful to analyze the information in the tweets where opinions are highly unstructured, heterogeneous and are either positive or negative, or neutral in some cases. In this paper, we provide a survey and a comparative analyses of existing techniques for opinion mining like machine learning and lexicon-based approaches, together with evaluation metrics. 
Using various machine learning algorithms such as Naive Bayes, Maximum Entropy, and Support Vector Machines, we conduct research on Twitter data streams. We have also discussed general challenges and applications of sentiment analysis on Twitter.", "title": "" }, { "docid": "0e3ff65bc98bb82fbab44179eb7f2710", "text": "Internet technology is very pervasive today. The number of devices connected to the Internet, those with a digital identity, is increasing day by day. With these technological developments, the Internet of Things (IoT) has become an important part of human life. However, it is not yet well defined and secure. Various security issues are now considered a major problem for a full-fledged IoT environment. There exist many security challenges with the proposed architectures and the technologies which make up the backbone of the Internet of Things. Some efficient and promising security mechanisms have been developed to secure the IoT environment; however, a lot remains to be done. The challenges are ever increasing and the solutions have to be ever improving. Therefore, the aim of this paper is to discuss the history, background, and statistics of IoT, along with a security-based analysis of IoT architecture. In addition, we provide a taxonomy of security challenges in the IoT environment and a taxonomy of various defense mechanisms. We conclude our paper by discussing various research challenges that still exist in the literature, which provides a better understanding of the problem, the current solution space, and future research directions to defend IoT against different attacks.", "title": "" }, { "docid": "333800eb8bb529aa724dd43abffd88d8", "text": "The efficiency of Boolean function manipulation depends on the form of representation of Boolean functions. Binary Decision Diagrams (BDD's) are graph representations proposed by Akers and Bryant. BDD's have some properties which can be used to enable efficient Boolean function manipulation.\nIn this paper, we describe a technique of more efficient Boolean function manipulation that uses Shared Binary Decision Diagrams (SBDD's) with attributed edges. Our implementation includes an ordering algorithm for input variables and a method of handling don't-cares. We show experimental results produced by the implementation of the Boolean function manipulator.", "title": "" }, { "docid": "05e4168615c39071bb9640bd5aa6f3d9", "text": "The intestinal microbiome plays an important role in the metabolism of chemical compounds found within food. Bacterial metabolites are different from those that can be generated by human enzymes because bacterial processes occur under anaerobic conditions and are based mainly on reactions of reduction and/or hydrolysis. In most cases, bacterial metabolism reduces the activity of dietary compounds; however, sometimes a specific product of bacterial transformation exhibits enhanced properties. Studies on the metabolism of polyphenols by the intestinal microbiota are crucial for understanding the role of these compounds and their impact on our health. This review article presents possible pathways of polyphenol metabolism by intestinal bacteria and describes the diet-derived bioactive metabolites produced by gut microbiota, with a particular emphasis on polyphenols and their potential impact on human health. Because the etiology of many diseases is largely correlated with the intestinal microbiome, a balance between the host immune system and the commensal gut microbiota is crucial for maintaining health.
Diet-related and age-related changes in the human intestinal microbiome and their consequences are summarized in the paper.", "title": "" }, { "docid": "c3b4bcf57473321dc401ac583438b3a3", "text": "Face recognition from RGB-D images utilizes 2 complementary types of image data, i.e. colour and depth images, to achieve more accurate recognition. In this paper, we propose a face recognition system based on deep learning, which can be used to verify and identify a subject from the colour and depth face images captured with a consumer-level RGB-D camera. To recognize faces with colour and depth information, our system contains 3 parts: depth image recovery, deep learning for feature extraction, and joint classification. To alleviate the problem of the limited size of available RGB-D data for deep learning, our deep network is firstly trained with colour face dataset, and later fine-tuned on depth face images for transfer learning. Our experiments on some public and our own RGB-D face datasets show that the proposed face recognition system provides very accurate face recognition results and it is robust against variations in head rotation and environmental illumination.", "title": "" }, { "docid": "12e9e147fdcd51e8129ee2a9c80ea9ce", "text": "Interactive image segmentation is an important problem in computer vision with many applications including image editing, object recognition and image retrieval. Most existing interactive segmentation methods only operate on color images. Until recently, very few works have been proposed to leverage depth information from low-cost sensors to improve interactive segmentation. While these methods achieve better results than color-based methods, they are still limited in either using depth as an additional color channel or simply combining depth with color in a linear way. We propose a novel interactive segmentation algorithm which can incorporate multiple feature cues like color, depth, and normals in an unified graph cut framework to leverage these cues more effectively. A key contribution of our method is that it automatically selects a single cue to be used at each pixel, based on the intuition that only one cue is necessary to determine the segmentation label locally. This is achieved by optimizing over both segmentation labels and cue labels, using terms designed to decide where both the segmentation and label cues should change. Our algorithm thus produces not only the segmentation mask but also a cue label map that indicates where each cue contributes to the final result. Extensive experiments on five large scale RGBD datasets show that our proposed algorithm performs significantly better than both other color-based and RGBD based algorithms in reducing the amount of user inputs as well as increasing segmentation accuracy.", "title": "" }, { "docid": "084d376f9aa8d56d6dfec7d78e2c807f", "text": "A comprehensive model is presented which enables the effects of ionizing radiation on bulk CMOS devices and integrated circuits to be simulated with closed form functions. The model adapts general equations for defect formation in uniform SiO2 films to facilitate analytical calculations of trapped charge and interface trap buildup in structurally irregular and radiation sensitive shallow trench isolation (STI) oxides. 
A new approach whereby non-uniform defect distributions along the STI sidewall are calculated, integrated into implicit surface potential equations, and ultimately used to model radiation-induced “edge” leakage currents in n-channel MOSFETs is described. The results of the modeling approach are compared to experimental data obtained on 130 nm and 90 nm devices. The features having the greatest impact on the increased radiation tolerance of advanced deep-submicron bulk CMOS technologies are also discussed. These features include increased doping levels along the STI sidewall.", "title": "" }, { "docid": "1f56f045a9b262ce5cd6566d47c058bb", "text": "The growing popularity and development of data mining technologies bring serious threat to the security of individuals' sensitive information. An emerging research topic in data mining, known as privacy-preserving data mining (PPDM), has been extensively studied in recent years. The basic idea of PPDM is to modify the data in such a way so as to perform data mining algorithms effectively without compromising the security of sensitive information contained in the data. Current studies of PPDM mainly focus on how to reduce the privacy risk brought by data mining operations, while in fact, unwanted disclosure of sensitive information may also happen in the process of data collecting, data publishing, and information (i.e., the data mining results) delivering. In this paper, we view the privacy issues related to data mining from a wider perspective and investigate various approaches that can help to protect sensitive information. In particular, we identify four different types of users involved in data mining applications, namely, data provider, data collector, data miner, and decision maker. For each type of user, we discuss his privacy concerns and the methods that can be adopted to protect sensitive information. We briefly introduce the basics of related research topics, review state-of-the-art approaches, and present some preliminary thoughts on future research directions. Besides exploring the privacy-preserving approaches for each type of user, we also review the game theoretical approaches, which are proposed for analyzing the interactions among different users in a data mining scenario, each of whom has his own valuation on the sensitive information. By differentiating the responsibilities of different users with respect to security of sensitive information, we would like to provide some useful insights into the study of PPDM.", "title": "" }, { "docid": "9ffbb99b5cf833b89d46017a77ac00ab", "text": "The effectiveness of many dynamic program analysis techniques depends heavily on the completeness of the test suite applied during the analysis process. Test suites are often composed by developers and aim at testing all of the functionality of a software system. However, test suites may not be complete, if they exist at all. To date, only two methods exist for automatically generating test input for closed binaries: fuzzing and symbolic execution. Despite previous successes of these methods in identifying bugs, both techniques have limitations. In this paper, we propose a new method for autonomously generating valid input and identifying protocols for closed x86 binaries. The method presented can be used as a standalone tool or can be combined with other techniques for improved results. To assess its effectiveness, we test InputFinder, the implementation of our method, against binaries from the DARPA Cyber Grand Challenge example set.
Our evaluations show that our method is not only effective in finding input and determining whether a protocol is expected but can also find unexpected control flow paths.", "title": "" }, { "docid": "fb38bdc5772975f9705b2ca90f819b25", "text": "We propose a general approach to the gaze redirection problem in images that utilizes machine learning. The idea is to learn to re-synthesize images by training on pairs of images with known disparities between gaze directions. We show that such learning-based re-synthesis can achieve convincing gaze redirection based on monocular input, and that the learned systems generalize well to people and imaging conditions unseen during training. We describe and compare three instantiations of our idea. The first system is based on efficient decision forest predictors and redirects the gaze by a fixed angle in real-time (on a single CPU), being particularly suitable for the videoconferencing gaze correction. The second system is based on a deep architecture and allows gaze redirection by a range of angles. The second system achieves higher photorealism, while being several times slower. The third system is based on real-time decision forests at test time, while using the supervision from a “teacher” deep network during training. The third system approaches the quality of a teacher network in our experiments, and thus provides a highly realistic real-time monocular solution to the gaze correction problem. We present in-depth assessment and comparisons of the proposed systems based on quantitative measurements and a user study.", "title": "" }, { "docid": "0bc53a10750de315d5a37275dd7ae4a7", "text": "The term stigma refers to problems of knowledge (ignorance), attitudes (prejudice) and behaviour (discrimination). Most research in this area has been based on attitude surveys, media representations of mental illness and violence, has only focused upon schizophrenia, has excluded direct participation by service users, and has included few intervention studies. However, there is evidence that interventions to improve public knowledge about mental illness can be effective. The main challenge in future is to identify which interventions will produce behaviour change to reduce discrimination against people with mental illness.", "title": "" }, { "docid": "85d4ac147a4517092b9f81f89af8b875", "text": "This article is an update of an article five of us published in 1992. The areas of Multiple Criteria Decision Making (MCDM) and Multiattribute Utility Theory (MAUT) continue to be active areas of management science research and application. This paper extends the history of these areas and discusses topics we believe to be important for the future of these fields. as well as two anonymous reviewers for valuable comments.", "title": "" }, { "docid": "e56fb0a5466a2a6067db9016fc1f7f1c", "text": "©Rabobank,2012 iv ManagementSummary", "title": "" }, { "docid": "5bece01bed7c5a9a2433d95379882a37", "text": "n The polarization of electromagnetic signals is an important feature in the design of modern radar and telecommunications. Standard electromagnetic theory readily shows that a linearly polarized plane wave propagating in free space consists of two equal but counter-rotating components of circular polarization. In magnetized media, these circular modes can be arranged to produce the nonreciprocal propagation effects that are the basic properties of isolator and circulator devices. 
Independent phase control of right-hand (+) and left-hand (–) circular waves is accomplished by splitting their propagation velocities through differences in the ε±μ± parameter. A phenomenological analysis of the permeability μ and permittivity ε in dispersive media serves to introduce the corresponding magnetic- and electric-dipole mechanisms of interaction length with the propagating signal. As an example of permeability dispersion, a Lincoln Laboratory quasi-optical Faraday-rotation isolator circulator at 35 GHz (λ ~ 1 cm) with a garnet-ferrite rotator element is described. At infrared wavelengths (λ = 1.55 μm), where fiber-optic laser sources also require protection by passive isolation of the Faraday-rotation principle, ε rather than μ provides the dispersion, and the frequency is limited to the quantum energies of the electric-dipole atomic transitions peculiar to the molecular structure of the magnetic garnet. For optimum performance, bismuth additions to the garnet chemical formula are usually necessary. Spectroscopic and molecular theory models developed at Lincoln Laboratory to explain the bismuth effects are reviewed. In a concluding section, proposed advances in present technology are discussed in the context of future radar and telecommunications challenges.", "title": "" }, { "docid": "1f4ff9d732b3512ee9b105f084edd3d2", "text": "Today, as network environments become more complex and cyber and network threats increase, organizations use a wide variety of security solutions against today's threats. For proper and centralized control and management, a range of security features needs to be integrated into a unified security package. Unified threat management (UTM), as a comprehensive network security solution, integrates security services such as firewall, URL filtering, virtual private networking, etc. in a single appliance. PfSense is a variant of UTM, based on a customized FreeBSD (a Unix-like operating system), that is used especially as a router and stateful firewall. It has many packages that extend its capabilities, such as the Squid3 package, a proxy server that caches data, and SquidGuard, a redirector and access-controller plugin for the Squid3 proxy server. In this paper, by implementing a UTM based on the PfSense platform, we use the Squid3 proxy server and SquidGuard proxy filter to avoid the extreme amount of unwanted uploading/downloading over the Internet by users in order to optimize our organization's bandwidth consumption. We begin by defining UTM and its types and the PfSense platform with its key services, and introduce a simple and operational solution for security stability and cost reduction. Finally, results and statistics derived from this approach are compared with the prior condition without the PfSense platform.", "title": "" }, { "docid": "63488071159ec9fdab9e0ce0ca96050a", "text": "This paper analyzes the performance of different classification methods for online activity recognition on smart phones using the built-in accelerometers. First, we evaluate the performance of activity recognition using the Naïve Bayes classifier and next we utilize an improvement of Minimum Distance and K-Nearest Neighbor (KNN) classification algorithms, called Clustered KNN. For the purpose of online recognition, clustered KNN eliminates the computational complexity of KNN by creating clusters, i.e., smaller training sets for each activity, and classification is performed based on these compact, reduced sets.
We evaluate the performance of these classifiers on five test subjects for activities of walking, running, sitting and standing, and find that Naïve Bayes does not provide satisfactory results, whereas Clustered KNN gives promising results compared to previous studies, even those which consider offline classification.", "title": "" }, { "docid": "fb89fd2d9bf526b8bc7f1433274859a6", "text": "In multidimensional image analysis, there are, and will continue to be, situations wherein automatic image segmentation methods fail, calling for considerable user assistance in the process. The main goals of segmentation research for such situations ought to be (i) to provide effective control to the user on the segmentation process while it is being executed, and (ii) to minimize the total user’s time required in the process. With these goals in mind, we present in this paper two paradigms, referred to as live wire and live lane, for practical image segmentation in large applications. For both approaches, we think of the pixel vertices and oriented edges as forming a graph, assign a set of features to each oriented edge to characterize its “boundariness,” and transform feature values to costs. We provide training facilities and automatic optimal feature and transform selection methods so that these assignments can be made with consistent effectiveness in any application. In live wire, the user first selects an initial point on the boundary. For any subsequent point indicated by the cursor, an optimal path from the initial point to the current point is found and displayed in real time. The user thus has a live wire on hand which is moved by moving the cursor. If the cursor goes close to the boundary, the live wire snaps onto the boundary. At this point, if the live wire describes the boundary appropriately, the user deposits the cursor which now becomes the new starting point and the process continues. A few points (live-wire segments) are usually adequate to segment the whole 2D boundary. In live lane, the user selects only the initial point. Subsequent points are selected automatically as the cursor is moved within a lane surrounding the boundary whose width changes", "title": "" } ]
scidocsrr
e8e2c70200ca32ef37ef9e45a8641872
A Survey of Traffic Issues in Machine-to-Machine Communications Over LTE
[ { "docid": "e55166985c781a0d8c503561211703c4", "text": "It is one of the key issues in the 4-th generation (4G) cellular networks how to efficiently handle the heavy random access (RA) load caused by newly accommodating the huge population of Machine-to-Machine or Machine-Type Communication (M2M or MTC) customers/devices. We consider two major candidate methods for RA preamble allocation and management, which are under consideration for possible adoption in Long Term Evolution (LTE)-Advanced. One method, Method 1, is to completely split the set of available RA preambles into two disjoint subsets: one is for human-to-human (H2H) customers and the other for M2M customers/devices. The other method, Method 2, is also to split the set into two subsets: one is for H2H customers only whereas the other is for both H2H and M2M customers. We model and analyze the throughput performance of two methods. Our results demonstrate that there is a boundary of RA load below which Method 2 performs slightly better than Method 1 but above which Method 2 degrades throughput to a large extent. Our modeling and analysis can be utilized as a guideline to design the RA preamble resource management method.", "title": "" } ]
[ { "docid": "062f58c5edcebee25ba4e389944dba93", "text": "To increase the probability of destroying a maneuvering target (e.g. ballistic missile), a framework of multi-missiles interception is presented in this paper. Each intercepting missile is equipped with an IR Image seeker which can provide excellent stealth ability during its course of tracking the ballistic missile. Such intelligent ranging system integrates the Interacting Multiple Model (IMM) technique and the concept of reachable set to find the optimal interception results by minimizing the energy of pursuing the maneuvering target. The proposed guidance law of every missile interceptor is designed based on pursuit and evasion game theory while considering the motion of the target in 3-D space such that the distance between the missiles and the target is minimized. Finally, extensive computer simulations have been conducted to validate the performance of the proposed system.", "title": "" }, { "docid": "f670b91f8874c2c2db442bc869889dbd", "text": "This paper summarizes lessons learned from the first Amazon Picking Challenge in which 26 international teams designed robotic systems that competed to retrieve items from warehouse shelves. This task is currently performed by human workers, and there is hope that robots can someday help increase efficiency and throughput while lowering cost. We report on a 28-question survey posed to the teams to learn about each team’s background, mechanism design, perception apparatus, planning and control approach. We identify trends in this data, correlate it with each team’s success in the competition, and discuss observations and lessons learned. Note to Practitioners: Abstract—Perception, motion planning, grasping, and robotic system engineering has reached a level of maturity that makes it possible to explore automating simple warehouse tasks in semi-structured environments that involve high-mix, low-volume picking applications. This survey summarizes lessons learned from the first Amazon Picking Challenge, highlighting mechanism design, perception, and motion planning algorithms, as well as software engineering practices that were most successful in solving a simplified order fulfillment task. While the choice of mechanism mostly affects execution speed, the competition demonstrated the systems challenges of robotics and illustrated the importance of combining reactive control with deliberative planning.", "title": "" }, { "docid": "2013fc509f8f6d3fa2966d7d76169f43", "text": "Graphene, whose discovery won the 2010 Nobel Prize in physics, has been a shining star in the material science in the past few years. Owing to its interesting electrical, optical, mechanical and chemical properties, graphene has found potential applications in a wide range of areas, including biomedicine. In this article, we will summarize the latest progress of using graphene for various biomedical applications, including drug delivery, cancer therapies and biosensing, and discuss the opportunities and challenges in this emerging field.", "title": "" }, { "docid": "6f7c81d869b4389d5b84e80b4c306381", "text": "Environmental, genetic, and immune factors are at play in the development of the variable clinical manifestations of Graves' ophthalmopathy (GO). Among the environmental contributions, smoking is the risk factor most consistently linked to the development or worsening of the disease. 
The close temporal relationship between the diagnoses of Graves' hyperthyroidism and GO has long suggested that these 2 autoimmune conditions may share pathophysiologic features. The finding that the thyrotropin receptor (TSHR) is expressed in orbital fibroblasts, the target cells in GO, supported the notion of a common autoantigen. Both cellular and humoral immunity directed against TSHR expressed on orbital fibroblasts likely initiate the disease process. Activation of helper T cells recognizing TSHR peptides and ligation of TSHR by TRAb lead to the secretion of inflammatory cytokines and chemokines, and enhanced hyaluronic acid (HA) production and adipogenesis. The resulting connective tissue remodeling results in varying degrees of extraocular muscle enlargement and orbital fat expansion. A subset of orbital fibroblasts express CD34, are bone-marrow derived, and circulate as fibrocytes that infiltrate connective tissues at sites of injury or inflammation. As these express high levels of TSHR and are capable of producing copious cytokines and chemokines, they may represent an orbital fibroblast population that plays a central role in GO development. In addition to TSHR, orbital fibroblasts from patients with GO express high levels of IGF-1R. Recent studies suggest that these receptors engage in cross-talk induced by TSHR ligation to synergistically enhance TSHR signaling, HA production, and the secretion of inflammatory mediators.", "title": "" }, { "docid": "3f0b6a3238cf60d7e5d23363b2affe95", "text": "This paper presents a new strategy to control the generated power that comes from the energy sources existing in autonomous and isolated Microgrids. In this particular study, the power system consists of a power electronic converter supplied by a battery bank, which is used to form the AC grid (grid former converter), an energy source based on a wind turbine with its respective power electronic converter (grid supplier converter), and the power consumers (loads). The main objective of this proposed strategy is to control the state of charge of the battery bank limiting the voltage on its terminals by controlling the power generated by the energy sources. This is done without using dump loads or any physical communication among the power electronic converters or the individual energy source controllers. The electrical frequency of the microgrid is used to inform the power sources and their respective converters of the amount of power they need to generate in order to maintain the battery-bank state of charge below or equal to its maximum allowable limit. A modified droop control is proposed to implement this task.", "title": "" }, { "docid": "299ad92581c5018f900962da9275bc83", "text": "The invention of the Bitcoin protocol has opened the door to new forms of financial interaction. One such form may be to adapt Bitcoin technology for use as a community currency. A community currency is a form of money issued by a non-government entity to serve the economic or social interests of a group of people, often in a small geographic area. We propose a model of a community cryptocurrency that includes a community fund from which community members may take out loans if the community votes to approve them. We consider possible vulnerabilities and mitigations to issues that would affect this community fund, including issues of identity, voting protocols and funds management.
We conclude that these vulnerabilities are, in most cases, amenable to technological mitigations that must be adaptable to both community values and changing conditions, emphasizing the need for careful currency design.", "title": "" }, { "docid": "2907d1078ce8eaf8b01817cea3b9264c", "text": "Having a reliable understanding about the behaviours, problems, and performance of existing processes is important in enabling a targeted process improvement initiative. Recently, there has been an increase in the application of innovative process mining techniques to facilitate evidence-based understanding about organizations’ business processes. Nevertheless, the application of these techniques in the domain of finance in Australia is, at best, scarce. This paper details a 6-month case study on the application of process mining in one of the largest insurance companies in Australia. In particular, the challenges encountered, the lessons learned, and the results obtained from this case study are detailed. Through this case study, we not only validated existing ‘lessons learned’ from other similar case studies, but also added new insights that can be beneficial to other practitioners in applying process mining in their respective fields.", "title": "" }, { "docid": "341e3832bf751688a9deabdfb5687f69", "text": "The NINCDS-ADRDA and the DSM-IV-TR criteria for Alzheimer's disease (AD) are the prevailing diagnostic standards in research; however, they have now fallen behind the unprecedented growth of scientific knowledge. Distinctive and reliable biomarkers of AD are now available through structural MRI, molecular neuroimaging with PET, and cerebrospinal fluid analyses. This progress provides the impetus for our proposal of revised diagnostic criteria for AD. Our framework was developed to capture both the earliest stages, before full-blown dementia, as well as the full spectrum of the illness. These new criteria are centred on a clinical core of early and significant episodic memory impairment. They stipulate that there must also be at least one or more abnormal biomarkers among structural neuroimaging with MRI, molecular neuroimaging with PET, and cerebrospinal fluid analysis of amyloid beta or tau proteins. The timeliness of these criteria is highlighted by the many drugs in development that are directed at changing pathogenesis, particularly at the production and clearance of amyloid beta as well as at the hyperphosphorylation state of tau. Validation studies in existing and prospective cohorts are needed to advance these criteria and optimise their sensitivity, specificity, and accuracy.", "title": "" }, { "docid": "612416cb82559f94d8d4b888bad17ba1", "text": "Future plastic materials will be very different from those that are used today. The increasing importance of sustainability promotes the development of bio-based and biodegradable polymers, sometimes misleadingly referred to as 'bioplastics'. Because both terms imply \"green\" sources and \"clean\" removal, this paper aims at critically discussing the sometimes-conflicting terminology as well as renewable sources with a special focus on the degradation of these polymers in natural environments. With regard to the former we review innovations in feedstock development (e.g. microalgae and food wastes). In terms of the latter, we highlight the effects that polymer structure, additives, and environmental variables have on plastic biodegradability. 
We argue that the 'biodegradable' end-product does not necessarily degrade once emitted to the environment because chemical additives used to make them fit for purpose will increase the longevity. In the future, this trend may continue as the plastics industry also is expected to be a major user of nanocomposites. Overall, there is a need to assess the performance of polymer innovations in terms of their biodegradability especially under realistic waste management and environmental conditions, to avoid the unwanted release of plastic degradation products in receiving environments.", "title": "" }, { "docid": "89865dbb80fcb2d9c5d4d4fe4fe10b83", "text": "Elaborate efforts have been made to eliminate fake markings and refine <inline-formula> <tex-math notation=\"LaTeX\">${\\omega }$ </tex-math></inline-formula>-markings in the existing modified or improved Karp–Miller trees for various classes of unbounded Petri nets since the late 1980s. The main issues fundamentally are incurred due to the generation manners of the trees that prematurely introduce some potentially unbounded markings with <inline-formula> <tex-math notation=\"LaTeX\">${\\omega }$ </tex-math></inline-formula> symbols and keep their growth into new ones. Aiming at addressing them, this work presents a non-Karp–Miller tree called a lean reachability tree (LRT). First, a sufficient and necessary condition of the unbounded places and some reachability properties are established to reveal the features of unbounded nets. Then, we present an LRT generation algorithm with a sufficiently enabling condition (SEC). When generating a tree, SEC requires that the components of a covering node are not replaced by <inline-formula> <tex-math notation=\"LaTeX\">${\\omega }$ </tex-math></inline-formula> symbols, but continue to grow until any transition on an output path of an unbounded place has been branch-enabled at least once. In return, no fake marking is produced and no legal marking is lost during the tree generation. We prove that LRT can faithfully express by folding, instead of equivalently representing, the reachability set of an unbounded net. Also, some properties of LRT are examined and a sufficient condition of deadlock existence based on it is given. The case studies show that LRT outperforms the latest modified Karp–Miller trees in terms of size, expressiveness, and applicability. It can be applied to the analysis of the emerging discrete event systems with infinite states.", "title": "" }, { "docid": "45840f792b397da02fadc644d35faaf7", "text": "Do there exist general principles, which any system must obey in order to achieve advanced general intelligence using feasible computational resources? Here we propose one candidate: “cognitive synergy,” a principle which suggests that general intelligences must contain different knowledge creation mechanisms corresponding to different sorts of memory (declarative, procedural, sensory/episodic, attentional, intentional); and that these different mechanisms must be interconnected in such a way as to aid each other in overcoming memory-type-specific combinatorial explosions.", "title": "" }, { "docid": "4a9474c0813646708400fc02c344a976", "text": "Over the years, the Web has shrunk the world, allowing individuals to share viewpoints with many more people than they are able to in real life. At the same time, however, it has also enabled anti-social and toxic behavior to occur at an unprecedented scale. 
Video sharing platforms like YouTube receive uploads from millions of users, covering a wide variety of topics and allowing others to comment and interact in response. Unfortunately, these communities are periodically plagued with aggression and hate attacks. In particular, recent work has showed how these attacks often take place as a result of “raids,” i.e., organized efforts coordinated by ad-hoc mobs from third-party communities. Despite the increasing relevance of this phenomenon, online services often lack effective countermeasures to mitigate it. Unlike well-studied problems like spam and phishing, coordinated aggressive behavior both targets and is perpetrated by humans, making defense mechanisms that look for automated activity unsuitable. Therefore, the de-facto solution is to reactively rely on user reports and human reviews. In this paper, we propose an automated solution to identify videos that are likely to be targeted by coordinated harassers. First, we characterize and model YouTube videos along several axes (metadata, audio transcripts, thumbnails) based on a ground truth dataset of raid victims. Then, we use an ensemble of classifiers to determine the likelihood that a video will be raided with high accuracy (AUC up to 94%). Overall, our work paves the way for providing video platforms like YouTube with proactive systems to detect and mitigate coordinated hate attacks.", "title": "" }, { "docid": "8ab5ae25073b869ea28fc25df3cfdf5f", "text": "We present the TurkuNLP entry to the BioNLP Shared Task 2016 Bacteria Biotopes event extraction (BB3-event) subtask. We propose a deep learningbased approach to event extraction using a combination of several Long Short-Term Memory (LSTM) networks over syntactic dependency graphs. Features for the proposed neural network are generated based on the shortest path connecting the two candidate entities in the dependency graph. We further detail how this network can be efficiently trained to have good generalization performance even when only a very limited number of training examples are available and part-of-speech (POS) and dependency type feature representations must be learned from scratch. Our method ranked second among the entries to the shared task, achieving an F-score of 52.1% with 62.3% precision and 44.8% recall.", "title": "" }, { "docid": "e548d342a2578add2a8bb12c42f4e465", "text": "Industry-proven field-weakening solutions for nonsalient-pole permanent-magnet synchronous motor drives are presented in this paper. The core algorithm relies on direct symbolic equations. The equations take into account the stator resistance and reveal its effect on overall algorithm quality. They establish a foundation for an offline calculated lookup table which secures effective d-axis current reference over entire field-weakening region. The table has been proven on its own and in combination with a PI compensator. Usage recommendations are given in this paper. Functionality of the proposed solutions has been investigated theoretically and in practice. The investigation has been carried out in the presence of motor magnetic saturation and parameter tolerance, taking into account the change of operating temperature. The results and analysis method are included in this paper.", "title": "" }, { "docid": "5e83743d4a3997afdbf6898e8b5d54b5", "text": "I would like to devote this review to my teachers and colleagues, Nadine Brisson and Gilbert Saint, who passed away too early. 
I am also grateful to a number of people who contributed directly and indirectly to this paper: Antonio Formaggio and Yosio Shimabokuro and their team from INPE (Sao Jose dos Campos) for shared drafting of a related research proposal for a Brazilian monitoring system; Felix Rembold (JRC Ispra) for discussing the original manuscript; Zoltan Balint, Peris Muchiri and SWALIM at FAO Somalia (Nairobi, Kenya) for contributions regarding the CDI drought index; and Anja Klisch, Francesco Vuolo and Matteo Mattiuzzi (BOKU Vienna) for providing inputs related to vegetation phenology and the MODIS processing chain.", "title": "" }, { "docid": "91dbb5df6bc5d3db43b51fc7a4c84468", "text": "An assortment of algorithms, termed three-dimensional (3D) scan-conversion algorithms, is presented. These algorithms scan-convert 3D geometric objects into their discrete voxel-map representation within a Cubic Frame Buffer (CFB). The geometric objects that are studied here include three-dimensional lines, polygons (optionally filled), polyhedra (optionally filled), cubic parametric curves, bicubic parametric surface patches, circles (optionally filled), and quadratic objects (optionally filled) like those used in constructive solid geometry: cylinders, cones, and spheres.\nAll algorithms presented here do scan-conversion with computational complexity which is linear in the number of voxels written to the CFB. All algorithms are incremental and use only additions, subtractions, tests and simpler operations inside the inner algorithm loops. Since the algorithms are basically sequential, the temporal complexity is also linear. However, the polyhedron-fill and sphere-fill algorithms have less than linear temporal complexity, as they use a mechanism for writing a voxel run into the CFB. The temporal complexity would then be linear with the number of pixels in the object's 2D projection. All algorithms have been implemented as part of the CUBE Architecture, which is a voxel-based system for 3D graphics. The CUBE architecture is also presented.", "title": "" }, { "docid": "c102e00d44d335b344b56423bd16e7c5", "text": "PURPOSE\nTo evaluate the association between social networking site (SNS) use and depression in older adolescents using an experience sample method (ESM) approach.\n\n\nMETHODS\nOlder adolescent university students completed an online survey containing the Patient Health Questionnaire-9 depression screen (PHQ) and a week-long ESM data collection period to assess SNS use.\n\n\nRESULTS\nParticipants (N = 190) included in the study were 58% female and 91% Caucasian. The mean age was 18.9 years (standard deviation = .8). Most used SNSs for either <30 minutes (n = 100, 53%) or between 30 minutes and 2 hours (n = 74, 39%); a minority of participants reported daily use of SNS >2 hours (n = 16, 8%). The mean PHQ score was 5.4 (standard deviation = 4.2). No associations were seen between SNS use and either any depression (p = .519) or moderate to severe depression (p = .470).\n\n\nCONCLUSIONS\nWe did not find evidence supporting a relationship between SNS use and clinical depression. Counseling patients or parents regarding the risk of \"Facebook Depression\" may be premature.", "title": "" }, { "docid": "3cb255c8799252093d04d9fa24c52296", "text": "Modern computing tasks such as real-time analytics require refresh of query results under high update rates. Incremental View Maintenance (IVM) approaches this problem by materializing results in order to avoid recomputation. 
IVM naturally induces a trade-off between the space needed to maintain the materialized results and the time used to process updates. In this paper, we show that the full materialization of results is a barrier for more general optimization strategies. In particular, we present a new approach for evaluating queries under updates. Instead of the materialization of results, we require a data structure that allows: (1) linear time maintenance under updates, (2) constant-delay enumeration of the output, (3) constant-time lookups in the output, while (4) using only linear space in the size of the database. We call such a structure a Dynamic Constant-delay Linear Representation (DCLR) for the query. We show that DYN, a dynamic version of the Yannakakis algorithm, yields DCLRs for the class of free-connex acyclic CQs. We show that this is optimal in the sense that no DCLR can exist for CQs that are not free-connex acyclic. Moreover, we identify a sub-class of queries for which DYN features constant-time update per tuple and show that this class is maximal. Finally, using the TPC-H and TPC-DS benchmarks, we experimentally compare DYN and a higher-order IVM (HIVM) engine. Our approach is not only more efficient in terms of memory consumption (as expected), but is also consistently faster in processing updates.", "title": "" }, { "docid": "68cd83d94d67c16b19668f1fba62a50e", "text": "This report presents the results of a friendly competition for formal verification of continuous and hybrid systems with linear continuous dynamics. The friendly competition took place as part of the workshop Applied Verification for Continuous and Hybrid Systems (ARCH) in 2018. In its second edition, 9 tools have been applied to solve six different benchmark problems in the category for linear continuous dynamics (in alphabetical order): CORA, CORA/SX, C2E2, Flow*, HyDRA, Hylaa, Hylaa-Continuous, JuliaReach, SpaceEx, and XSpeed. This report is a snapshot of the current landscape of tools and the types of benchmarks they are particularly suited for. Due to the diversity of problems, we are not ranking tools, yet the presented results probably provide the most complete assessment of tools for the safety verification of continuous and hybrid systems with linear continuous dynamics up to this date. G. Frehse (ed.), ARCH18 (EPiC Series in Computing, vol. 54), pp. 23–52 ARCH-COMP18 Linear Dynamics Althoff et al.", "title": "" } ]
scidocsrr
e741c777aa9670e2b602fe05d26ecb67
Credit Card Fraud Detection: A Realistic Modeling and a Novel Learning Strategy
[ { "docid": "5e95aaa54f8acf073ccc11c08c148fe0", "text": "Billions of dollars of loss are caused every year due to fraudulent credit card transactions. The design of efficient fraud detection algorithms is key for reducing these losses, and more and more algorithms rely on advanced machine learning techniques to assist fraud investigators. The design of fraud detection algorithms is however particularly challenging due to non stationary distribution of the data, highly imbalanced classes distributions and continuous streams of transactions. At the same time public data are scarcely available for confidentiality issues, leaving unanswered many questions about which is the best strategy to deal with them. In this paper we provide some answers from the practitioner’s perspective by focusing on three crucial issues: unbalancedness, non-stationarity and assessment. The analysis is made possible by a real credit card dataset provided by our industrial partner.", "title": "" }, { "docid": "0a635352cb1b97f2e1b07954b5967808", "text": "Nowadays, credit card fraud detection is of great importance to financial institutions. This article presents an automated credit card fraud detection system based on the neural network technology. The authors apply the Self-Organizing Map algorithm to create a model of typical cardholder’s behavior and to analyze the deviation of transactions, thus finding suspicious transactions.", "title": "" }, { "docid": "36142a4c0639662fe52dcc3fdf7b1ca4", "text": "We present hierarchical change-detection tests (HCDTs), as effective online algorithms for detecting changes in datastreams. HCDTs are characterized by a hierarchical architecture composed of a detection layer and a validation layer. The detection layer steadily analyzes the input datastream by means of an online, sequential CDT, which operates as a low-complexity trigger that promptly detects possible changes in the process generating the data. The validation layer is activated when the detection one reveals a change, and performs an offline, more sophisticated analysis on recently acquired data to reduce false alarms. Our experiments show that, when the process generating the datastream is unknown, as it is mostly the case in the real world, HCDTs achieve a far more advantageous tradeoff between false-positive rate and detection delay than their single-layered, more traditional counterpart. Moreover, the successful interplay between the two layers permits HCDTs to automatically reconfigure after having detected and validated a change. Thus, HCDTs are able to reveal further departures from the postchange state of the data-generating process.", "title": "" } ]
[ { "docid": "f6722a91421e1efb8865a6504fcb1b95", "text": "This paper proposes a modified p-q theory based control of solar photovoltaic array integrated unified power quality conditioner (PV-UPQC-S). The fundamental frequency positive sequence (FFPS) voltages are extracted using generalized cascaded delay signal cancellation (GCDSC) which is used in p-q theory based control to generate reference grid currents for the shunt compensator. The solar PV (SPV) array integrated at the DC-bus of the UPQC, provides a part of active load power. The series VSC operates such that it shares a part of the reactive power of the load even under nominal grid conditions. The dynamic performance of proposed system is verified by simulating it in Matlab-Simulink using a combination of linear and non-linear load.", "title": "" }, { "docid": "064b68836d9e11db6d183f3a7621e082", "text": "Three-dimensional computerized characterization of biomedical solid textures is key to large-scale and high-throughput screening of imaging data. Such data increasingly become available in the clinical and research environments with an ever increasing spatial resolution. In this text we exhaustively analyze the state-of-the-art in 3-D biomedical texture analysis to identify the specific needs of the application domains and extract promising trends in image processing algorithms. The geometrical properties of biomedical textures are studied both in their natural space and on digitized lattices. It is found that most of the tissue types have strong multi-scale directional properties, that are well captured by imaging protocols with high resolutions and spherical spatial transfer functions. The information modeled by the various image processing techniques is analyzed and visualized by displaying their 3-D texture primitives. We demonstrate that non-convolutional approaches are expected to provide best results when the size of structures are inferior to five voxels. For larger structures, it is shown that only multi-scale directional convolutional approaches that are non-separable allow for an unbiased modeling of 3-D biomedical textures. With the increase of high-resolution isotropic imaging protocols in clinical routine and research, these models are expected to best leverage the wealth of 3-D biomedical texture analysis in the future. Future research directions and opportunities are proposed to efficiently model personalized image-based phenotypes of normal biomedical tissue and its alterations. The integration of the clinical and genomic context is expected to better explain the intra class variation of healthy biomedical textures. Using texture synthesis, this provides the exciting opportunity to simulate and visualize texture atlases of normal ageing process and disease progression for enhanced treatment planning and clinical care management.", "title": "" }, { "docid": "630e44732755c47fc70be111e40c7b67", "text": "An algebra for geometric reasoning is developed that is amenable to software implementation. The features of the algebra are chosen to support geometric programming of the variety found in computer graphics and computer aided geometric design applications. 
The implementation of the algebra in C++ is described, and several examples illustrating the use of this software are given.", "title": "" }, { "docid": "e04f3ce645037e2d8e0cd83d884686d2", "text": "Because of Twitter’s popularity and the viral nature of information dissemination on Twitter, predicting which Twitter topics will become popular in the near future becomes a task of considerable economic importance. Many Twitter topics are annotated by hashtags. In this article, we propose methods to predict the popularity of new hashtags on Twitter by formulating the problem as a classification task. We use five standard classification models (i.e., Naïve bayes, k-nearest neighbors, decision trees, support vector machines, and logistic regression) for prediction. The main challenge is the identification of effective features for describing new hashtags. We extract 7 content features from a hashtag string and the collection of tweets containing the hashtag and 11 contextual features from the social graph formed by users who have adopted the hashtag. We conducted experiments on a Twitter data set consisting of 31 million tweets from 2 million Singapore-based users. The experimental results show that the standard classifiers using the extracted features significantly outperform the baseline methods that do not use these features. Among the five classifiers, the logistic regression model performs the best in terms of the Micro-F1 measure. We also observe that contextual features are more effective than content features.", "title": "" }, { "docid": "98e069b5cfa44a3d412b16ceb809fa51", "text": "Mutations in EMBRYONIC FLOWER (EMF) genes EMF1 and EMF2 abolish rosette development, and the mutants produce either a much reduced inflorescence or a transformed flower. These mutant characteristics suggest a repressive effect of EMF activities on reproductive development. To investigate the role of EMF genes in regulating reproductive development, we studied the relationship between EMF genes and the genes regulating inflorescence and flower development. We found that APETALA1 and AGAMOUS promoters were activated in germinating emf seedlings, suggesting that these genes may normally be suppressed in wild-type seedlings in which EMF activities are high. The phenotype of double mutants combining emf1-2 and apetala1, apetala2, leafy1, apetala1 cauliflower, and terminal flower1 showed that emf1-2 is epistatic in all cases, suggesting that EMF genes act downstream from these genes in mediating the inflorescence-to-flower transition. Constitutive expression of LEAFY in weak emf1, but not emf2, mutants increased the severity of the emf phenotype, indicating an inhibition of EMF activity by LEAFY, as was deduced from double mutant analysis. These results suggest that a mechanism involving a reciprocal negative regulation between the EMF genes and the floral genes regulates Arabidopsis inflorescence development.", "title": "" }, { "docid": "1fcdfd02a6ecb12dec5799d6580c67d4", "text": "One of the major problems in developing countries is maintenance of roads. Well maintained roads contribute a major portion to the country's economy. Identification of pavement distress such as potholes and humps not only helps drivers to avoid accidents or vehicle damages, but also helps authorities to maintain roads. 
This paper discusses previous pothole detection methods that have been developed and proposes a cost-effective solution to identify the potholes and humps on roads and provide timely alerts to drivers to avoid accidents or vehicle damages. Ultrasonic sensors are used to identify the potholes and humps and also to measure their depth and height, respectively. The proposed system captures the geographical location coordinates of the potholes and humps using a global positioning system receiver. The sensed-data includes pothole depth, height of hump, and geographic location, which is stored in the database (cloud). This serves as a valuable source of information to the government authorities and vehicle drivers. An android application is used to alert drivers so that precautionary measures can be taken to evade accidents. Alerts are given in the form of a flash messages with an audio beep.", "title": "" }, { "docid": "7d855d8af156f414395479a193cb38ba", "text": "How to effectively learn temporal variation of target appearance, to exclude the interference of cluttered background, while maintaining real-time response, is an essential problem of visual object tracking. Recently, Siamese networks have shown great potentials of matching based trackers in achieving balanced accuracy and beyond realtime speed. However, they still have a big gap to classification & updating based trackers in tolerating the temporal changes of objects and imaging conditions. In this paper, we propose dynamic Siamese network, via a fast transformation learning model that enables effective online learning of target appearance variation and background suppression from previous frames. We then present elementwise multi-layer fusion to adaptively integrate the network outputs using multi-level deep features. Unlike state-of-theart trackers, our approach allows the usage of any feasible generally- or particularly-trained features, such as SiamFC and VGG. More importantly, the proposed dynamic Siamese network can be jointly trained as a whole directly on the labeled video sequences, thus can take full advantage of the rich spatial temporal information of moving objects. As a result, our approach achieves state-of-the-art performance on OTB-2013 and VOT-2015 benchmarks, while exhibits superiorly balanced accuracy and real-time response over state-of-the-art competitors.", "title": "" }, { "docid": "a25e2540e97918b954acbb6fdee57eb7", "text": "Tweet streams provide a variety of real-life and real-time information on social events that dynamically change over time. Although social event detection has been actively studied, how to efficiently monitor evolving events from continuous tweet streams remains open and challenging. One common approach for event detection from text streams is to use single-pass incremental clustering. However, this approach does not track the evolution of events, nor does it address the issue of efficient monitoring in the presence of a large number of events. In this paper, we capture the dynamics of events using four event operations (create, absorb, split, and merge), which can be effectively used to monitor evolving events. Moreover, we propose a novel event indexing structure, called Multi-layer Inverted List (MIL), to manage dynamic event databases for the acceleration of large-scale event search and update. We thoroughly study the problem of nearest neighbour search using MIL based on upper bound pruning, along with incremental index maintenance. 
Extensive experiments have been conducted on a large-scale real-life tweet dataset. The results demonstrate the promising performance of our event indexing and monitoring methods on both efficiency and effectiveness.", "title": "" }, { "docid": "f13715ad6bcf35826c38f0f58b6ede46", "text": "Recent development of hand-held plenoptic cameras has brought light field acquisition into many practical and low-cost imaging applications. We address a crucial challenge in light field data processing: dense depth estimation of 3D scenes captured by camera arrays or plenoptic cameras. We first propose a method for construction of light field scale-depth spaces, by convolving a given light field with a special kernel adapted to the light field structure. We detect local extrema in such scale-depth spaces, which indicate the regions of constant depth, and convert them to dense depth maps after solving occlusion conflicts in a consistent way across all views. Due to the multi-scale characterization of objects in proposed representations, our method provides depth estimates for both uniform and textured regions, where uniform regions with large spatial extent are captured at coarser scales and textured regions are found at finer scales. Experimental results on the HCI (Heidelberg Collaboratory for Image Processing) light field benchmark show that our method gives state of the art depth accuracy. We also show results on plenoptic images from the RAYTRIX camera and our plenoptic camera prototype.", "title": "" }, { "docid": "492c5a20c4ef5b7a3ea08083ecf66bce", "text": "We present the design for an absorbing metamaterial (MM) with near unity absorbance A(omega). Our structure consists of two MM resonators that couple separately to electric and magnetic fields so as to absorb all incident radiation within a single unit cell layer. We fabricate, characterize, and analyze a MM absorber with a slightly lower predicted A(omega) of 96%. Unlike conventional absorbers, our MM consists solely of metallic elements. The substrate can therefore be optimized for other parameters of interest. We experimentally demonstrate a peak A(omega) greater than 88% at 11.5 GHz.", "title": "" }, { "docid": "57e16fe9f238c79d1ffd746aa4b84cfc", "text": "We evaluate transfer representation-learning for anomaly detection using convolutional neural networks by: (i) transfer learning from pretrained networks, and (ii) transfer learning from an auxiliary task by defining sub-categories of the normal class. We empirically show that both approaches offer viable representations for the task of anomaly detection, without explicitly imposing a prior on the data.", "title": "" }, { "docid": "c72e0e79f83b59af58e5d8bc7d9244d5", "text": "A novel deep learning architecture (XmasNet) based on convolutional neural networks was developed for the classification of prostate cancer lesions, using the 3D multiparametric MRI data provided by the PROSTATEx challenge. End-to-end training was performed for XmasNet, with data augmentation done through 3D rotation and slicing, in order to incorporate the 3D information of the lesion. XmasNet outperformed traditional machine learning models based on engineered features, for both train and test data. For the test data, XmasNet outperformed 69 methods from 33 participating groups and achieved the second highest AUC (0.84) in the PROSTATEx challenge. 
This study shows the great potential of deep learning for cancer imaging.", "title": "" }, { "docid": "6fe71d8d45fa940f1a621bfb5b4e14cd", "text": "We present Attract-Repel, an algorithm for improving the semantic quality of word vectors by injecting constraints extracted from lexical resources. Attract-Repel facilitates the use of constraints from mono- and cross-lingual resources, yielding semantically specialized cross-lingual vector spaces. Our evaluation shows that the method can make use of existing cross-lingual lexicons to construct high-quality vector spaces for a plethora of different languages, facilitating semantic transfer from high- to lower-resource ones. The effectiveness of our approach is demonstrated with state-of-the-art results on semantic similarity datasets in six languages. We next show that Attract-Repel-specialized vectors boost performance in the downstream task of dialogue state tracking (DST) across multiple languages. Finally, we show that cross-lingual vector spaces produced by our algorithm facilitate the training of multilingual DST models, which brings further performance improvements.", "title": "" }, { "docid": "a9d22e2568bcae7a98af7811546c7853", "text": "This thesis addresses the challenges of building a software system for general-purpose runtime code manipulation. Modern applications, with dynamically-loaded modules and dynamicallygenerated code, are assembled at runtime. While it was once feasible at compile time to observe and manipulate every instruction — which is critical for program analysis, instrumentation, trace gathering, optimization, and similar tools — it can now only be done at runtime. Existing runtime tools are successful at inserting instrumentation calls, but no general framework has been developed for fine-grained and comprehensive code observation and modification without high overheads. This thesis demonstrates the feasibility of building such a system in software. We present DynamoRIO, a fully-implemented runtime code manipulation system that supports code transformations on any part of a program, while it executes. DynamoRIO uses code caching technology to provide efficient, transparent, and comprehensive manipulation of an unmodified application running on a stock operating system and commodity hardware. DynamoRIO executes large, complex, modern applications with dynamically-loaded, generated, or even modified code. Despite the formidable obstacles inherent in the IA-32 architecture, DynamoRIO provides these capabilities efficiently, with zero to thirty percent time and memory overhead on both Windows and Linux. DynamoRIO exports an interface for building custom runtime code manipulation tools of all types. It has been used by many researchers, with several hundred downloads of our public release, and is being commercialized in a product for protection against remote security exploits, one of numerous applications of runtime code manipulation. Thesis Supervisor: Saman Amarasinghe Title: Associate Professor of Electrical Engineering and Computer Science", "title": "" }, { "docid": "a0d4e1038ac7309260d984f4e39d5c91", "text": "Modeling plays a central role in design automation of embedded processors. It is necessary to develop a specification language that can model complex processors at a higher level of abstraction and enable automatic analysis and generation of efficient tools and prototypes. The language should be powerful enough to capture high-level description of the processor architectures. 
On the other hand, the language should be simple enough to allow correlation of the information between the specification and the architecture manual.", "title": "" }, { "docid": "00410fcb0faa85d5423ccf0a7cc2f727", "text": "Phishing is a form of identity theft that combines social engineering techniques and sophisticated attack vectors to harvest financial information from unsuspecting consumers. Often a phisher tries to lure her victim into clicking a URL pointing to a rogue page. In this paper, we focus on studying the structure of URLs employed in various phishing attacks. We find that it is often possible to tell whether or not a URL belongs to a phishing attack without requiring any knowledge of the corresponding page data. We describe several features that can be used to distinguish a phishing URL from a benign one. These features are used to model a logistic regression filter that is efficient and has a high accuracy. We use this filter to perform thorough measurements on several million URLs and quantify the prevalence of phishing on the Internet today.", "title": "" }, { "docid": "e85a019405a29e19670c99f9eabfff78", "text": "Online shopping, different from traditional shopping behavior, is characterized with uncertainty, anonymity, and lack of control and potential opportunism. Therefore, trust is an important factor to facilitate online transactions. The purpose of this study is to explore the role of trust in consumer online purchase behavior. This study undertook a comprehensive survey of online customers having e-shopping experiences in Taiwan and we received 1258 valid questionnaires. The empirical results, using structural equation modeling, indicated that perceived ease of use and perceived usefulness have a significant impact on trust in e-commerce. Trust also has a significant influence on attitude towards online purchase. However, there is no significant impact from trust on the intention of online purchase.", "title": "" }, { "docid": "14b15f15cb7dbb3c19a09323b4b67527", "text": "• Establishing mechanisms for sharing knowledge and technology among experts in different fields related to automated de-identification and reversible de-identification • Providing innovative solutions for concealing, or removal of identifiers while preserving data utility and naturalness • Investigating reversible de-identification and providing a thorough analysis of security risks of reversible de-identification • Providing a detailed analysis of legal, ethical and social repercussions of reversible/non-reversible de-identification • Promoting and facilitating the transfer of knowledge to all stakeholders (scientific community, end-users, SMEs) through workshops, conference special sessions, seminars and publications", "title": "" }, { "docid": "a2d76e1217b0510f82ebccab39b7d387", "text": "The floating photovoltaic system is a new concept in energy technology to meet the needs of our time. The system integrates existing land based photovoltaic technology with a newly developed floating photovoltaic technology. K-water has already completed two floating photovoltaic systems that enable generation of 100kW and 500kW respectively. In this paper, the generation efficiency of floating and land photovoltaic systems were compared and analyzed. 
Floating PV has shown greater generation efficiency by over 10% compared with the general PV systems installed overland", "title": "" }, { "docid": "d76d09ca1e87eb2e08ccc03428c62be0", "text": "Face recognition has the perception of a solved problem, however when tested at the million-scale exhibits dramatic variation in accuracies across the different algorithms [11]. Are the algorithms very different? Is access to good/big training data their secret weapon? Where should face recognition improve? To address those questions, we created a benchmark, MF2, that requires all algorithms to be trained on same data, and tested at the million scale. MF2 is a public large-scale set with 672K identities and 4.7M photos created with the goal to level playing field for large scale face recognition. We contrast our results with findings from the other two large-scale benchmarks MegaFace Challenge and MS-Celebs-1M where groups were allowed to train on any private/public/big/small set. Some key discoveries: 1) algorithms, trained on MF2, were able to achieve state of the art and comparable results to algorithms trained on massive private sets, 2) some outperformed themselves once trained on MF2, 3) invariance to aging suffers from low accuracies as in MegaFace, identifying the need for larger age variations possibly within identities or adjustment of algorithms in future testing.", "title": "" } ]
scidocsrr
b49ff0db5d098321d177ba90827ac711
SNAP: A General-Purpose Network Analysis and Graph-Mining Library
[ { "docid": "4253afeaeb2f238339611e5737ed3e06", "text": "Over the past decade there has been a growing public fascination with the complex connectedness of modern society. This connectedness is found in many incarnations: in the rapid growth of the Internet, in the ease with which global communication takes place, and in the ability of news and information as well as epidemics and financial crises to spread with surprising speed and intensity. These are phenomena that involve networks, incentives, and the aggregate behavior of groups of people; they are based on the links that connect us and the ways in which our decisions can have subtle consequences for others. This introductory undergraduate textbook takes an interdisciplinary look at economics, sociology, computing and information science, and applied mathematics to understand networks and behavior. It describes the emerging field of study that is growing at the interface of these areas, addressing fundamental questions about how the social, economic, and technological worlds are connected.", "title": "" }, { "docid": "8560ee360fdd21eec240673c9cad503a", "text": "We present new algorithms for Personalized PageRank estimation and Personalized PageRank search. First, for the problem of estimating Personalized PageRank (PPR) from a source distribution to a target node, we present a new bidirectional estimator with simple yet strong guarantees on correctness and performance, and 3x to 8x speedup over existing estimators in experiments on a diverse set of networks. Moreover, it has a clean algebraic structure which enables it to be used as a primitive for the Personalized PageRank Search problem: Given a network like Facebook, a query like \"people named John,\" and a searching user, return the top nodes in the network ranked by PPR from the perspective of the searching user. Previous solutions either score all nodes or score candidate nodes one at a time, which is prohibitively slow for large candidate sets. We develop a new algorithm based on our bidirectional PPR estimator which identifies the most relevant results by sampling candidates based on their PPR; this is the first solution to PPR search that can find the best results without iterating through the set of all candidate results. Finally, by combining PPR sampling with sequential PPR estimation and Monte Carlo, we develop practical algorithms for PPR search, and we show via experiments that our algorithms are efficient on networks with billions of edges.", "title": "" } ]
[ { "docid": "e74298e5bfd1cde8aaed2465cfb6ed33", "text": "We introduce a new low-distortion embedding of l<sub>2</sub><sup>d</sup> into l<sub>p</sub><sup>O(log n)</sup> (p=1,2), called the <i>Fast-Johnson-Linden-strauss-Transform</i>. The FJLT is faster than standard random projections and just as easy to implement. It is based upon the preconditioning of a sparse projection matrix with a randomized Fourier transform. Sparse random projections are unsuitable for low-distortion embeddings. We overcome this handicap by exploiting the \"Heisenberg principle\" of the Fourier transform, ie, its local-global duality. The FJLT can be used to speed up search algorithms based on low-distortion embeddings in l<sub>1</sub> and l<sub>2</sub>. We consider the case of approximate nearest neighbors in l<sub>2</sub><sup>d</sup>. We provide a faster algorithm using classical projections, which we then further speed up by plugging in the FJLT. We also give a faster algorithm for searching over the hypercube.", "title": "" }, { "docid": "cd094cc790b51c34ce315b59ae08b6d9", "text": "We present a framework and supporting algorithms to automate the use of temporal data reprojection as a general tool for optimizing procedural shaders. Although the general strategy of caching and reusing expensive intermediate shading calculations across consecutive frames has previously been shown to provide an effective trade-off between speed and accuracy, the critical choices of what to reuse and at what rate to refresh cached entries have been left to a designer. The fact that these decisions require a deep understanding of a procedure's semantic structure makes it challenging to select optimal candidates among possibly hundreds of alternatives. Our automated approach relies on parametric models of the way possible caching decisions affect the shader's performance and visual fidelity. These models are trained using a sample rendering session and drive an interactive profiler in which the user can explore the error/performance trade-offs associated with incorporating temporal reprojection. We evaluate the proposed models and selection algorithm with a prototype system used to optimize several complex shaders and compare our approach to current alternatives.", "title": "" }, { "docid": "eb286d4a7406dc235820ccb848844840", "text": "This paper describes the design and testing of a new introductory programming language, GRAIL1. GRAIL was designed to minimise student syntax errors, and hence allow the study of the impact of syntax errors on learning to program. An experiment was conducted using students learning programming for the first time. The students were split into two groups, one group learning LOGO and the other GRAIL. The resulting code was then analysed for syntax and logic errors. The groups using LOGO made more errors than the groups using GRAIL, which shows that choice of programming language can have a substantial impact on error rates of novice programmers.", "title": "" }, { "docid": "34ceb0e84b4e000b721f87bcbec21094", "text": "The principal goal guiding the design of any encryption algorithm must be security against unauthorized attacks. However, for all practical applications, performance and the cost of implementation are also important concerns. A data encryption algorithm would not be of much use if it is secure enough but slow in performance because it is a common practice to embed encryption algorithms in other applications such as e-commerce, banking, and online transaction processing applications. 
Embedding of encryption algorithms in other applications also precludes a hardware implementation, and is thus a major cause of degraded overall performance of the system. In this paper, the four of the popular secret key encryption algorithms, i.e., DES, 3DES, AES (Rijndael), and the Blowfish have been implemented, and their performance is compared by encrypting input files of varying contents and sizes, on different Hardware platforms. The algorithms have been implemented in a uniform language, using their standard specifications, to allow a fair comparison of execution speeds. The performance results have been summarized and a conclusion has been presented. Based on the experiments, it has been concluded that the Blowfish is the best performing algorithm among the algorithms chosen for implementation.", "title": "" }, { "docid": "a4030b9aa31d4cc0a2341236d6f18b5a", "text": "Generative adversarial networks (GANs) have achieved huge success in unsupervised learning. Most of GANs treat the discriminator as a classifier with the binary sigmoid cross entropy loss function. However, we find that the sigmoid cross entropy loss function will sometimes lead to the saturation problem in GANs learning. In this work, we propose to adopt the L2 loss function for the discriminator. The properties of the L2 loss function can improve the stabilization of GANs learning. With the usage of the L2 loss function, we propose the multi-class generative adversarial networks for the purpose of image generation with multiple classes. We evaluate the multi-class GANs on a handwritten Chinese characters dataset with 3740 classes. The experiments demonstrate that the multi-class GANs can generate elegant images on datasets with a large number of classes. Comparison experiments between the L2 loss function and the sigmoid cross entropy loss function are also conducted and the results demonstrate the stabilization of the L2 loss function.", "title": "" }, { "docid": "b4d7a17eb034bcf5f6616d9338fe4265", "text": "Accessory breasts, usually with a protuberant appearance, are composed of both the central accessory breast tissue and adjacent fat tissue. They are a palpable convexity and cosmetically unsightly. Consequently, patients often desire cosmetic improvement. The traditional general surgical treatment for accessory breasts is removal of the accessory breast tissue, fat tissue, and covering skin as a whole unit. A rather long ugly scar often is left after this operation. A minimally invasive method frequently used by the plastic surgeon is to “dig out” the accessory breast tissue. A central depression appearance often is left due to the adjacent fat tissue remnant. From the cosmetic point of view, neither a long scar nor a bulge is acceptable. A minimal incision is made, and the tumescent liposuction technique is used to aspirate out both the central accessory breast tissue and adjacent fat tissue. If there is an areola or nipple in the accessory breast, either the areola or nipple is excised after liposuction during the same operation. For patients who have too much extra skin in the accessory breast area, a small fusiform incision is made to remove the extra skin after the accessory breast tissue and fat tissue have been aspirated out. From August 2003 to January 2008, 51 patients underwent surgery using the described technique. All were satisfied with their appearance after their initial surgery except for two patients with minimal associated morbidity. 
This report describes a new approach for treating accessory breasts that results in minimal scarring and a better appearance than can be achieved with traditional methods.", "title": "" }, { "docid": "a4dd8ab8b45a8478ca4ac7e19debf777", "text": "Most sensory, cognitive and motor functions depend on the interactions of many neurons. In recent years, there has been rapid development and increasing use of technologies for recording from large numbers of neurons, either sequentially or simultaneously. A key question is what scientific insight can be gained by studying a population of recorded neurons beyond studying each neuron individually. Here, we examine three important motivations for population studies: single-trial hypotheses requiring statistical power, hypotheses of population response structure and exploratory analyses of large data sets. Many recent studies have adopted dimensionality reduction to analyze these populations and to find features that are not apparent at the level of individual neurons. We describe the dimensionality reduction methods commonly applied to population activity and offer practical advice about selecting methods and interpreting their outputs. This review is intended for experimental and computational researchers who seek to understand the role dimensionality reduction has had and can have in systems neuroscience, and who seek to apply these methods to their own data.", "title": "" }, { "docid": "a5f78c3708a808fd39c4ced6152b30b8", "text": "Building ontology for wireless network intrusion detection is an emerging method for the purpose of achieving high accuracy, comprehensive coverage, self-organization and flexibility for network security. In this paper, we leverage the power of Natural Language Processing (NLP) and Crowdsourcing for this purpose by constructing lightweight semi-automatic ontology learning framework which aims at developing a semantic-based solution-oriented intrusion detection knowledge map using documents from Scopus. Our proposed framework uses NLP as its automatic component and Crowdsourcing is applied for the semi part. The main intention of applying both NLP and Crowdsourcing is to develop a semi-automatic ontology learning method in which NLP is used to extract and connect useful concepts while in uncertain cases human power is leveraged for verification. This heuristic method shows a theoretical contribution in terms of lightweight and timesaving ontology learning model as well as practical value by providing solutions for detecting different types of intrusions.", "title": "" }, { "docid": "4ba308bd5ff2196b8ca34d170acb8275", "text": "This paper reviews the state-of-the-art in automatic genre classification of music collections through three main paradigms: expert systems, unsupervised classification, and supervised classification. The paper discusses the importance of music genres with their definitions and hierarchies. It also presents techniques to extract meaningful information from audio data to characterize musical excerpts. The paper also presents the results of new emerging research fields and techniques that investigate the proximity of music genres", "title": "" }, { "docid": "3c75d05e1b6abf2cb03573e1162954a7", "text": "With the increasing popularity of portable camera devices and embedded visual processing, text extraction from natural scene images has become a key problem that is deemed to change our everyday lives via novel applications such as augmented reality. 
Algorithms for text extraction from natural scene images are generally composed of the following three stages: (i) detection and localization, (ii) text enhancement and segmentation, which must be robust to variations in the font size and color, text alignment, illumination change and reflections, and (iii) optical character recognition (OCR). This paper aims to classify and assess the latest algorithms. More specifically, we draw attention to studies on the first two steps in the extraction process, since OCR is a well-studied area where powerful algorithms already exist. This paper offers researchers a link to a public image database for the algorithm assessment of text extraction from natural scene images. © 2013 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "9973ae8007662ae54ad272f84c771f69", "text": "Skeletal deficiency in the central midface impacts nasal aesthetics. This lack of lower face projection can be corrected by alloplastic augmentation of the pyriform aperture. Creating convexity in the deficient midface will make the nose seem less prominent. Augmentation of the pyriform aperture is, therefore, often a useful adjunct during the rhinoplasty procedure. Augmenting the skeleton in this area can alter the projection of the nasal base, the nasolabial angle, and the vertical plane of the lip. The implant design and surgical techniques described here are extensions of others' previous efforts to improve paranasal aesthetics.", "title": "" }, { "docid": "a696fd5e0328b27d8d952bdadfd6f58c", "text": "Aiming at the problem of low speed of 3D reconstruction of indoor scenes with monocular vision, the color images and depth images of indoor scenes captured by an ASUS Xtion monocular vision sensor were used for 3D reconstruction. Image features are extracted using the ORB feature detection algorithm, the efficiency of several classic feature detection algorithms in image matching is compared, and the RANSAC and ICP algorithms are used for point cloud fusion. Through experiments, a fast 3D reconstruction method for indoor, simple and small-scale static environments is realized. The method has good accuracy, robustness, real-time performance and flexibility.", "title": "" }, { "docid": "dc2f4cbd2c18e4f893750a0a1a40002b", "text": "A microstrip half-grid array antenna (HGA) based on low temperature co-fired ceramic (LTCC) technology is presented in this paper. The antenna is designed for the 77-81 GHz radar frequency band and uses a high permittivity material (εr = 7.3). The traditional single-grid array antenna (SGA) uses two radiating elements in the H-plane. For applications using digital beam forming, the focusing of an SGA in the scanning plane (H-plane) limits the field of view (FoV) of the radar system and the width of the SGA enlarges the minimal spacing between the adjacent channels. To overcome this, an array antenna using only half of the grid as radiating element was designed. As feeding network, a laminated waveguide with a vertically arranged power divider was adopted. For comparison, both an SGA and an HGA were fabricated. The measured results show: using an HGA, an HPBW increment in the H-plane can be achieved and their beam patterns in the E-plane remain similar. This compact LTCC antenna is suitable for radar application with a large FoV requirement.", "title": "" }, { "docid": "f542805c0efd007b1e032eeec9146992", "text": "This study presents the Inflexible Eating Questionnaire (IEQ), which measures the inflexible adherence to subjective eating rules. 
The scale's structure and psychometric properties were examined in distinct samples from the general population comprising both men and women. IEQ presented an 11-item one-dimensional structure, revealed high internal consistency, construct and temporal stability, and discriminated eating psychopathology cases from non-cases. The IEQ presented significant associations with dietary restraint, eating psychopathology, body image inflexibility, general psychopathology symptoms, and decreased intuitive eating. IEQ was a significant moderator on the association between dietary restraint and eating psychopathology symptoms. Findings suggested that the IEQ is a valid and useful instrument with potential implications for research on psychological inflexibility in disordered eating.", "title": "" }, { "docid": "f0b95e707c172bbe5fbbf8d6d80836d4", "text": "While in supervised learning, the validation error is an unbiased estimator of the generalization (test) error and complexity-based generalization bounds are abundant, no such bounds exist for learning a mapping in an unsupervised way. As a result, when training GANs and specifically when using GANs for learning to map between domains in a completely unsupervised way, one is forced to select the hyperparameters and the stopping epoch by subjectively examining multiple options. We propose a novel bound for predicting the success of unsupervised cross domain mapping methods, which is motivated by the recently proposed Simplicity Principle. The bound can be applied both in expectation, for comparing hyperparameters and for selecting a stopping criterion, or per sample, in order to predict the success of a specific cross-domain translation. The utility of the bound is demonstrated in an extensive set of experiments employing multiple recent algorithms. Our code is available at https: //github.com/sagiebenaim/gan bound.", "title": "" }, { "docid": "75a1832a5fdd9c48f565eb17e8477b4b", "text": "We introduce a new interactive system: a game that is fun and can be used to create valuable output. When people play the game they help determine the contents of images by providing meaningful labels for them. If the game is played as much as popular online games, we estimate that most images on the Web can be labeled in a few months. Having proper labels associated with each image on the Web would allow for more accurate image search, improve the accessibility of sites (by providing descriptions of images to visually impaired individuals), and help users block inappropriate images. Our system makes a significant contribution because of its valuable output and because of the way it addresses the image-labeling problem. Rather than using computer vision techniques, which don't work well enough, we encourage people to do the work by taking advantage of their desire to be entertained.", "title": "" }, { "docid": "4f2a890264de889ce290e9252282901f", "text": "Combining Generative Adversarial Networks (GANs) with encoders that learn to encode data points has shown promising results in learning data representations in an unsupervised way. We propose a framework that combines an encoder and a generator to learn disentangled representations which encode meaningful information about the data distribution without the need for any labels. While current approaches focus mostly on the generative aspects of GANs, our framework can be used to perform inference on both real and generated data points. 
Experiments on several data sets show that the encoder learns interpretable, disentangled representations which encode descriptive properties and can be used to sample images that exhibit specific characteristics.", "title": "" }, { "docid": "825099c399077e6b759effed0908d11d", "text": "Neuronal oscillations are ubiquitous in the brain and may contribute to cognition in several ways: for example, by segregating information and organizing spike timing. Recent data show that delta, theta and gamma oscillations are specifically engaged by the multi-timescale, quasi-rhythmic properties of speech and can track its dynamics. We argue that they are foundational in speech and language processing, 'packaging' incoming information into units of the appropriate temporal granularity. Such stimulus-brain alignment arguably results from auditory and motor tuning throughout the evolution of speech and language and constitutes a natural model system allowing auditory research to make a unique contribution to the issue of how neural oscillatory activity affects human cognition.", "title": "" }, { "docid": "f8f1a840930a66795e6b04f8ece1cd63", "text": "This is a talk given at UW optimization seminar, November 7 2006.", "title": "" } ]
scidocsrr
4cebc49fd707ca900d1f063ec05e22bc
Toward Tutoring Help Seeking: Applying Cognitive Modeling to Meta-cognitive Skills
[ { "docid": "e45204012e5a12504cbb4831c9b5d629", "text": "The focus of this paper is the application of the theory of contingent tutoring to the design of a computer-based system designed to support learning in aspects of algebra. Analyses of interactions between a computer-based tutoring system and 42, 14and 15-year-old pupils are used to explore and explain the relations between individual di€erences in learner±tutor interaction, learners' prior knowledge and learning outcomes. Parallels between the results of these analyses and empirical investigations of help seeking in adult±child tutoring are drawn out. The theoretical signi®cance of help seeking as a basis for studying the impact of individual learner di€erences in the collaborative construction of `zones of proximal development' is assessed. In addition to demonstrating the signi®cance of detailed analyses of learner±system interaction as a basis for inferences about learning processes, the investigation also attempts to show the value of exploiting measures of on-line help seeking as a means of assessing learning transfer. Finally, the implications of the ®ndings for contingency theory are discussed, and the theoretical and practical bene®ts of integrating psychometric assessment, interaction process analyses, and knowledge-based learner modelling in the design and evaluation of computer-based tutoring are explored. # 2000 Published by Elsevier Science Ltd. All rights reserved.", "title": "" } ]
[ { "docid": "af47d1cc068467eaee7b6129682c9ee3", "text": "Diffusion kurtosis imaging (DKI) is gaining rapid adoption in the medical imaging community due to its ability to measure the non-Gaussian property of water diffusion in biological tissues. Compared to traditional diffusion tensor imaging (DTI), DKI can provide additional details about the underlying microstructural characteristics of the neural tissues. It has shown promising results in studies on changes in gray matter and mild traumatic brain injury where DTI is often found to be inadequate. The DKI dataset, which has high-fidelity spatio-angular fields, is difficult to visualize. Glyph-based visualization techniques are commonly used for visualization of DTI datasets; however, due to the rapid changes in orientation, lighting, and occlusion, visually analyzing the much more higher fidelity DKI data is a challenge. In this paper, we provide a systematic way to manage, analyze, and visualize high-fidelity spatio-angular fields from DKI datasets, by using spherical harmonics lighting functions to facilitate insights into the brain microstructure.", "title": "" }, { "docid": "2d30ed139066b025dcb834737d874c99", "text": "Considerable advances have occurred in recent years in the scientific knowledge of the benefits of breastfeeding, the mechanisms underlying these benefits, and in the clinical management of breastfeeding. This policy statement on breastfeeding replaces the 1997 policy statement of the American Academy of Pediatrics and reflects this newer knowledge and the supporting publications. The benefits of breastfeeding for the infant, the mother, and the community are summarized, and recommendations to guide the pediatrician and other health care professionals in assisting mothers in the initiation and maintenance of breastfeeding for healthy term infants and high-risk infants are presented. The policy statement delineates various ways in which pediatricians can promote, protect, and support breastfeeding not only in their individual practices but also in the hospital, medical school, community, and nation.", "title": "" }, { "docid": "c05bcb214a7ac6d18c839c70f56b05db", "text": "A 0.3-1.4 GHz all-digital phase locked loop (ADPLL) with an adaptive loop gain controller (ALGC), a 1/8-resolution fractional divider and a frequency search block is presented. The ALGC reduces the nonlinearity of the bang-bang phase-frequency detector (BBPFD), reducing output jitter. The fractional divider partially compensates for the large input phase error caused by fractional-N frequency synthesis. A fast frequency search unit using the false position method achieves frequency lock in 6 iterations that correspond to 192 reference clock cycles. A prototype ADPLL using a BBPFD with a dead-zone-free retimer, an ALGC, a fractional divider, and a digital logic implementation of a frequency search algorithm was fabricated in a 0.13-μm CMOS logic process. The core occupies 0.2 mm2 and consumes 16.5 mW with a 1.2-V supply at 1.35-GHz. Measured RMS and peak-to-peak jitter with activating the ALGC are 3.7 ps and 32 ps respectively.", "title": "" }, { "docid": "0b7f00dcdfdd1fe002b2363097914bba", "text": "A new field of research, visual analytics, has been introduced. This has been defined as \"the science of analytical reasoning facilitated by interactive visual interfaces\" (Thomas and Cook, 2005). 
Visual analytic environments, therefore, support analytical reasoning using visual representations and interactions, with data representations and transformation capabilities, to support production, presentation, and dissemination. As researchers begin to develop visual analytic environments, it is advantageous to develop metrics and methodologies to help researchers measure the progress of their work and understand the impact their work has on the users who work in such environments. This paper presents five areas or aspects of visual analytic environments that should be considered as metrics and methodologies for evaluation are developed. Evaluation aspects need to include usability, but it is necessary to go beyond basic usability. The areas of situation awareness, collaboration, interaction, creativity, and utility are proposed as the five evaluation areas for initial consideration. The steps that need to be undertaken to develop systematic evaluation methodologies and metrics for visual analytic environments are outlined", "title": "" }, { "docid": "46a8022eea9ed7bcfa1cd8041cab466f", "text": "In this paper, a bidirectional converter with a uniform controller for Vehicle to grid (V2G) application is designed. The bidirectional converter consists of two stages one is ac-dc converter and second is dc-dc converter. For ac-dc converter bipolar modulation is used. Two separate controller systems are designed for converters which follow active and reactive power commands from grid. Uniform controller provides reactive power support to the grid. The charger operates in two quadrants I and IV. There are three modes of operation viz. charging only operation, charging-capacitive operation and charging-inductive operation. During operation under these three operating modes vehicle's battery is not affected. The whole system is tested using MATLAB/SIMULINK.", "title": "" }, { "docid": "57ce739b1845a4b7e0ff5e2ebdd3b16d", "text": "Public key infrastructures (PKIs) enable users to look up and verify one another’s public keys based on identities. Current approaches to PKIs are vulnerable because they do not offer sufficiently strong guarantees of identity retention; that is, they do not effectively prevent one user from registering a public key under another’s already-registered identity. In this paper, we leverage the consistency guarantees provided by cryptocurrencies such as Bitcoin and Namecoin to build a PKI that ensures identity retention. Our system, called Certcoin, has no central authority and thus requires the use of secure distributed dictionary data structures to provide efficient support for key lookup.", "title": "" }, { "docid": "8069999c95b31e8c847091f72b694af7", "text": "Software defined radio (SDR) is a rapidly evolving technology which implements some functional modules of a radio system in software executing on a programmable processor. SDR provides a flexible mechanism to reconfigure the radio, enabling networked devices to easily adapt to user preferences and the operating environment. However, the very mechanisms that provide the ability to reconfigure the radio through software also give rise to serious security concerns such as unauthorized modification of the software, leading to radio malfunction and interference with other users' communications. Both the SDR device and the network need to be protected from such malicious radio reconfiguration.\n In this paper, we propose a new architecture to protect SDR devices from malicious reconfiguration. 
The proposed architecture is based on robust separation of the radio operation environment and user application environment through the use of virtualization. A secure radio middleware layer is used to intercept all attempts to reconfigure the radio, and a security policy monitor checks the target configuration against security policies that represent the interests of various parties. Therefore, secure reconfiguration can be ensured in the radio operation environment even if the operating system in the user application environment is compromised. We have prototyped the proposed secure SDR architecture using VMware and the GNU Radio toolkit, and demonstrate that the overheads incurred by the architecture are small and tolerable. Therefore, we believe that the proposed solution could be applied to address SDR security concerns in a wide range of both general-purpose and embedded computing systems.", "title": "" }, { "docid": "ba6fe1b26d76d7ff3e84ddf3ca5d3e35", "text": "The spacing effect describes the robust finding that long-term learning is promoted when learning events are spaced out in time rather than presented in immediate succession. Studies of the spacing effect have focused on memory processes rather than for other types of learning, such as the acquisition and generalization of new concepts. In this study, early elementary school children (5- to 7-year-olds; N = 36) were presented with science lessons on 1 of 3 schedules: massed, clumped, and spaced. The results revealed that spacing lessons out in time resulted in higher generalization performance for both simple and complex concepts. Spaced learning schedules promote several types of learning, strengthening the implications of the spacing effect for educational practices and curriculum.", "title": "" }, { "docid": "8f177b79f0b89510bd84e1f503b5475f", "text": "We propose a distributed cooperative framework among base stations (BS) with load balancing (dubbed as inter-BS for simplicity) for improving energy efficiency of OFDMA-based cellular access networks. Proposed inter-BS cooperation is formulated following the principle of ecological self-organization. Based on the network traffic, BSs mutually cooperate for distributing traffic among themselves and thus, the number of active BSs is dynamically adjusted for energy savings. For reducing the number of inter-BS communications, a three-step measure is taken by using estimated load factor (LF), initializing the algorithm with only the active BSs and differentiating neighboring BSs according to their operating modes for distributing traffic. An exponentially weighted moving average (EWMA)-based technique is proposed for estimating the LF in advance based on the historical data. Various selection schemes for finding the best BSs to distribute traffic are also explored. Furthermore, we present an analytical formulation for modeling the dynamic switching of BSs. A thorough investigation under a wide range of network settings is carried out in the context of an LTE system. Results demonstrate a significant enhancement in network energy efficiency yielding a much higher savings than the compared schemes. Moreover, frequency of inter-BS correspondences can be reduced by over 80%.", "title": "" }, { "docid": "ceb270c07d26caec5bc20e7117690f9f", "text": "Pesticides including insecticides and miticides are primarily used to regulate arthropod (insect and mite) pest populations in agricultural and horticultural crop production systems. 
However, continual reliance on pesticides may eventually result in a number of potential ecological problems including resistance, secondary pest outbreaks, and/or target pest resurgence [1,2]. Therefore, implementation of alternative management strategies is justified in order to preserve existing pesticides and produce crops with minimal damage from arthropod pests. One option that has gained interest by producers is integrating pesticides with biological control agents or natural enemies including parasitoids and predators [3]. This is often referred to as ‘compatibility,’ which is the ability to integrate or combine natural enemies with pesticides so as to regulate arthropod pest populations without directly or indirectly affecting the life history parameters or population dynamics of natural enemies [2,4]. This may also refer to pesticides being effective against targeted arthropod pests but relatively non-harmful to natural enemies [5,6].", "title": "" }, { "docid": "8988aaa4013ef155cbb09644ca491bab", "text": "Uses and gratification theory aids in the assessment of how audiences use a particular medium and the gratifications they derive from that use. In this paper this theory has been applied to derive Internet uses and gratifications for Indian Internet users. This study proceeds in four stages. First, six first-order gratifications namely self development, wide exposure, user friendliness, relaxation, career opportunities, and global exchange were identified using an exploratory factor analysis. Then the first order gratifications were subjected to firstorder confirmatory factor analysis. Third, using second-order confirmatory factor analysis three types of secondorder gratifications were obtained, namely process gratifications, content gratifications and social gratifications. Finally, with the use of t-tests the study has shown that males and females differ significantly on the gratification factors “self development”, “user friendliness”, “wide exposure” and “relaxation.” The intended audience consists of masters’ level students and doctoral students who want to learn exploratory factor analysis and confirmatory factor analysis. This case study can also be used to teach the basics of structural equation modeling using the software AMOS.", "title": "" }, { "docid": "ca768eb654b323354b7d78969162cb81", "text": "Hyper-redundant manipulators can be fragile, expensive, and limited in their flexibility due to the distributed and bulky actuators that are typically used to achieve the precision and degrees of freedom (DOFs) required. Here, a manipulator is proposed that is robust, high-force, low-cost, and highly articulated without employing traditional actuators mounted at the manipulator joints. Rather, local tunable stiffness is coupled with off-board spooler motors and tension cables to achieve complex manipulator configurations. Tunable stiffness is achieved by reversible jamming of granular media, which-by applying a vacuum to enclosed grains-causes the grains to transition between solid-like states and liquid-like ones. Experimental studies were conducted to identify grains with high strength-to-weight performance. A prototype of the manipulator is presented with performance analysis, with emphasis on speed, strength, and articulation. 
This novel design for a manipulator, and use of jamming for robotic applications in general, could greatly benefit applications such as human-safe robotics and systems in which robots need to exhibit high flexibility to conform to their environments.", "title": "" }, { "docid": "ae4cd27601b799821222db6f546d1127", "text": "An RF high-voltage CMOS technology is presented for cost-effective monolithic integration of cellular RF transmit functions. The technology integrates a modified LDMOS RF power transistor capable of nearly comparable linear and saturated RF power characteristics to GaAs solutions at cellular frequency bands. Measured results for multistage cellular power amplifier (PA) designs processed on bulk-Si and silicon-on-insulator on high-resistivity Si substrates (1 kΩ·cm) are presented. The low-band multistage PA achieves greater than 60% power-added efficiency (PAE) with more than 35.5-dBm output power. The high-band PA achieves 45%-53% PAE across the band with greater than 33.4-dBm output power. Measured linearity performance is presented using an EDGE modulation source. A dc/dc buck converter was also integrated in the PA die as the power management circuitry. Measured results for the output power, PAE, and spurious emissions in the receive band while the dc/dc converter is biasing the PA and running at different modes are reported.", "title": "" }, { "docid": "f4f9a79bf6dc7afac056e9615c25c7f4", "text": "Multi-scanner Antivirus systems provide insightful information on the nature of a suspect application; however there is often a lack of consensus and consistency between different Anti-Virus engines. In this article, we analyze more than 250 thousand malware signatures generated by 61 different Anti-Virus engines after analyzing 82 thousand different Android malware applications. We identify 41 different malware classes grouped into three major categories, namely Adware, Harmful Threats and Unknown or Generic signatures. We further investigate the relationships between such 41 classes using community detection algorithms from graph theory to identify similarities between them; and we finally propose a Structure Equation Model to identify which Anti-Virus engines are more powerful at detecting each macro-category. As an application, we show how such models can help in identifying whether Unknown malware applications are more likely to be of Harmful or Adware type.", "title": "" }, { "docid": "f29f529ee14f4ae90ebb08ba26f8a8c1", "text": "After completing this article, the reader should be able to: • Describe the various biopsy types that require specimen imaging. • List methods of guiding biopsy procedures. • Explain the reasons behind specimen imaging. • Describe various methods for imaging specimens.", "title": "" }, { "docid": "403310053251e81cdad10addedb64c87", "text": "Many types of data are best analyzed by fitting a curve using nonlinear regression, and computer programs that perform these calculations are readily available. Like every scientific technique, however, a nonlinear regression program can produce misleading results when used inappropriately. This article reviews the use of nonlinear regression in a practical and nonmathematical manner to answer the following questions: Why is nonlinear regression superior to linear regression of transformed data? How does nonlinear regression differ from polynomial regression and cubic spline? How do nonlinear regression programs work? What choices must an investigator make before performing nonlinear regression? 
What do the final results mean? How can two sets of data or two fits to one set of data be compared? What problems can cause the results to be wrong? This review is designed to demystify nonlinear regression so that both its power and its limitations will be appreciated.", "title": "" }, { "docid": "9fcf513f9f8c7f3e00ae78b55618af8b", "text": "Graph analysis is becoming increasingly important in many research fields - biology, social sciences, data mining - and daily applications - path finding, product recommendation. Many different large-scale graph-processing systems have been proposed for different platforms. However, little effort has been placed on designing systems for hybrid CPU-GPU platforms.In this work, we present HyGraph, a novel graph-processing systems for hybrid platforms which delivers performance by using CPUs and GPUs concurrently. Its core feature is a specialized data structure which enables dynamic scheduling of jobs onto both the CPU and the GPUs, thus (1) supersedes the need for static workload distribution, (2) provides load balancing, and (3) minimizes inter-process communication overhead by overlapping computation and communication.Our preliminary results demonstrate that HyGraph outperforms CPU-only and GPU-only solutions, delivering close-to-optimal performance on the hybrid system. Moreover, it supports large-scale graphs which do not fit into GPU memory, and it is competitive against state-of-the-art systems.", "title": "" }, { "docid": "1e30d2f8e11bfbd868fdd0dfc0ea4179", "text": "In this paper, I study how companies can use their personnel data and information from job satisfaction surveys to predict employee quits. An important issue discussed at length in the paper is how employers can ensure the anonymity of employees in surveys used for management and HR analytics. I argue that a simple mechanism where the company delegates the implementation of job satisfaction surveys to an external consulting company can be optimal. In the subsequent empirical analysis, I use a unique combination of firm-level data (personnel records) and information from job satisfaction surveys to assess the benefits for companies using data in their decision-making. Moreover, I show how companies can move from a descriptive to a predictive approach.", "title": "" }, { "docid": "7c4104651e484e4cbff5735d62f114ef", "text": "A pair of salient tradeoffs have driven the multiple-input multiple-output (MIMO) systems developments. More explicitly, the early era of MIMO developments was predominantly motivated by the multiplexing-diversity tradeoff between the Bell Laboratories layered space-time and space-time block coding. Later, the linear dispersion code concept was introduced to strike a flexible tradeoff. The more recent MIMO system designs were motivated by the performance-complexity tradeoff, where the spatial modulation and space-time shift keying concepts eliminate the problem of inter-antenna interference and perform well with the aid of low-complexity linear receivers without imposing a substantial performance loss on generic maximum-likelihood/max a posteriori -aided MIMO detection. Against the background of the MIMO design tradeoffs in both uncoded and coded MIMO systems, in this treatise, we offer a comprehensive survey of MIMO detectors ranging from hard decision to soft decision. The soft-decision MIMO detectors play a pivotal role in approaching to the full-performance potential promised by the MIMO capacity theorem. 
In the near-capacity system design, the soft-decision MIMO detection dominates the total complexity, because all the MIMO signal combinations have to be examined, when both the channel’s output signal and the a priori log-likelihood ratios gleaned from the channel decoder are taken into account. Against this background, we provide reduced-complexity design guidelines, which are conceived for a wide range of soft-decision MIMO detectors.", "title": "" } ]
scidocsrr
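One of the negative passages in the record that closes above surveys soft-decision MIMO detection and notes that near-capacity operation requires examining every candidate transmit vector together with the a priori LLRs. Purely as an illustration of that brute-force max-log LLR computation, and not as anything taken from the surveyed papers, the NumPy sketch below enumerates all transmit vectors for a 2x2 Gray-mapped QPSK link; the constellation, bit mapping, noise model and LLR sign convention are my own assumptions.

```python
import numpy as np
from itertools import product

# Assumed setup (not from the surveyed paper): 2x2 MIMO, Gray-mapped QPSK,
# no a priori LLRs, LLR_k = min_{b_k=1} metric - min_{b_k=0} metric.
QPSK = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
BITS = [(0, 0), (0, 1), (1, 0), (1, 1)]  # bits carried by each QPSK point


def maxlog_llrs(y, H, noise_var):
    n_tx = H.shape[1]
    n_bits = 2 * n_tx
    best = {0: [np.inf] * n_bits, 1: [np.inf] * n_bits}
    # Enumerate every candidate transmit vector (the cost the survey warns about).
    for idx in product(range(len(QPSK)), repeat=n_tx):
        x = QPSK[list(idx)]
        d = np.linalg.norm(y - H @ x) ** 2 / noise_var  # Euclidean metric
        bits = [b for i in idx for b in BITS[i]]
        for k, bk in enumerate(bits):
            best[bk][k] = min(best[bk][k], d)
    return np.array(best[1]) - np.array(best[0])


# Toy usage: random channel, known transmit vector, moderate noise.
rng = np.random.default_rng(0)
H = (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))) / np.sqrt(2)
x = QPSK[[0, 3]]
y = H @ x + 0.1 * (rng.standard_normal(2) + 1j * rng.standard_normal(2))
print(maxlog_llrs(y, H, noise_var=0.02))  # four LLRs, one per transmitted bit
```

Even at this toy size the enumeration grows as 4 to the power of the transmit-antenna count, which is exactly why the passage emphasises reduced-complexity detector designs.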
4c587003ab58730e7dfa82602fcf0664
Graph-Based Named Entity Linking with Wikipedia
[ { "docid": "9d918a69a2be2b66da6ecf1e2d991258", "text": "We designed and implemented TAGME, a system that is able to efficiently and judiciously augment a plain-text with pertinent hyperlinks to Wikipedia pages. The specialty of TAGME with respect to known systems [5,8] is that it may annotate texts which are short and poorly composed, such as snippets of search-engine results, tweets, news, etc.. This annotation is extremely informative, so any task that is currently addressed using the bag-of-words paradigm could benefit from using this annotation to draw upon (the millions of) Wikipedia pages and their inter-relations.", "title": "" }, { "docid": "9118de2f5c7deebb9c3c6175c0b507b2", "text": "The integration of facts derived from information extraction systems into existing knowledge bases requires a system to disambiguate entity mentions in the text. This is challenging due to issues such as non-uniform variations in entity names, mention ambiguity, and entities absent from a knowledge base. We present a state of the art system for entity disambiguation that not only addresses these challenges but also scales to knowledge bases with several million entries using very little resources. Further, our approach achieves performance of up to 95% on entities mentioned from newswire and 80% on a public test set that was designed to include challenging queries.", "title": "" } ]
[ { "docid": "eb7c34c4959c39acb18fc5920ff73dba", "text": "Acoustic evidence suggests that contemporary Seoul Korean may be developing a tonal system, which is arising in the context of a nearly completed change in how speakers use voice onset time (VOT) to mark the language’s distinction among tense, lax and aspirated stops.Data from 36 native speakers of varying ages indicate that while VOT for tense stops has not changed since the 1960s, VOT differences between lax and aspirated stops have decreased, in some cases to the point of complete overlap. Concurrently, the mean F0 for words beginning with lax stops is significantly lower than the mean F0 for comparable words beginning with tense or aspirated stops. Hence the underlying contrast between lax and aspirated stops is maintained by younger speakers, but is phonetically manifested in terms of differentiated tonal melodies: laryngeally unmarked (lax) stops trigger the introduction of a default L tone, while laryngeally marked stops (aspirated and tense) introduce H, triggered by a feature specification for [stiff].", "title": "" }, { "docid": "f35f7aab4bf63527abbc3d7f4515b6d2", "text": "The elements of the Hessian matrix consist of the second derivatives of the error measure with respect to the weights and thresholds in the network. They are needed in Bayesian estimation of network regularization parameters, for estimation of error bars on the network outputs, for network pruning algorithms, and for fast retraining of the network following a small change in the training data. In this paper we present an extended backpropagation algorithm that allows all elements of the Hessian matrix to be evaluated exactly for a feedforward network of arbitrary topology. Software implementation of the algorithm is straightforward.", "title": "" }, { "docid": "62029c586f65cb6708255517b485526f", "text": "In this work, SDN has been utilized to alleviate and eliminate the problem of ARP poisoning attack. This attack is the underlying infrastructure for many other network attacks, such as, man in the middle, denial of service and session hijacking. In this paper we propose a new algorithm to resolve the problem of ARP spoofing. The algorithm can be applied in two different scenarios. The two scenarios are based on whether a network host will be assigned a dynamic or a static IP address. We call the first scenario SDN_DYN; the second scenario is called SDN_STA. For the evaluation process, a physical SDN-enabled switch has been utilized with Ryu controller. Our results show that the new algorithm can prevent ARP spoofing and other attacks exploiting it.", "title": "" }, { "docid": "e2280986abcec2d54ea68bd03bfea295", "text": "Image captioning is a challenging task that combines the field of computer vision and natural language processing. A variety of approaches have been proposed to achieve the goal of automatically describing an image, and recurrent neural network (RNN) or long-short term memory (LSTM) based models dominate this field. However, RNNs or LSTMs cannot be calculated in parallel and ignore the underlying hierarchical structure of a sentence. In this paper, we propose a framework that only employs convolutional neural networks (CNNs) to generate captions. Owing to parallel computing, our basic model is around 3× faster than NIC (an LSTM-based model) during training time, while also providing better results. We conduct extensive experiments on MSCOCO and investigate the influence of the model width and depth. 
Compared with LSTM-based models that apply similar attention mechanisms, our proposed models achieves comparable scores of BLEU-1,2,3,4 and METEOR, and higher scores of CIDEr. We also test our model on the paragraph annotation dataset [22], and get higher CIDEr score compared with hierarchical LSTMs.", "title": "" }, { "docid": "63b983921f19775f4e598b4b2111b084", "text": "This paper deals with the emergence of perceived age discrimination climate on the company level and its performance consequences. In this new approach to the field of diversity research, we investigated (a) the effect of organizational level age diversity on collective perceptions of age discrimination climate that (b) in turn should influence the collective affective commit ment of employees, which is (c) an important trigger for overall company performance. In a large scale study that included 128 companies, a total of 8,651 employees provided data on their perceptions of age discrimination and affective commitment on the company level. Information on firm level performance was collected from key informants. We tested the proposed model using structural equation modeling (SEM) procedures and, overall, found support for all hypothesized relationships. The findings demonstrated that age diversity seems to be related to the emergence of an age discrimination climate in companies, which negatively impacts overall firm performance through the mediation of affective commitment. These results make valuable contributions to the diversity and discrimination literature by establish ing perceived age discrimination on the company level as a decisive mediator in the age diversity/performance link. The results also suggest important practical implications for the effective management of an increasingly age diverse workforce. Copyright # 2010 John Wiley & Sons, Ltd.", "title": "" }, { "docid": "2b310a05b6a0c0fae45a2e15f8d52101", "text": "Cyber threats and the field of computer cyber defense are gaining more and more an increased importance in our lives. Starting from our regular personal computers and ending with thin clients such as netbooks or smartphones we find ourselves bombarded with constant malware attacks. In this paper we will present a new and novel way in which we can detect these kind of attacks by using elements of modern game theory. We will present the effects and benefits of game theory and we will talk about a defense exercise model that can be used to train cyber response specialists.", "title": "" }, { "docid": "57a2ef4a644f0fc385185a381f309fcd", "text": "Despite recent emergence of adversarial based methods for video prediction, existing algorithms often produce unsatisfied results in image regions with rich structural information (i.e., object boundary) and detailed motion (i.e., articulated body movement). To this end, we present a structure preserving video prediction framework to explicitly address above issues and enhance video prediction quality. On one hand, our framework contains a two-stream generation architecture which deals with high frequency video content (i.e., detailed object or articulated motion structure) and low frequency video content (i.e., location or moving directions) in two separate streams. On the other hand, we propose a RNN structure for video prediction, which employs temporal-adaptive convolutional kernels to capture time-varying motion patterns as well as tiny objects within a scene. 
Extensive experiments on diverse scenes, ranging from human motion to semantic layout prediction, demonstrate the effectiveness of the proposed video prediction approach.", "title": "" }, { "docid": "4f50fb108ba0e42ef1e61d00f847f3bf", "text": "This paper describes the use of decision tree and rule induction in data-mining applications. Of methods for classification and regression that have been developed in the fields of pattern recognition, statistics, and machine learning, these are of particular interest for data mining since they utilize symbolic and interpretable representations. Symbolic solutions can provide a high degree of insight into the decision boundaries that exist in the data, and the logic underlying them. This aspect makes these predictive-mining techniques particularly attractive in commercial and industrial data-mining applications. We present here a synopsis of some major state-of-the-art tree and rule mining methodologies, as well as some recent advances.", "title": "" }, { "docid": "d1525fdab295a16d5610210e80fb8104", "text": "The analysis of big data requires powerful, scalable, and accurate data analytics techniques that the traditional data mining and machine learning do not have as a whole. Therefore, new data analytics frameworks are needed to deal with the big data challenges such as volumes, velocity, veracity, variety of the data. Distributed data mining constitutes a promising approach for big data sets, as they are usually produced in distributed locations, and processing them on their local sites will reduce significantly the response times, communications, etc. In this paper, we propose to study the performance of a distributed clustering, called Dynamic Distributed Clustering (DDC). DDC has the ability to remotely generate clusters and then aggregate them using an efficient aggregation algorithm. The technique is developed for spatial datasets. We evaluated the DDC using two types of communications (synchronous and asynchronous), and tested using various load distributions. The experimental results show that the approach has super-linear speed-up, scales up very well, and can take advantage of the recent programming models, such as MapReduce model, as its results are not affected by the types of communications.", "title": "" }, { "docid": "1e1706e1bd58a562a43cc7719f433f4f", "text": "In this paper, we present the use of D-higraphs to perform HAZOP studies. D-higraphs is a formalism that includes in a single model the functional as well as the structural (ontological) components of any given system. A tool to perform a semi-automatic guided HAZOP study on a process plant is presented. The diagnostic system uses an expert system to predict the behavior modeled using D-higraphs. This work is applied to the study of an industrial case and its results are compared with other similar approaches proposed in previous studies. The analysis shows that the proposed methodology fits its purpose enabling causal reasoning that explains causes and consequences derived from deviations, it also fills some of the gaps and drawbacks existing in previous reported HAZOP assistant tools.", "title": "" }, { "docid": "8e5c07dc210a75619414130913030985", "text": "Flexible and stretchable electronics and optoelectronics configured in soft, water resistant formats uniquely address seminal challenges in biomedicine. 
Over the past decade, there has been enormous progress in the materials, designs, and manufacturing processes for flexible/stretchable system subcomponents, including transistors, amplifiers, bio-sensors, actuators, light emitting diodes, photodetector arrays, photovoltaics, energy storage elements, and bare die integrated circuits. Nanomaterials prepared using top-down processing approaches and synthesis-based bottom-up methods have helped resolve the intrinsic mechanical mismatch between rigid/planar devices and soft/curvilinear biological structures, thereby enabling a broad range of non-invasive, minimally invasive, and implantable systems to address challenges in biomedicine. Integration of therapeutic functional nanomaterials with soft bioelectronics demonstrates therapeutics in combination with unconventional diagnostics capabilities. Recent advances in soft materials, devices, and integrated systems are reviewes, with representative examples that highlight the utility of soft bioelectronics for advanced medical diagnostics and therapies.", "title": "" }, { "docid": "0cf1f63fd39c8c74465fad866958dac6", "text": "Software development organizations that have been employing capability maturity models, such as SW-CMM or CMMI for improving their processes are now increasingly interested in the possibility of adopting agile development methods. In the context of project management, what can we say about Scrum’s alignment with CMMI? The aim of our paper is to present the mapping between CMMI and the agile method Scrum, showing major gaps between them and identifying how organizations are adopting complementary practices in their projects to make these two approaches more compliant. This is useful for organizations that have a plan-driven process based on the CMMI model and are planning to improve the agility of processes or to help organizations to define a new project management framework based on both CMMI and Scrum practices.", "title": "" }, { "docid": "4eb27527c174bf7a31887a88f48ee423", "text": "Because of the increasing portability and wearability of noninvasive electrophysiological systems that record and process electrical signals from the human brain, automated systems for assessing changes in user cognitive state, intent, and response to events are of increasing interest. Brain-computer interface (BCI) systems can make use of such knowledge to deliver relevant feedback to the user or to an observer, or within a human-machine system to increase safety and enhance overall performance. Building robust and useful BCI models from accumulated biological knowledge and available data is a major challenge, as are technical problems associated with incorporating multimodal physiological, behavioral, and contextual data that may in the future be increasingly ubiquitous. While performance of current BCI modeling methods is slowly increasing, current performance levels do not yet support widespread uses. Here we discuss the current neuroscientific questions and data processing challenges facing BCI designers and outline some promising current and future directions to address them.", "title": "" }, { "docid": "7d4d0e4d99b5dfe675f5f4eff5e5679f", "text": "Remote work and intensive use of Information Technologies (IT) are increasingly common in organizations. At the same time, professional stress seems to develop. However, IS research has paid little attention to the relationships between these two phenomena. 
The purpose of this research in progress is to present a framework that introduces the influence of (1) new spatial and temporal constraints and of (2) intensive use of IT on employee emotions at work. Specifically, this paper relies on virtuality (e.g. Chudoba et al. 2005) and media richness (Daft and Lengel 1984) theories to determine the emotional consequences of geographically distributed work.", "title": "" }, { "docid": "2000c393acd11a31331d234fb56b8abd", "text": "This letter reports the fabrication of a GaN heterostructure field-effect transistor with oxide spacer placed on the mesa sidewalls. The presence of an oxide spacer effectively eliminates the gate leakage current that occurs at the channel edge, where the gate metal is in contact with the 2-D electron gas edge on the mesa sidewall. From the two-terminal gate leakage current measurements, the leakage current was found to be several nA at VG=-12 V and at VG=-450 V. The benefits of the proposed spacer scheme include the patterning of the metal electrodes by plasma etching and a lower manufacturing cost.", "title": "" }, { "docid": "efd1e2aa69306bde416065547585813b", "text": "Numerous approaches based on metrics, token sequence pattern-matching, abstract syntax tree (AST) or program dependency graph (PDG) analysis have already been proposed to highlight similarities in source code: in this paper we present a simple and scalable architecture based on AST fingerprinting. Thanks to a study of several hashing strategies reducing false-positive collisions, we propose a framework that efficiently indexes AST representations in a database, that quickly detects exact (w.r.t source code abstraction) clone clusters and that easily retrieves their corresponding ASTs. Our aim is to allow further processing of neighboring exact matches in order to identify the larger approximate matches, dealing with the common modification patterns seen in the intra-project copy-pastes and in the plagiarism cases.", "title": "" }, { "docid": "35a063ab339f32326547cc54bee334be", "text": "We present a model for attacking various cryptographic schemes by taking advantage of random hardware faults. The model consists of a black-box containing some cryptographic secret. The box interacts with the outside world by following a cryptographic protocol. The model supposes that from time to time the box is affected by a random hardware fault causing it to output incorrect values. For example, the hardware fault flips an internal register bit at some point during the computation. We show that for many digital signature and identification schemes these incorrect outputs completely expose the secrets stored in the box. We present the following results: (1) The secret signing key used in an implementation of RSA based on the Chinese Remainder Theorem (CRT) is completely exposed from a single erroneous RSA signature, (2) for non-CRT implementations of RSA the secret key is exposed given a large number (e.g. 1000) of erroneous signatures, (3) the secret key used in Fiat—Shamir identification is exposed after a small number (e.g. 10) of faulty executions of the protocol, and (4) the secret key used in Schnorr's identification protocol is exposed after a much larger number (e.g. 10,000) of faulty executions. Our estimates for the number of necessary faults are based on standard security parameters such as a 1024-bit modulus, and a 2 -40 identification error probability. Our results demonstrate the importance of preventing errors in cryptographic computations. 
We conclude the paper with various methods for preventing these attacks.", "title": "" }, { "docid": "be3204a5a4430cc3150bf0368a972e38", "text": "Deep learning has exploded in the public consciousness, primarily as predictive and analytical products suffuse our world, in the form of numerous human-centered smart-world systems, including targeted advertisements, natural language assistants and interpreters, and prototype self-driving vehicle systems. Yet to most, the underlying mechanisms that enable such human-centered smart products remain obscure. In contrast, researchers across disciplines have been incorporating deep learning into their research to solve problems that could not have been approached before. In this paper, we seek to provide a thorough investigation of deep learning in its applications and mechanisms. Specifically, as a categorical collection of state of the art in deep learning research, we hope to provide a broad reference for those seeking a primer on deep learning and its various implementations, platforms, algorithms, and uses in a variety of smart-world systems. Furthermore, we hope to outline recent key advancements in the technology, and provide insight into areas, in which deep learning can improve investigation, as well as highlight new areas of research that have yet to see the application of deep learning, but could nonetheless benefit immensely. We hope this survey provides a valuable reference for new deep learning practitioners, as well as those seeking to innovate in the application of deep learning.", "title": "" }, { "docid": "a00f39476d72dfd7e244c3588ced3ca5", "text": "---------------------------------------------------------------------***--------------------------------------------------------------------Abstract This paper holds a survey on leaf disease detection using various image processing technique. Digital image processing is fast, reliable and accurate technique for detection of diseases also various algorithms can be used for identification and classification of leaf diseases in plant. This paper presents techniques used by different author to identify disease such as clustering method, color base image analysis method, classifier and artificial neural network for classification of diseases. The main focus of our work is on the analysis of different leaf disease detection techniques and also provides an overview of different image processing techniques.", "title": "" }, { "docid": "b1b6e670f21479956d2bbe281c6ff556", "text": "Near real-time data from the MODIS satellite sensor was used to detect and trace a harmful algal bloom (HAB), or red tide, in SW Florida coastal waters from October to December 2004. MODIS fluorescence line height (FLH in W m 2 Am 1 sr ) data showed the highest correlation with near-concurrent in situ chlorophyll-a concentration (Chl in mg m ). For Chl ranging between 0.4 to 4 mg m 3 the ratio between MODIS FLH and in situ Chl is about 0.1 W m 2 Am 1 sr 1 per mg m 3 chlorophyll (Chl=1.255 (FLH 10), r =0.92, n =77). In contrast, the band-ratio chlorophyll product of either MODIS or SeaWiFS in this complex coastal environment provided false information. Errors in the satellite Chl data can be both negative and positive (3–15 times higher than in situ Chl) and these data are often inconsistent either spatially or temporally, due to interferences of other water constituents. 
The red tide that formed from November to December 2004 off SW Florida was revealed by MODIS FLH imagery, and was confirmed by field sampling to contain medium (10 to 10 cells L ) to high (>10 cells L ) concentrations of the toxic dinoflagellate Karenia brevis. The FLH imagery also showed that the bloom started in mid-October south of Charlotte Harbor, and that it developed and moved to the south and southwest in the subsequent weeks. Despite some artifacts in the data and uncertainty caused by factors such as unknown fluorescence efficiency, our results show that the MODIS FLH data provide an unprecedented tool for research and managers to study and monitor algal blooms in coastal environments. © 2005 Elsevier Inc. All rights reserved.", "title": "" } ]
scidocsrr
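Among the negative passages of the record that ends above is the result that a single faulty CRT-based RSA signature exposes the private key. A common way to see why is the gcd computation sketched below with deliberately tiny, insecure parameters; the specific fault model (corrupting only the mod-q half of the signature) and all numbers are illustrative assumptions, not details taken from the cited paper.

```python
from math import gcd

# Tiny, insecure RSA parameters purely for illustration.
p, q = 1009, 1013
N = p * q
e = 17
d = pow(e, -1, (p - 1) * (q - 1))
m = 123456 % N

# CRT signing: compute m^d modulo p and modulo q separately, then recombine.
sp, sq = pow(m, d % (p - 1), p), pow(m, d % (q - 1), q)
q_inv = pow(q, -1, p)


def crt_combine(sp_half, sq_half):
    h = (q_inv * (sp_half - sq_half)) % p
    return (sq_half + h * q) % N


s_good = crt_combine(sp, sq)
assert pow(s_good, e, N) == m  # correct signature verifies

# Faulty signature: assume a hardware glitch corrupts only the mod-q half.
s_bad = crt_combine(sp, (sq + 1) % q)

# The fault leaves s_bad^e == m (mod p) but not (mod q), so a gcd reveals p.
recovered = gcd((pow(s_bad, e, N) - m) % N, N)
print(recovered, recovered == p)  # -> 1009 True
```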
13ac666dc0a875c9b5dfe5c8a812f0fc
Deep View Morphing
[ { "docid": "d37dbb21f338016c49df79988e0226b0", "text": "We propose a multi-view depthmap estimation approach aimed at adaptively ascertaining the pixel level data associations between a reference image and all the elements of a source image set. Namely, we address the question, what aggregation subset of the source image set should we use to estimate the depth of a particular pixel in the reference image? We pose the problem within a probabilistic framework that jointly models pixel-level view selection and depthmap estimation given the local pairwise image photoconsistency. The corresponding graphical model is solved by EM-based view selection probability inference and PatchMatch-like depth sampling and propagation. Experimental results on standard multi-view benchmarks convey the state-of-the art estimation accuracy afforded by mitigating spurious pixel level data associations. Additionally, experiments on large Internet crowd sourced data demonstrate the robustness of our approach against unstructured and heterogeneous image capture characteristics. Moreover, the linear computational and storage requirements of our formulation, as well as its inherent parallelism, enables an efficient and scalable GPU-based implementation.", "title": "" } ]
[ { "docid": "bfdfac980d1629f85f5bd57705b11b19", "text": "Deduplication is an approach of avoiding storing data blocks with identical content, and has been shown to effectively reduce the disk space for storing multi-gigabyte virtual machine (VM) images. However, it remains challenging to deploy deduplication in a real system, such as a cloud platform, where VM images are regularly inserted and retrieved. We propose LiveDFS, a live deduplication file system that enables deduplication storage of VM images in an open-source cloud that is deployed under low-cost commodity hardware settings with limited memory footprints. LiveDFS has several distinct features, including spatial locality, prefetching of metadata, and journaling. LiveDFS is POSIXcompliant and is implemented as a Linux kernel-space file system. We deploy our LiveDFS prototype as a storage layer in a cloud platform based on OpenStack, and conduct extensive experiments. Compared to an ordinary file system without deduplication, we show that LiveDFS can save at least 40% of space for storing VM images, while achieving reasonable performance in importing and retrieving VM images. Our work justifies the feasibility of deploying LiveDFS in an open-source cloud.", "title": "" }, { "docid": "f76717050a5d891f63e475ba3e3ff955", "text": "Computational Advertising is the currently emerging multidimensional statistical modeling sub-discipline in digital advertising industry. Web pages visited per user every day is considerably increasing, resulting in an enormous access to display advertisements (ads). The rate at which the ad is clicked by users is termed as the Click Through Rate (CTR) of an advertisement. This metric facilitates the measurement of the effectiveness of an advertisement. The placement of ads in appropriate location leads to the rise in the CTR value that influences the growth of customer access to advertisement resulting in increased profit rate for the ad exchange, publishers and advertisers. Thus it is imperative to predict the CTR metric in order to formulate an efficient ad placement strategy. This paper proposes a predictive model that generates the click through rate based on different dimensions of ad placement for display advertisements using statistical machine learning regression techniques such as multivariate linear regression (LR), poisson regression (PR) and support vector regression(SVR). The experiment result reports that SVR based click model outperforms in predicting CTR through hyperparameter optimization.", "title": "" }, { "docid": "ee4416a05b955cdbd83b1819f0152665", "text": "relative densities of pharmaceutical solids play an important role in determining their performance (e.g., flow and compaction properties) in both tablet and capsule dosage forms. In this article, the authors report the densities of a wide variety of solid pharmaceutical formulations and intermediates. The variance of density with chemical structure, processing history, and dosage-form type is significant. This study shows that density can be used as an equipment-independent scaling parameter for several common drug-product manufacturing operations. any physical responses of powders, granules, and compacts such as powder flow and tensile strength are determined largely by their absolute and relative densities (1–8). Although measuring these properties is a simple task, a review of the literature reveals that a combined source of density data that formulation scientists can refer to does not exist. 
The purpose of this article is to provide such a reference source and to give insight about how these critical properties can be measured for common pharmaceutical solids and how they can be used for monitoring common drugproduct manufacturing operations.", "title": "" }, { "docid": "690603bd37dd8376893fc1bb1946fc03", "text": "Recently, the use of herbal medicines has been increased all over the world due to their therapeutic effects and fewer adverse effects as compared to the modern medicines. However, many herbal drugs and herbal extracts despite of their impressive in-vitro findings demonstrates less or negligible in-vivo activity due to their poor lipid solubility or improper molecular size, resulting in poor absorption and hence poor bioavailability. Nowadays with the advancement in the technology, novel drug delivery systems open the door towards the development of enhancing bioavailability of herbal drug delivery systems. For last one decade many novel carriers such as liposomes, microspheres, nanoparticles, transferosomes, ethosomes, lipid based systems etc. have been reported for successful modified delivery of various herbal drugs. Many herbal compounds including quercetin, genistein, naringin, sinomenine, piperine, glycyrrhizin and nitrile glycoside have demonstrated capability to enhance the bioavailability. The objective of this review is to summarize various available novel drug delivery technologies which have been developed for delivery of drugs (herbal), and to achieve better therapeutic response. An attempt has also been made to compile a profile on bioavailability enhancers of herbal origin with the mechanism of action (wherever reported) and studies on improvement in drug bioavailability, exhibited particularly by natural compounds.", "title": "" }, { "docid": "c75388c19397bf1e743970cb32649b17", "text": "In recent years, there has been a substantial amount of work on large-scale data analytics using Hadoop-based platforms running on large clusters of commodity machines. A lessexplored topic is how those data, dominated by application logs, are collected and structured to begin with. In this paper, we present Twitter’s production logging infrastructure and its evolution from application-specific logging to a unified “client events” log format, where messages are captured in common, well-formatted, flexible Thrift messages. Since most analytics tasks consider the user session as the basic unit of analysis, we pre-materialize “session sequences”, which are compact summaries that can answer a large class of common queries quickly. The development of this infrastructure has streamlined log collection and data analysis, thereby improving our ability to rapidly experiment and iterate on various aspects of the service.", "title": "" }, { "docid": "d1fa7cf9a48f1ad5502f6aec2981f79a", "text": "Despite the increasing use of social media platforms for information and news gathering, its unmoderated nature often leads to the emergence and spread of rumours, i.e., items of information that are unverified at the time of posting. At the same time, the openness of social media platforms provides opportunities to study how users share and discuss rumours, and to explore how to automatically assess their veracity, using natural language processing and data mining techniques. 
In this article, we introduce and discuss two types of rumours that circulate on social media: long-standing rumours that circulate for long periods of time, and newly emerging rumours spawned during fast-paced events such as breaking news, where reports are released piecemeal and often with an unverified status in their early stages. We provide an overview of research into social media rumours with the ultimate goal of developing a rumour classification system that consists of four components: rumour detection, rumour tracking, rumour stance classification, and rumour veracity classification. We delve into the approaches presented in the scientific literature for the development of each of these four components. We summarise the efforts and achievements so far toward the development of rumour classification systems and conclude with suggestions for avenues for future research in social media mining for the detection and resolution of rumours.", "title": "" }, { "docid": "23987d01051f470e26666d6db340018b", "text": "This paper presents a device that is able to reproduce atmospheric discharges in a small scale. First, there was simulated an impulse generator circuit that could meet the main characteristics of the common lightning strokes waveform. Later, four different generator circuits were developed with the selection made by a microcontroller. Finally, the output was subject to amplification circuits that increased its amplitude. The impulses generated had a very similar shape compared to the real atmospheric discharges to the international standards for impulse testing. The apparatus is meant for application in electric grounding systems and for tests in high frequency to measure the soil impedance.", "title": "" }, { "docid": "eaec7fb5490ccabd52ef7b4b5abd25f6", "text": "Automatic and reliable segmentation of the prostate is an important but difficult task for various clinical applications such as prostate cancer radiotherapy. The main challenges for accurate MR prostate localization lie in two aspects: (1) inhomogeneous and inconsistent appearance around prostate boundary, and (2) the large shape variation across different patients. To tackle these two problems, we propose a new deformable MR prostate segmentation method by unifying deep feature learning with the sparse patch matching. First, instead of directly using handcrafted features, we propose to learn the latent feature representation from prostate MR images by the stacked sparse auto-encoder (SSAE). Since the deep learning algorithm learns the feature hierarchy from the data, the learned features are often more concise and effective than the handcrafted features in describing the underlying data. To improve the discriminability of learned features, we further refine the feature representation in a supervised fashion. Second, based on the learned features, a sparse patch matching method is proposed to infer a prostate likelihood map by transferring the prostate labels from multiple atlases to the new prostate MR image. Finally, a deformable segmentation is used to integrate a sparse shape model with the prostate likelihood map for achieving the final segmentation. The proposed method has been extensively evaluated on the dataset that contains 66 T2-wighted prostate MR images. Experimental results show that the deep-learned features are more effective than the handcrafted features in guiding MR prostate segmentation. 
Moreover, our method shows superior performance than other state-of-the-art segmentation methods.", "title": "" }, { "docid": "71cc535dcae1b50f9fe3314f4140d916", "text": "Information and communications technology has fostered the rise of the sharing economy, enabling individuals to share excess capacity. In this paper, we focus on Airbnb.com, which is among the most prominent examples of the sharing economy. We take the perspective of an accommodation provider and investigate the concept of trust, which facilitates complete strangers to form temporal C2C relationships on Airbnb.com. In fact, the implications of trust in the sharing economy fundamentally differ to related online industries. In our research model, we investigate the formation of trust by incorporating two antecedents – ‘Disposition to trust’ and ‘Familiarity with Airbnb.com’. Furthermore, we differentiate between ‘Trust in Airbnb.com’ and ‘Trust in renters’ and examine their implications on two provider intentions. To seek support for our research model, we conducted a survey with 189 participants. The results show that both trust constructs are decisive to successfully initiate a sharing deal between two parties.", "title": "" }, { "docid": "f348748d56ee099c5f30a2629c878f37", "text": "Agency in interactive narrative is often narrowly understood as a user’s freedom to either perform virtually embodied actions or alter the mechanics of narration at will, followed by an implicit assumption of “the more agency the better.” This paper takes notice of a broader range of agency phenomena in interactive narrative and gaming that may be addressed by integrating accounts of agency from diverse fields such as sociology of science, digital media studies, philosophy, and cultural theory. The upshot is that narrative agency is contextually situated, distributed between the player and system, and mediated through user interpretation of system behavior and system affordances for user actions. In our new and developing model of agency play, multiple dimensions of agency can be tuned during story execution as a narratively situated mechanism to convey meaning. More importantly, we propose that this model of variable dimensions of agency can be used as an expressive theoretical tool for interactive narrative design. Finally, we present our current interactive narrative work under development as a case study for how the agency play model can be deployed expressively.", "title": "" }, { "docid": "5205a0cc19d6e1fe9d0521ba4e914cb5", "text": "AIMS\nTo investigate changes in left ventricular function in the first 6 months after acute myocardial infarction treated with primary angioplasty. To assess clinical variables, associated with recovery of left ventricular function after acute myocardial infarction.\n\n\nMETHODS\nChanges in left ventricular function were studied in 600 consecutive patients with acute myocardial infarction, all treated with primary angioplasty. Left ventricular ejection fraction was measured by radionuclide ventriculography in survivors at day 4 and after 6 months. Patients with a recurrent myocardial infarction within the 6 months were excluded.\n\n\nRESULTS\nSuccessful reperfusion (TIMI 3 flow) by primary angioplasty was achieved in 89% of patients. The mean ejection fraction at discharge was 43.7%+/-11.4, whereas the mean ejection fraction after 6 months was 46.3%+/-11.5 (P<0.01). During the 6 months, the mean relative improvement in left ventricular ejection fraction was 6%. 
An improvement in left ventricular function was observed in 48% of the patients; 25% of the patients had a decrease, whereas in the remaining patients there was no change. After univariate and multivariate analysis, an anterior infarction location, an ejection fraction at discharge < or =40% and single-vessel disease were significant predictors of left ventricular improvement during the 6 months.\n\n\nCONCLUSIONS\nAfter acute myocardial infarction treated with primary angioplasty there was a significant recovery of left ventricular function during the first 6 months after the infarction. An anterior myocardial infarction, single-vessel coronary artery disease, and an initially depressed left ventricular function were independently associated with recovery of left ventricular function. Multivessel disease was associated with absence of functional recovery. Additional studies, investigating complete revascularization are needed, as this approach may potentially improve long-term left ventricular function.", "title": "" }, { "docid": "3abdace0aaee5e68ad48045e267634fb", "text": "This paper presents a two-transformer active- clamping zero-voltage-switching (ZVS) flyback converter, which is mainly composed of two active-clamping flyback converters. [1]-[2] By utilizing two separate transformers[3], the proposed converter allows a low-profile design to be readily implemented while retaining the merits of a conventional single-transformer topology. The presented two-transformer active-clamping ZVS flyback converter can approximately share the total load current between two secondaries. Therefore the transformer copper loss and the rectifier diode conduction loss can be decreased. Detailed analysis and design of this new two-transformer active-clamping ZVS flyback converter are described.", "title": "" }, { "docid": "5f444ae08376785535b76aef298c39a4", "text": "Matrix approximation (MA) is one of the most popular techniques for collaborative filtering (CF). Most existing MA methods train user/item latent factors based on a user-item rating matrix and then use the global latent factors to model all users/items. However, globally optimized latent factors may not reflect the unique interests shared among only subsets of users/items, without which unique interests of users may not be accurately modelled. As a result, existing MA methods, which cannot capture the uniqueness of different user/item, cannot provide optimal recommendation. In this paper, a mixture probabilistic matrix approximation (MPMA) method is proposed, which unifies globally optimized user/item feature vectors (on the entire rating matrix) and locally optimized user/item feature vectors (on subsets of user/item ratings) to improve recommendation accuracy. More specifically, in MPMA, a method is developed to find both globally and locally optimized user/item feature vectors. Then, a Gaussian mixture model is adopted to combine global predictions and local predictions to produce accurate rating predictions. Experimental study using MovieLens and Netflix datasets demonstrates that MPMA outperforms five state-of-the-art MA based CF methods in recommendation accuracy with good scalability.", "title": "" }, { "docid": "68476d2f68f20a4c7910fe0811c5a2d8", "text": "The Resource Description Framework (RDF) and SPARQL Protocol and RDF Query Language (SPARQL) were introduced about a decade ago to enable flexible schema-free data interchange on the Semantic Web. 
Today, data scientists use the framework as a scalable graph representation for integrating, querying, exploring and analyzing data sets hosted at different sources. With increasing adoption, the need for graph mining capabilities for the Semantic Web has emerged. We address that need through implementation of three popular iterative Graph Mining algorithms (Triangle count, Connected component analysis, and PageRank). We implement these algorithms as SPARQL queries, wrapped within Python scripts. We evaluate the performance of our implementation on 6 real world data sets and show graph mining algorithms (that have a linear-algebra formulation) can indeed be unleashed on data represented as RDF graphs using the SPARQL query interface.", "title": "" }, { "docid": "d52ba071c7790478235e364fc1cfab83", "text": "We study parameter inference in large-scale latent variable models. We first propose a unified treatment of online inference for latent variable models from a non-canonical exponential family, and draw explicit links between several previously proposed frequentist or Bayesian methods. We then propose a novel inference method for the frequentist estimation of parameters, that adapts MCMC methods to online inference of latent variable models with the proper use of local Gibbs sampling. Then, for latent Dirichlet allocation,we provide an extensive set of experiments and comparisons with existing work, where our new approach outperforms all previously proposed methods. This work is currently under review for JMLR [1] (submitted on July, 27 2016).", "title": "" }, { "docid": "83c407843732c4d237ff6e07da40297f", "text": "Although deep reinforcement learning has achieved great success recently, there are still challenges in Real Time Strategy (RTS) games. Due to its large state and action space, as well as hidden information, RTS games require macro strategies as well as micro level manipulation to obtain satisfactory performance. In this paper, we present a novel hierarchical reinforcement learning model for mastering Multiplayer Online Battle Arena (MOBA) games, a sub-genre of RTS games. In this hierarchical framework, agents make macro strategies by imitation learning and do micromanipulations through reinforcement learning. Moreover, we propose a simple self-learning method to get better sample efficiency for reinforcement part and extract some global features by multi-target detection method in the absence of game engine or API. In 1v1 mode, our agent successfully learns to combat and defeat built-in AI with 100% win rate, and experiments show that our method can create a competitive multi-agent for a kind of mobile MOBA game King of Glory (KOG) in 5v5 mode.", "title": "" }, { "docid": "046f6c5cc6065c1cb219095fb0dfc06f", "text": "In this paper, we describe COLABA, a large effort to create resources and processing tools for Dialectal Arabic Blogs. We describe the objectives of the project, the process flow and the interaction between the different components. We briefly describe the manual annotation effort and the resources created. Finally, we sketch how these resources and tools are put together to create DIRA, a termexpansion tool for information retrieval over dialectal Arabic collections using Modern Standard Arabic queries.", "title": "" }, { "docid": "37c7c5a21de8d076773d8149c5dcf205", "text": "This paper presents a 1.6-mW 2.4-GHz receiver that operates from a single supply of 300 mV allowing direct powering from various energy harvesting sources. 
We extensively utilize transformer coupling between stages to reduce headroom requirements. We forward-bias bulk-source junctions to lower threshold voltages where appropriate. A single-ended 2.4 GHz RF input is amplified and converted to a differential signal before down-converting to a low IF of 1 to 10 MHz. A chain of IF amplifiers and narrowband filters are interleaved to perform programmable channel selection. The chip is fabricated in a 65-nm CMOS process. The receiver achieves -91.5-dBm sensitivity for a BER of 10e-3.", "title": "" }, { "docid": "aeb3e0b089e658b532b3ed6c626898dd", "text": "Semantics is seen as the key ingredient in the next phase of the Web infrastructure as well as the next generation of information systems applications. In this context, we review some of the reservations expressed about the viability of the Semantic Web. We respond to these by identifying a Semantic Technology that supports the key capabilities also needed to realize the Semantic Web vision, namely representing, acquiring and utilizing knowledge. Given that scalability is a key challenge, we briefly review our observations from developing three classes of real world applications and corresponding technology components: search/browsing, integration, and analytics. We distinguish this proven technology from some parts of the Semantic Web approach and offer subjective remarks which we hope will foster additional debate.", "title": "" }, { "docid": "5d606ffa39642e0d353b5a565383905f", "text": "Our comprehensive meta-analysis combined prevalence figures of childhood sexual abuse (CSA) reported in 217 publications published between 1980 and 2008, including 331 independent samples with a total of 9,911,748 participants. The overall estimated CSA prevalence was 127/1000 in self-report studies and 4/1000 in informant studies. Self-reported CSA was more common among female (180/1000) than among male participants (76/1000). Lowest rates for both girls (113/1000) and boys (41/1000) were found in Asia, and highest rates were found for girls in Australia (215/1000) and for boys in Africa (193/1000). The results of our meta-analysis confirm that CSA is a global problem of considerable extent, but also show that methodological issues drastically influence the self-reported prevalence of CSA.", "title": "" } ]
scidocsrr
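One negative passage in the record above describes expressing PageRank, triangle counting and connected components as SPARQL queries over RDF graphs. The sketch below is not that SPARQL formulation; it is only a compact power-iteration PageRank on a small invented graph, to show the fixed-point computation such queries ultimately have to encode.

```python
import numpy as np

def pagerank(adj, damping=0.85, tol=1e-10, max_iter=100):
    """Power-iteration PageRank on a dense adjacency matrix (toy scale only)."""
    n = adj.shape[0]
    out_deg = adj.sum(axis=1)
    # Column-stochastic transition matrix; dangling nodes jump uniformly.
    M = np.where(out_deg[:, None] > 0, adj / np.maximum(out_deg[:, None], 1), 1.0 / n).T
    r = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        r_next = damping * M @ r + (1 - damping) / n
        if np.abs(r_next - r).sum() < tol:
            break
        r = r_next
    return r_next

# 4-node toy graph: 0->1, 0->2, 1->2, 2->0, 3->2
A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [1, 0, 0, 0],
              [0, 0, 1, 0]], dtype=float)
print(pagerank(A).round(3))
```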
dce7f174f02edd8d0b61b5bf32291931
Bottom-Up and Top-Down Attention for Image Captioning and VQA
[ { "docid": "6af09f57f2fcced0117dca9051917a0d", "text": "We present a novel per-dimension learning rate method for gradient descent called ADADELTA. The method dynamically adapts over time using only first order information and has minimal computational overhead beyond vanilla stochastic gradient descent. The method requires no manual tuning of a learning rate and appears robust to noisy gradient information, different model architecture choices, various data modalities and selection of hyperparameters. We show promising results compared to other methods on the MNIST digit classification task using a single machine and on a large scale voice dataset in a distributed cluster environment.", "title": "" }, { "docid": "bb00d7cb37248e4974319ba8d5306bbe", "text": "Attention can be focused volitionally by \"top-down\" signals derived from task demands and automatically by \"bottom-up\" signals from salient stimuli. The frontal and parietal cortices are involved, but their neural activity has not been directly compared. Therefore, we recorded from them simultaneously in monkeys. Prefrontal neurons reflected the target location first during top-down attention, whereas parietal neurons signaled it earlier during bottom-up attention. Synchrony between frontal and parietal areas was stronger in lower frequencies during top-down attention and in higher frequencies during bottom-up attention. This result indicates that top-down and bottom-up signals arise from the frontal and sensory cortex, respectively, and different modes of attention may emphasize synchrony at different frequencies.", "title": "" }, { "docid": "06c0ee8d139afd11aab1cc0883a57a68", "text": "In this paper we describe the Microsoft COCO Caption dataset and evaluation server. When completed, the dataset will contain over one and a half million captions describing over 330,000 images. For the training and validation images, five independent human generated captions will be provided. To ensure consistency in evaluation of automatic caption generation algorithms, an evaluation server is used. The evaluation server receives candidate captions and scores them using several popular metrics, including BLEU, METEOR, ROUGE and CIDEr. Instructions for using the evaluation server are provided.", "title": "" }, { "docid": "97aab319e3d38d755860b141c5a4fa38", "text": "Automatically generating a natural language description of an image has attracted interests recently both because of its importance in practical applications and because it connects two major artificial intelligence fields: computer vision and natural language processing. Existing approaches are either top-down, which start from a gist of an image and convert it into words, or bottom-up, which come up with words describing various aspects of an image and then combine them. In this paper, we propose a new algorithm that combines both approaches through a model of semantic attention. Our algorithm learns to selectively attend to semantic concept proposals and fuse them into hidden states and outputs of recurrent neural networks. The selection and fusion form a feedback connecting the top-down and bottom-up computation. We evaluate our algorithm on two public benchmarks: Microsoft COCO and Flickr30K. Experimental results show that our algorithm significantly outperforms the state-of-the-art approaches consistently across different evaluation metrics.", "title": "" } ]
[ { "docid": "616ac87318a75c430149e254f4a0b931", "text": "Research on large shared medical datasets and data-driven research are gaining fast momentum and provide major opportunities for improving health systems as well as individual care. Such open data can shed light on the causes of disease and effects of treatment, including adverse reactions side-effects of treatments, while also facilitating analyses tailored to an individual's characteristics, known as personalized or \"stratified medicine.\" Developments, such as crowdsourcing, participatory surveillance, and individuals pledging to become \"data donors\" and the \"quantified self\" movement (where citizens share data through mobile device-connected technologies), have great potential to contribute to our knowledge of disease, improving diagnostics, and delivery of -healthcare and treatment. There is not only a great potential but also major concerns over privacy, confidentiality, and control of data about individuals once it is shared. Issues, such as user trust, data privacy, transparency over the control of data ownership, and the implications of data analytics for personal privacy with potentially intrusive inferences, are becoming increasingly scrutinized at national and international levels. This can be seen in the recent backlash over the proposed implementation of care.data, which enables individuals' NHS data to be linked, retained, and shared for other uses, such as research and, more controversially, with businesses for commercial exploitation. By way of contrast, through increasing popularity of social media, GPS-enabled mobile apps and tracking/wearable devices, the IT industry and MedTech giants are pursuing new projects without clear public and policy discussion about ownership and responsibility for user-generated data. In the absence of transparent regulation, this paper addresses the opportunities of Big Data in healthcare together with issues of responsibility and accountability. It also aims to pave the way for public policy to support a balanced agenda that safeguards personal information while enabling the use of data to improve public health.", "title": "" }, { "docid": "3c4def6958085fc4e9b3de48a1d4a9ec", "text": "We propose a novel approach to 3D human pose estimation from a single depth map. Recently, convolutional neural network (CNN) has become a powerful paradigm in computer vision. Many of computer vision tasks have benefited from CNNs, however, the conventional approach to directly regress 3D body joint locations from an image does not yield a noticeably improved performance. In contrast, we formulate the problem as estimating per-voxel likelihood of key body joints from a 3D occupancy grid. We argue that learning a mapping from volumetric input to volumetric output with 3D convolution consistently improves the accuracy when compared to learning a regression from depth map to 3D joint coordinates. We propose a two-stage approach to reduce the computational overhead caused by volumetric representation and 3D convolution: Holistic 2D prediction and Local 3D prediction. In the first stage, Planimetric Network (P-Net) estimates per-pixel likelihood for each body joint in the holistic 2D space. In the second stage, Volumetric Network (V-Net) estimates the per-voxel likelihood of each body joints in the local 3D space around the 2D estimations of the first stage, effectively reducing the computational cost. 
Our model outperforms existing methods by a large margin in publicly available datasets.", "title": "" }, { "docid": "0ee2d665f4b275bcc8187ab8b1fbae1b", "text": "With the rise of large-scale, Web-based applications, users are increasingly adopting a new class of document-oriented database management systems (DBMSs) that allow for rapid prototyping while also achieving scalable performance. Like for other distributed storage systems, replication is important for document DBMSs in order to guarantee availability. The network bandwidth required to keep replicas synchronized is expensive and is often a performance bottleneck. As such, there is a strong need to reduce the replication bandwidth, especially for geo-replication scenarios where wide-area network (WAN) bandwidth is limited.\n This paper presents a deduplication system called sDedup that reduces the amount of data transferred over the network for replicated document DBMSs. sDedup uses similarity-based deduplication to remove redundancy in replication data by delta encoding against similar documents selected from the entire database. It exploits key characteristics of document-oriented workloads, including small item sizes, temporal locality, and the incremental nature of document edits. Our experimental evaluation of sDedup with three real-world datasets shows that it is able to achieve up to 38X reduction in data sent over the network, significantly outperforming traditional chunk-based deduplication techniques while incurring negligible performance overhead.", "title": "" }, { "docid": "06ca9b3cdeeae59e67d25235ee410f73", "text": "Since many years ago, the scientific community is concerned about how to increase the accuracy of different classification methods, and major achievements have been made so far. Besides this issue, the increasing amount of data that is being generated every day by remote sensors raises more challenges to be overcome. In this work, a tool within the scope of InterIMAGE Cloud Platform (ICP), which is an open-source, distributed framework for automatic image interpretation, is presented. The tool, named ICP: Data Mining Package, is able to perform supervised classification procedures on huge amounts of data, usually referred as big data, on a distributed infrastructure using Hadoop MapReduce. The tool has four classification algorithms implemented, taken from WEKA’s machine learning library, namely: Decision Trees, Naïve Bayes, Random Forest and Support Vector Machines (SVM). The results of an experimental analysis using a SVM classifier on data sets of different sizes for different cluster configurations demonstrates the potential of the tool, as well as aspects that affect its performance. * Corresponding author", "title": "" }, { "docid": "04f4c18860a98284de6d6a7e66592336", "text": "According to published literature : “Actigraphy is a non-invasive method of monitoring human rest/activity cycles. A small actigraph unit, also called an actimetry sensor is worn for a week or more to measure gross motor activity. The unit is usually, in a wrist-watch-like package, worn on the wrist. The movements the actigraph unit undergoes are continually recorded and some units also measure light exposure. 
The data can be later read to a computer and analysed offline; in some brands of sensors the data are transmitted and analysed in real time.”[1-9].We are interested in focusing on the above mentioned research topic as per the title of this communication.Interested in suggesting an informatics and computational framework in the context of Actigraphy using ImageJ/Actigraphy Plugin by using JikesRVM as the Java Virtual Machine.", "title": "" }, { "docid": "d311bfc22c30e860c529b2aeb16b6d40", "text": "We study the emergence of communication in multiagent adversarial settings inspired by the classic Imitation game. A class of three player games is used to explore how agents based on sequence to sequence (Seq2Seq) models can learn to communicate information in adversarial settings. We propose a modeling approach, an initial set of experiments and use signaling theory to support our analysis. In addition, we describe how we operationalize the learning process of actor-critic Seq2Seq based agents in these communicational games.", "title": "" }, { "docid": "027e2fb796b09eed65fb36ff80e3fb60", "text": "Penile dysmorphic disorder (PDD) is shorthand for men diagnosed with body dysmorphic disorder, in whom the size or shape of the penis is their main, if not their exclusive, preoccupation causing significant shame or handicap. There are no specific measures for identifying men with PDD compared to men who are anxious about the size of their penis but do not have PDD. Such a measure might be helpful for treatment planning, reducing unrealistic expectations, and measuring outcome after any psychological or physical intervention. Our aim was, therefore, to validate a specific measure, termed the Cosmetic Procedure Screening Scale for PDD (COPS-P). Eighty-one male participants were divided into three groups: a PDD group (n = 21), a small penis anxiety group (n = 37), and a control group (n = 23). All participants completed the COPS-P as well as standardized measures of depression, anxiety, social phobia, body image, quality of life, and erectile function. Penis size was also measured. The final COPS-P was based on nine items. The scale had good internal reliability and significant convergent validity with measures of related constructs. It discriminated between the PDD group, the small penis anxiety group, and the control group. This is the first study to develop a scale able to discriminate between those with PDD and men anxious about their size who did not have PDD. Clinicians and researchers may use the scale as part of an assessment for men presenting with anxiety about penis size and as an audit or outcome measure after any intervention for this population.", "title": "" }, { "docid": "7ed693c8f8dfa62842304f4c6783af03", "text": "Indian Sign Language (ISL) or Indo-Pakistani Sign Language is possibly the prevalent sign language variety in South Asia used by at least several hundred deaf signers. It is different in the phonetics, grammar and syntax from other country’s sign languages. Since ISL got standardized only recently, there is very little research work that has happened in ISL recognition. Considering the challenges in ISL gesture recognition, a novel method for recognition of static signs of Indian sign language alphabets and numerals for Human Computer Interaction (HCI) has been proposed in this thesis work. 
The developed algorithm for the hand gesture recognition system in ISL formulates a vision-based approach, using the Two-Dimensional Discrete Cosine Transform (2D-DCT) for image compression and the Self-Organizing Map (SOM) or Kohonen Self Organizing Feature Map (SOFM) Neural Network for pattern recognition purpose, simulated in MATLAB. To design an efficient and user friendly hand gesture recognition system, a GUI model has been implemented. The main advantage of this algorithm is its high-speed processing capability and low computational requirements, in terms of both speed and memory utilization. KeywordsArtificial Neural Network, Hand Gesture Recognition, Human Computer Interaction (HCI), Indian Sign Language (ISL), Kohonen Self Organizing Feature Map (SOFM), Two-Dimensional Discrete Cosine Transform (2D-", "title": "" }, { "docid": "2227ed368491a2f20ca35df73616b00e", "text": "The aim of this mixed-methods exploratory study was to examine the relationship between narcissism, self-esteem and Instagram usage and was motivated by unsubstantiated media claims of increasing narcissism due to excessive use of social networks. A sample of 200 participants responded to an online survey which consisted of the Five Factor Narcissism Inventory (FFNI), the Rosenberg Self-Esteem scale, and the Instagram Usage, Behaviours, and Affective Responses Questionnaire (IUBARQ) constructed specifically for the purposes of this study. There was only weak evidence for any relationship between narcissism and Instagram usage, suggesting that media concerns are somewhat exaggerated. However the negative correlation between vulnerable narcissism and self-esteem warrants further examination.", "title": "" }, { "docid": "7b1cc7b3f8e31828900c4d53ab295db5", "text": "Unsupervised domain mapping aims to learn a function to translate domain X to Y by a function GXY in the absence of paired examples. Finding the optimal GXY without paired data is an ill-posed problem, so appropriate constraints are required to obtain reasonable solutions. One of the most prominent constraints is cycle consistency, which enforces the translated image by GXY to be translated back to the input image by an inverse mapping GY X . While cycle consistency requires the simultaneous training of GXY and GY X , recent studies have shown that one-sided domain mapping can be achieved by preserving pairwise distances between images. Although cycle consistency and distance preservation successfully constrain the solution space, they overlook the special properties of images that simple geometric transformations do not change the image’s semantic structure. Based on this special property, we develop a geometry-consistent generative adversarial network (GcGAN), which enables one-sided unsupervised domain mapping. GcGAN takes the original image and its counterpart image transformed by a predefined geometric transformation as inputs and generates two images in the new domain coupled with the corresponding geometry-consistency constraint. The geometryconsistency constraint reduces the space of possible solutions while keep the correct solutions in the search space. Quantitative and qualitative comparisons with the baseline (GAN alone) and the state-of-the-art methods including CycleGAN [62] and DistanceGAN [5] demonstrate the effectiveness of our method.", "title": "" }, { "docid": "d3de408430dfffb345db3f216a7d9eaa", "text": "Attacks on humans by large predators are rare, especially in Northern Europe. 
In cases of involvement of the craniocervical compartment, most of the attacks are not survived. We report on a case where the patient survived a tiger attack despite severe head trauma and discuss the circumstances leading to the patient’s survival and excellent outcome. The patient we report on is a 28-year-old tamer, who was attacked by three tigers during an evening show. A bite to the head resulted in multiple injuries including left-sided skull penetration wounds with dislocated fractures, dural perforations, and brain parenchyma lesions. The patient recovered without neurological deficits after initial ICU treatment. No infection occurred. In order to understand the mechanism of the tiger’s bite to the patient’s cranium, a simulation of the attack was performed using a human and a tiger skull put together at identical positions to the bite marks in a CT scan. It seems that during the bite, the animal was not able to clamp down on the patient’s skull between its canine teeth and therefore reduced bite forces were applied. Survival of an attack by a large predator that targeted the cervical–cranial compartment with an excellent outcome is not described in the literature. We were surprised to find only minor lesions of the brain parenchyma despite the obvious penetration of the skull by the tiger’s canines. This seems to be related to the specific dynamics of the cranial assault and the reduced forces applied to the patient’s head demonstrated in a 3D bite simulation.", "title": "" }, { "docid": "8fca64bb24d9adc445fec504ee8efa5a", "text": "In this paper, the permeation properties of three types of liquids into HTV silicone rubber with different Alumina Tri-hydrate (ATH) contents had been investigated by weight gain experiments. The influence of differing exposure conditions on the diffusion into silicone rubber, in particular the effect of solution type, solution concentration, and test temperature were explored. Experimental results indicated that the liquids permeation into silicone rubber obeyed anomalous diffusion ways instead of the Fick diffusion model. Moreover, higher temperature would accelerate the permeation process, and silicone rubber with higher ATH content absorbed more liquids than that with lower ATH content. Furthermore, the material properties of silicone rubber before and after liquid permeation were examined using Fourier infrared spectroscopy (FTIR), thermal gravimetric analysis (TGA) and scanning electron microscopy (SEM), respectively. The permeation mechanisms and process were discussed in depth by combining the weight gain experiment results and the material properties analyses.", "title": "" }, { "docid": "ad82f1ee8fd6239d8fe39778db838a6a", "text": "This article is motivated by a formulation of biotic self-organization in Friston (2013), where the emergence of \"life\" in coupled material entities (e.g., macromolecules) was predicated on bounded subsets that maintain a degree of statistical independence from the rest of the network. Boundary elements in such systems constitute a Markov blanket; separating the internal states of a system from its surrounding states. In this article, we ask whether Markov blankets operate in the nervous system and underlie the development of intelligence, enabling a progression from the ability to sense the environment to the ability to understand it. 
Markov blankets have been previously hypothesized to form in neuronal networks as a result of phase transitions that cause network subsets to fold into bounded assemblies, or packets (Yufik and Sheridan, 1997; Yufik, 1998a). The ensuing neuronal packets hypothesis builds on the notion of neuronal assemblies (Hebb, 1949, 1980), treating such assemblies as flexible but stable biophysical structures capable of withstanding entropic erosion. In other words, structures that maintain their integrity under changing conditions. In this treatment, neuronal packets give rise to perception of \"objects\"; i.e., quasi-stable (stimulus bound) feature groupings that are conserved over multiple presentations (e.g., the experience of perceiving \"apple\" can be interrupted and resumed many times). Monitoring the variations in such groups enables the apprehension of behavior; i.e., attributing to objects the ability to undergo changes without loss of self-identity. Ultimately, \"understanding\" involves self-directed composition and manipulation of the ensuing \"mental models\" that are constituted by neuronal packets, whose dynamics capture relationships among objects: that is, dependencies in the behavior of objects under varying conditions. For example, movement is known to involve rotation of population vectors in the motor cortex (Georgopoulos et al., 1988, 1993). The neuronal packet hypothesis associates \"understanding\" with the ability to detect and generate coordinated rotation of population vectors-in neuronal packets-in associative cortex and other regions in the brain. The ability to coordinate vector representations in this way is assumed to have developed in conjunction with the ability to postpone overt motor expression of implicit movement, thus creating a mechanism for prediction and behavioral optimization via mental modeling that is unique to higher species. This article advances the notion that Markov blankets-necessary for the emergence of life-have been subsequently exploited by evolution and thus ground the ways that living organisms adapt to their environment, culminating in their ability to understand it.", "title": "" }, { "docid": "7c3f14bbbb3cf2bbe7c9caaf42361445", "text": "In this paper, we present a method for generating fast conceptual urban design prototypes. We synthesize spatial configurations for street networks, parcels and building volumes. Therefore, we address the problem of implementing custom data structures for these configurations and how the generation process can be controlled and parameterized. We exemplify our method by the development of new components for Grasshopper/Rhino3D and their application in the scope of selected case studies. By means of these components, we show use case applications of the synthesis algorithms. In the conclusion, we reflect on the advantages of being able to generate fast urban design prototypes, but we also discuss the disadvantages of the concept and the usage of Grasshopper as a user interface.", "title": "" }, { "docid": "73577f4be1e148387ce747546c31b161", "text": "Previous models for the assessment of commitment towards a predicate in a sentence (also known as factuality prediction) were trained and tested against a specific annotated dataset, subsequently limiting the generality of their results. In this work we propose an intuitive method for mapping three previously annotated corpora onto a single factuality scale, thereby enabling models to be tested across these corpora. 
In addition, we design a novel model for factuality prediction by first extending a previous rule-based factuality prediction system and applying it over an abstraction of dependency trees, and then using the output of this system in a supervised classifier. We show that this model outperforms previous methods on all three datasets. We make both the unified factuality corpus and our new model publicly available.", "title": "" }, { "docid": "678558c9c8d629f98b77a61082bd9b95", "text": "Internet of Things (IoT) makes all objects become interconnected and smart, which has been recognized as the next technological revolution. As its typical case, IoT-based smart rehabilitation systems are becoming a better way to mitigate problems associated with aging populations and shortage of health professionals. Although it has come into reality, critical problems still exist in automating design and reconfiguration of such a system enabling it to respond to the patient's requirements rapidly. This paper presents an ontology-based automating design methodology (ADM) for smart rehabilitation systems in IoT. Ontology aids computers in further understanding the symptoms and medical resources, which helps to create a rehabilitation strategy and reconfigure medical resources according to patients' specific requirements quickly and automatically. Meanwhile, IoT provides an effective platform to interconnect all the resources and provides immediate information interaction. Preliminary experiments and clinical trials demonstrate valuable information on the feasibility, rapidity, and effectiveness of the proposed methodology.", "title": "" }, { "docid": "701fe507d3efe69f82f040967d6e246f", "text": "The performance of the brain is constrained by wiring length and maintenance costs. The apparently inverse relationship between number of neurons in the various interneuron classes and the spatial extent of their axon trees suggests a mathematically definable organization, reminiscent of 'small-world' or scale-free networks observed in other complex systems. The wiring-economy-based classification of cortical inhibitory interneurons is supported by the distinct physiological patterns of class members in the intact brain. The complex wiring of diverse interneuron classes could represent an economic solution for supporting global synchrony and oscillations at multiple timescales with minimum axon length.", "title": "" }, { "docid": "60a9030ddf88347f9a75ce24f52f9768", "text": "The phenotype of patients with a chromosome 1q43q44 microdeletion (OMIM; 612337) is characterized by intellectual disability with no or very limited speech, microcephaly, growth retardation, a recognizable facial phenotype, seizures, and agenesis of the corpus callosum. Comparison of patients with different microdeletions has previously identified ZBTB18 (ZNF238) as a candidate gene for the 1q43q44 microdeletion syndrome. Mutations in this gene have not yet been described. We performed exome sequencing in a patient with features of the 1q43q44 microdeletion syndrome that included short stature, microcephaly, global developmental delay, pronounced speech delay, and dysmorphic facial features. A single de novo non-sense mutation was detected, which was located in ZBTB18. This finding is consistent with an important role for haploinsufficiency of ZBTB18 in the phenotype of chromosome 1q43q44 microdeletions. The corpus callosum is abnormal in mice with a brain-specific knock-out of ZBTB18. 
Similarly, most (but not all) patients with the 1q43q44 microdeletion syndrome have agenesis or hypoplasia of the corpus callosum. In contrast, the patient with a ZBTB18 point mutation reported here had a structurally normal corpus callosum on brain MRI. Incomplete penetrance or haploinsufficiency of other genes from the critical region may explain the absence of corpus callosum agenesis in this patient with a ZBTB18 point mutation. The findings in this patient with a mutation in ZBTB18 will contribute to our understanding of the 1q43q44 microdeletion syndrome.", "title": "" }, { "docid": "abca2da2772fa97aee12110b4cb7ff18", "text": "The key challenge of intelligent fault diagnosis is to develop features that can distinguish different categories. Because of the unique properties of mechanical data, predetermined features based on prior knowledge are usually used as inputs for fault classification. However, proper selection of features often requires expertise knowledge and becomes more difficult and time consuming when volume of data increases. In this paper, a novel deep learning network (LiftingNet) is proposed to learn features adaptively from raw mechanical data without prior knowledge. Inspired by convolutional neural network and second generation wavelet transform, the LiftingNet is constructed to classify mechanical data even though inputs contain considerable noise and randomness. The LiftingNet consists of split layer, predict layer, update layer, pooling layer, and full-connection layer. Different kernel sizes are allowed in convolutional layers to improve learning ability. As a multilayer neural network, deep features are learned from shallow ones to represent complex structures in raw data. Feasibility and effectiveness of the LiftingNet is validated by two motor bearing datasets. Results show that the proposed method could achieve layerwise feature learning and successfully classify mechanical data even with different rotating speed and under the influence of random noise.", "title": "" }, { "docid": "9db3c4995c3b8eeca0521bf4d5824f04", "text": "A The potential ecological effects of rising levels of heavy metals concentrations in the environment are of great concern due to their highly bioaccumulative nature, persistent behavior and higher toxicity. These chemicals biomagnify in the food chain and impose various toxic effects in aquatic organisms. Molluscs reflect the higher degree of environmental contamination by heavy metals and are the most useful bioindicator tools. Several studies and research work have been cited to establish and evaluate the relationship between metal contents of water column, sediment fractions, suspended matter and mollusc tissue concentrations. The metals body burden in molluscs may reflect the concentrations of metals in surrounding water and sediment, and may thus be an indication of quality of the surrounding environment. The objectives of this work are to gather more information on the use of different species of molluscs as cosmopolitan bioindicators for heavy metal pollution in aquatic", "title": "" } ]
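One passage in the list above (the LiftingNet abstract) names its layers after the split, predict, and update steps of the second-generation wavelet transform. As background only, and assuming the classical Haar lifting step rather than anything from that paper, here is a minimal sketch of where those names come from:

```python
import numpy as np

def haar_lifting_step(signal):
    """One split-predict-update step of the classical lifting scheme.

    Assumes an even-length 1D signal. LiftingNet replaces the fixed
    predict/update operators below with learned convolutional layers;
    only the generic scheme is shown here.
    """
    even, odd = signal[0::2], signal[1::2]   # split into even/odd samples
    detail = odd - even                      # predict odd samples from even ones
    approx = even + 0.5 * detail             # update so approx keeps the local mean
    return approx, detail

x = np.array([4.0, 6.0, 10.0, 2.0])
approx, detail = haar_lifting_step(x)
print(approx)  # [5. 6.]   pairwise means
print(detail)  # [ 2. -8.] pairwise differences
```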
scidocsrr
0ef95c8681673c396c3a79c1e0f2300d
Scene labeling with LSTM recurrent neural networks
[ { "docid": "350c899dbd0d9ded745b70b6f5e97d19", "text": "We propose an approach to include contextual features for labeling images, in which each pixel is assigned to one of a finite set of labels. The features are incorporated into a probabilistic framework, which combines the outputs of several components. Components differ in the information they encode. Some focus on the image-label mapping, while others focus solely on patterns within the label field. Components also differ in their scale, as some focus on fine-resolution patterns while others on coarser, more global structure. A supervised version of the contrastive divergence algorithm is applied to learn these features from labeled image data. We demonstrate performance on two real-world image databases and compare it to a classifier and a Markov random field.", "title": "" } ]
[ { "docid": "861e2a3c19dafdd3273dc718416309c2", "text": "For the last 40 years high - capacity Unmanned Air Vehicles have been use mostly for military services such as tracking, surveillance, engagement with active weapon or in the simplest term for data acquisition purpose. Unmanned Air Vehicles are also demanded commercially because of their advantages in comparison to manned vehicles such as their low manufacturing and operating cost, configuration flexibility depending on customer request, not risking pilot in the difficult missions. Nevertheless, they have still open issues such as integration to the manned flight air space, reliability and airworthiness. Although Civil Unmanned Air Vehicles comprise 3% of the UAV market, it is estimated that they will reach 10% level within the next 5 years. UAV systems with their useful equipment (camera, hyper spectral imager, air data sensors and with similar equipment) have been in use more and more for civil applications: Tracking and monitoring in the event of agriculture / forest / marine pollution / waste / emergency and disaster situations; Mapping for land registry and cadastre; Wildlife and ecologic monitoring; Traffic Monitoring and; Geology and mine researches. They can bring minimal risk and cost advantage to many civil applications, in which it was risky and costly to use manned air vehicles before. When the cost of Unmanned Air Vehicles designed and produced for military service is taken into account, civil market demands lower cost and original products which are suitable for civil applications. Most of civil applications which are mentioned above require UAVs that are able to take off and land on limited runway, and moreover move quickly in the operation region for mobile applications but hover for immobile measurement and tracking when necessary. This points to a hybrid unmanned vehicle concept optimally, namely the Vertical Take Off and Landing (VTOL) UAVs. At the same time, this system requires an efficient cost solution for applicability / convertibility for different civil applications. It means an Air Vehicle having easily portability of payload depending on application concept and programmability of operation (hover and cruise flight time) specific to the application. The main topic of this project is designing, producing and testing the TURAC VTOL UAV that have the following features : Vertical takeoff and landing, and hovering like helicopter ; High cruise speed and fixed-wing ; Multi-functional and designed for civil purpose ; The project involves two different variants ; The TURAC A variant is a fully electrical platform which includes 2 tilt electric motors in the front, and a fixed electric motor and ducted fan in the rear ; The TURAC B variant uses fuel cells.", "title": "" }, { "docid": "57c705e710f99accab3d9242fddc5ac8", "text": "Although much research has been conducted in the area of organizational commitment, few studies have explicitly examined how organizations facilitate commitment among members. Using a sample of 291 respondents from 45 firms, the results of this study show that rigorous recruitment and selection procedures and a strong, clear organizational value system are associated with higher levels of employee commitment based on internalization and identification. 
Strong organizational career and reward systems are related to higher levels of instrumental or compliance-based commitment.", "title": "" }, { "docid": "7bb04f2163e253068ac665f12a5dd35c", "text": "Automatic segmentation of the liver and hepatic lesions is an important step towards deriving quantitative biomarkers for accurate clinical diagnosis and computer-aided decision support systems. This paper presents a method to automatically segment liver and lesions in CT and MRI abdomen images using cascaded fully convolutional neural networks (CFCNs) enabling the segmentation of large-scale medical trials and quantitative image analyses. We train and cascade two FCNs for the combined segmentation of the liver and its lesions. As a first step, we train an FCN to segment the liver as ROI input for a second FCN. The second FCN solely segments lesions within the predicted liver ROIs of step 1. CFCN models were trained on an abdominal CT dataset comprising 100 hepatic tumor volumes. Validation results on further datasets show that CFCN-based semantic liver and lesion segmentation achieves Dice scores over 94% for the liver with computation times below 100s per volume. We further experimentally demonstrate the robustness of the proposed method on 38 MRI liver tumor volumes and the public 3DIRCAD dataset.", "title": "" }, { "docid": "4b7ffae0dfa7e43b5456ec08fbd0824e", "text": "METHODS\nIn this study of patients who underwent internal fixation without fusion for a burst thoracolumbar or lumbar fracture, we compared the serial changes in the injured disc height (DH), and the fractured vertebral body height (VBH) and kyphotic angle between patients in whom the implants were removed and those in whom they were not. Radiological parameters such as injured DH, fractured VBH and kyphotic angle were measured. Functional outcomes were evaluated using the Greenough low back outcome scale and a VAS scale for pain.\n\n\nRESULTS\nBetween June 1996 and May 2012, 69 patients were analysed retrospectively; 47 were included in the implant removal group and 22 in the implant retention group. After a mean follow-up of 66 months (48 to 107), eight patients (36.3%) in the implant retention group had screw breakage. There was no screw breakage in the implant removal group. All radiological and functional outcomes were similar between these two groups. Although solid union of the fractured vertebrae was achieved, the kyphotic angle and the anterior third of the injured DH changed significantly with time (p < 0.05).\n\n\nDISCUSSION\nThe radiological and functional outcomes of both implant removal and retention were similar. Although screw breakage may occur, the implants may not need to be removed.\n\n\nTAKE HOME MESSAGE\nImplant removal may not be needed for patients with burst fractures of the thoracolumbar and lumbar spine after fixation without fusion. However, information should be provided beforehand regarding the possibility of screw breakage.", "title": "" }, { "docid": "4c588e5f05c3e4c2f3b974306095af02", "text": "Software Development Life Cycle (SDLC) is a model which provides us the basic information about the methods/techniques to develop software. It is concerned with the software management processes that examine the area of software development through the development models, which are known as software development life cycle. 
There are many development models namely Waterfall model, Iterative model, V-shaped model, Spiral model, Extreme programming, Iterative and Incremental Method, Rapid prototyping model and Big Bang Model. This is paper is concerned with the study of these different software development models and to compare their advantages and disadvantages.", "title": "" }, { "docid": "2c3c227a8fd9f2a96e61549b962d3741", "text": "Developmental dyslexia is an unexplained inability to acquire accurate or fluent reading that affects approximately 5-17% of children. Dyslexia is associated with structural and functional alterations in various brain regions that support reading. Neuroimaging studies in infants and pre-reading children suggest that these alterations predate reading instruction and reading failure, supporting the hypothesis that variant function in dyslexia susceptibility genes lead to atypical neural migration and/or axonal growth during early, most likely in utero, brain development. Yet, dyslexia is typically not diagnosed until a child has failed to learn to read as expected (usually in second grade or later). There is emerging evidence that neuroimaging measures, when combined with key behavioral measures, can enhance the accuracy of identification of dyslexia risk in pre-reading children but its sensitivity, specificity, and cost-efficiency is still unclear. Early identification of dyslexia risk carries important implications for dyslexia remediation and the amelioration of the psychosocial consequences commonly associated with reading failure.", "title": "" }, { "docid": "d5002f61112022bf6026ccd01a3fd82f", "text": "Software engineering data (such as code bases, execution traces, historical code changes, mailing lists, and bug databases) contains a wealth of information about a project's status, progress, and evolution. Using well-established data mining techniques, practitioners and researchers have started exploring the potential of this valuable data in order to better manage their projects and to produce higher quality software systems that are delivered on time and within budget.\n This tutorial presents the latest research in mining software engineering data, discusses challenges associated with mining software engineering data, highlights success stories of mining software engineering data, and outlines future research directions. Attendees will acquire the knowledge and skills needed to integrate the mining of software engineering data in their own research or practice. This tutorial builds on several successful offerings at ICSE since 2007.", "title": "" }, { "docid": "b31aaa6805524495f57a2f54d0dd86f1", "text": "CLINICAL HISTORY A 54-year-old white female was seen with a 10-year history of episodes of a burning sensation of the left ear. The episodes are preceded by nausea and a hot feeling for about 15 seconds and then the left ear becomes visibly red for an average of about 1 hour, with a range from about 30 minutes to 2 hours. About once every 2 years, she would have a flurry of episodes occurring over about a 1-month period during which she would average about five episodes with a range of 1 to 6. There was also an 18-year history of migraine without aura occurring about once a year. At the age of 36 years, she developed left-sided pulsatile tinnitus. A cerebral arteriogram revealed a proximal left internal carotid artery occlusion of uncertain etiology after extensive testing. An MRI scan at the age of 45 years was normal. Neurological examination was normal. 
A carotid ultrasound study demonstrated complete occlusion of the left internal carotid artery and a normal right. Question.—What is the diagnosis?", "title": "" }, { "docid": "0249db106163559e34ff157ad6d45bf5", "text": "We present an interpolation-based planning and replanning algorithm for generating low-cost paths through uniform and nonuniform resolution grids. Most grid-based path planners use discrete state transitions that artificially constrain an agent’s motion to a small set of possible headings e.g., 0, /4, /2, etc. . As a result, even “optimal” gridbased planners produce unnatural, suboptimal paths. Our approach uses linear interpolation during planning to calculate accurate path cost estimates for arbitrary positions within each grid cell and produce paths with a range of continuous headings. Consequently, it is particularly well suited to planning low-cost trajectories for mobile robots. In this paper, we introduce a version of the algorithm for uniform resolution grids and a version for nonuniform resolution grids. Together, these approaches address two of the most significant shortcomings of grid-based path planning: the quality of the paths produced and the memory and computational requirements of planning over grids. We demonstrate our approaches on a number of example planning problems, compare them to related algorithms, and present several implementations on real robotic systems. © 2006 Wiley Periodicals, Inc.", "title": "" }, { "docid": "893a8c073b8bd935fbea419c0f3e0b17", "text": "Computing as a service model in cloud has encouraged High Performance Computing to reach out to wider scientific and industrial community. Many small and medium scale HPC users are exploring Infrastructure cloud as a possible platform to run their applications. However, there are gaps between the characteristic traits of an HPC application and existing cloud scheduling algorithms. In this paper, we propose an HPC-aware scheduler and implement it atop Open Stack scheduler. In particular, we introduce topology awareness and consideration for homogeneity while allocating VMs. We demonstrate the benefits of these techniques by evaluating them on a cloud setup on Open Cirrus test-bed.", "title": "" }, { "docid": "a2b3cdf440dd6aa139ea51865d8f81cc", "text": "Hyperspectral image (HSI) classification is a hot topic in the remote sensing community. This paper proposes a new framework of spectral-spatial feature extraction for HSI classification, in which for the first time the concept of deep learning is introduced. Specifically, the model of autoencoder is exploited in our framework to extract various kinds of features. First we verify the eligibility of autoencoder by following classical spectral information based classification and use autoencoders with different depth to classify hyperspectral image. Further in the proposed framework, we combine PCA on spectral dimension and autoencoder on the other two spatial dimensions to extract spectral-spatial information for classification. The experimental results show that this framework achieves the highest classification accuracy among all methods, and outperforms classical classifiers such as SVM and PCA-based SVM.", "title": "" }, { "docid": "0a26e03606cdf93de0958c01ca4c693a", "text": "A bidirectional full-bridge LLC resonant converter with a new symmetric LLC-type resonant network using a digital control scheme is proposed for a 380V dc power distribution system. 
This converter can operate under high power conversion efficiency since the symmetric LLC resonant network has zero voltage switching capability for primary power switches and soft commutation capability for output rectifiers. In addition, the proposed topology does not require any clamp circuits to reduce the voltage stress of the switches because the switch voltage of the primary inverting stage is confined by the input voltage, and that of the secondary rectifying stage is limited by the output voltage. Therefore, the power conversion efficiency of any directions is exactly the same as each other. In addition, intelligent digital control schemes such as dead-band control and switch transition control are proposed to regulate output voltage for any power flow directions. A prototype converter designed for a high-frequency galvanic isolation of 380V dc buses was developed with a rated power rating of 5kW using a digital signal processor to verify the performance of the proposed topology and algorithms. The maximum power conversion efficiency was 97.8% during bidirectional operations.", "title": "" }, { "docid": "ae956d5e1182986505ff8b4de8b23777", "text": "Device classification is important for many applications such as industrial quality controls, through-wall imaging, and network security. A novel approach to detection is proposed using a random noise radar (RNR), coupled with Radio Frequency “Distinct Native Attribute (RF-DNA)” fingerprinting processing algorithms to non-destructively interrogate microwave devices. RF-DNA has previously demonstrated “serial number” discrimination of passive Radio Frequency (RF) emissions such as Orthogonal Frequency Division Multiplexed (OFDM) signals, Worldwide Interoperability for Microwave Access (WiMAX) signals and others with classification accuracies above 80% using a Multiple Discriminant Analysis/Maximum Likelihood (MDAML) classifier. This approach proposes to couple the classification successes of the RF-DNA fingerprint processing with a non-destructive active interrogation waveform. An Ultra Wideband (UWB) noise waveform is uniquely suitable as an active interrogation method since it will not cause damage to sensitive microwave components and multiple RNRs can operate simultaneously in close proximity, allowing for significant parallelization of detection systems.", "title": "" }, { "docid": "6206b0e393c54cfe2921604c2405bfeb", "text": "and fibrils along a greater length of tendon would follow, converting the new fibrils bridging the repair site into fibers and bundles. The discrete mass of collagen needed to completely heal a transected tendon would be quite small relative to the total collagen mass of the tendon; this is consistent with previous findings (Goldfarb et al., 2001). The tendon could be the sole source of this existing collagen with little effect on its overall strength once the remodelling process was complete, decreasing the need for new collagen synthesis. Postoperative ruptures at or adjacent to the healing site, the hand surgeon’s feared yet poorly understood complication of flexor tendon repair, may be explained by the inherent weakness caused by the recycling process itself. Biochemical modifications at the time of repair may evolve through an improved understanding of the seemingly paradoxical role of collagen fibril segment recycling in temporarily weakening healing tendon so that it may be strengthened. The principal author wishes to thank H.P. Ehrlich, PhD, Hershey, P.A. and M.R. 
Forough, PhD, Seattle, WA, for their comments and insight, and the late H. Sittertz-Bhatkar, PhD, for the images herein.", "title": "" }, { "docid": "d5330d3045a27f2c59ef01903b87a54e", "text": "Industrial Control and SCADA (Supervisory Control and Data Acquisition) networks control critical infrastructure such as power plants, nuclear facilities, and water supply systems. These systems are increasingly the target of cyber attacks by threat actors of different kinds, with successful attacks having the potential to cause damage, cost and injury/loss of life. As a result, there is a strong need for enhanced tools to detect cyber threats in SCADA networks. This paper makes a number of contributions to advance research in this area. First, we study the level of support for SCADA protocols in well-known open source intrusion detection systems (IDS). Second, we select a specific IDS, Suricata, and enhance it to include support for detecting threats against SCADA systems running the EtherNet/IP (ENIP) industrial control protocol. Finally, we conduct a traffic-based study to evaluate the performance of the new ENIP module in Suricata - analyzing its performance in low performance hardware systems.", "title": "" }, { "docid": "de754b316e18ec06f8c8bf944e0669ad", "text": "We present PubMed 200k RCT1, a new dataset based on PubMed for sequential sentence classification. The dataset consists of approximately 200,000 abstracts of randomized controlled trials, totaling 2.3 million sentences. Each sentence of each abstract is labeled with their role in the abstract using one of the following classes: background, objective, method, result, or conclusion. The purpose of releasing this dataset is twofold. First, the majority of datasets for sequential shorttext classification (i.e., classification of short texts that appear in sequences) are small: we hope that releasing a new large dataset will help develop more accurate algorithms for this task. Second, from an application perspective, researchers need better tools to efficiently skim through the literature. Automatically classifying each sentence in an abstract would help researchers read abstracts more efficiently, especially in fields where abstracts may be long, such as the medical field.", "title": "" }, { "docid": "86aeb2e62f01f64cc73f6d2ff764e1d7", "text": "This paper aims to make two contributions to the sustainability transitions literature, in particular the Geels and Schot (2007. Res. Policy 36(3), 399) transition pathways typology. First, it reformulates and differentiates the typology through the lens of endogenous enactment, identifying the main patterns for actors, formal institutions, and technologies. Second, it suggests that transitions may shift between pathways, depending on struggles over technology deployment and institutions. Both contributions are demonstrated with a comparative analysis of unfolding low-carbon electricity transitions in Germany and the UK between 1990–2014. The analysis shows that Germany is on a substitution pathway, enacted by new entrants deploying small-scale renewable electricity technologies (RETs), while the UK is on a transformation pathway, enacted by incumbent actors deploying large-scale RETs. Further analysis shows that the German transition has recently shifted from a ‘stretch-and-transform’ substitution pathway to a ‘fit-and-conform’ pathway, because of a fightback from utilities and altered institutions. 
It also shows that the UK transition moved from moderate to substantial incumbent reorientation, as government policies became stronger. Recent policy changes, however, substantially downscaled UK renewables support, which is likely to shift the transition back to weaker reorientation. © 2016 The Authors. Published by Elsevier B.V. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).", "title": "" }, { "docid": "be3204a5a4430cc3150bf0368a972e38", "text": "Deep learning has exploded in the public consciousness, primarily as predictive and analytical products suffuse our world, in the form of numerous human-centered smart-world systems, including targeted advertisements, natural language assistants and interpreters, and prototype self-driving vehicle systems. Yet to most, the underlying mechanisms that enable such human-centered smart products remain obscure. In contrast, researchers across disciplines have been incorporating deep learning into their research to solve problems that could not have been approached before. In this paper, we seek to provide a thorough investigation of deep learning in its applications and mechanisms. Specifically, as a categorical collection of state of the art in deep learning research, we hope to provide a broad reference for those seeking a primer on deep learning and its various implementations, platforms, algorithms, and uses in a variety of smart-world systems. Furthermore, we hope to outline recent key advancements in the technology, and provide insight into areas, in which deep learning can improve investigation, as well as highlight new areas of research that have yet to see the application of deep learning, but could nonetheless benefit immensely. We hope this survey provides a valuable reference for new deep learning practitioners, as well as those seeking to innovate in the application of deep learning.", "title": "" }, { "docid": "b5c64ddf3be731a281072a21700a85ee", "text": "This paper addresses the problem of joint detection and recounting of abnormal events in videos. Recounting of abnormal events, i.e., explaining why they are judged to be abnormal, is an unexplored but critical task in video surveillance, because it helps human observers quickly judge if they are false alarms or not. To describe the events in the human-understandable form for event recounting, learning generic knowledge about visual concepts (e.g., object and action) is crucial. Although convolutional neural networks (CNNs) have achieved promising results in learning such concepts, it remains an open question as to how to effectively use CNNs for abnormal event detection, mainly due to the environment-dependent nature of the anomaly detection. In this paper, we tackle this problem by integrating a generic CNN model and environment-dependent anomaly detectors. Our approach first learns CNN with multiple visual tasks to exploit semantic information that is useful for detecting and recounting abnormal events. By appropriately plugging the model into anomaly detectors, we can detect and recount abnormal events while taking advantage of the discriminative power of CNNs. Our approach outperforms the state-of-the-art on Avenue and UCSD Ped2 benchmarks for abnormal event detection and also produces promising results of abnormal event recounting.", "title": "" }, { "docid": "54afd49e0853e258916e2a36605177f0", "text": "Novolac type liquefied wood/phenol/formaldehyde (LWPF) resins were synthesized from liquefied wood and formaldehyde. 
The average molecular weight of the LWPF resin made from the liquefied wood reacted in an atmospheric three-neck flask increased with increasing P/W ratio. However, it decreased with increasing phenol/wood ratio when using a sealed Parr reactor. On average, the LWPF resin made from the liquefied wood reacted in the Parr reactor had a lower molecular weight than that from the atmospheric three-neck flask. The infrared spectra of the LWPF resins were similar to that of the conventional novolac resin but showed a major difference in the 1800–1600 cm-1 region. These results indicate that liquefied wood could partially substitute for phenol in novolac resin synthesis. The composites with the liquefied wood resin from the sealed Parr reactor yielded higher thickness swelling than those with the liquefied wood resin from the three-neck flask, likely because of the hydrophilic wood components incorporated in it and its lower cross-link density during the resin cure process compared with the resin from the three-neck flask. Novolac-type LWPF resins were synthesized from liquefied wood and formaldehyde. The average molecular weight of the LWPF resin prepared from liquefied wood in an atmospheric three-neck flask increased with increasing phenol/wood (P/W) ratio, whereas it decreased with increasing P/W ratio when prepared in a sealed Parr reactor. On average, LWPF resin prepared from liquefied wood in a Parr reactor had a lower molecular weight than LWPF resin prepared in an atmospheric three-neck flask. The infrared spectra of the LWPF resins resembled that of conventional novolac resin but differed clearly in the 1800–1600 cm-1 region. These results show that phenol can be partially replaced by liquefied wood in novolac resin synthesis. Composites with LWPF resin made from liquefied wood in the sealed Parr reactor showed higher thickness swelling than those with LWPF resin made in the three-neck flask. The reason probably lies in the incorporated hydrophilic wood components and the lower cross-link density during curing compared with LWPF resin from the three-neck flask.", "title": "" } ]
scidocsrr
31e7693bcbeaf251e55864fb6132ddf8
A brief review on multi-task learning
[ { "docid": "471db984564becfea70fb2946ef4871e", "text": "We propose a novel group regularization which we call exclusive lasso. Unlike the group lasso regularizer that assumes covarying variables in groups, the proposed exclusive lasso regularizer models the scenario when variables in the same group compete with each other. Analysis is presented to illustrate the properties of the proposed regularizer. We present a framework of kernel based multi-task feature selection algorithm based on the proposed exclusive lasso regularizer. An efficient algorithm is derived to solve the related optimization problem. Experiments with document categorization show that our approach outperforms state-of-theart algorithms for multi-task feature selection.", "title": "" } ]
[ { "docid": "d4f3cc4ac102fc922499001c8a8ab6af", "text": "This four-part series of articles provides an overview of the neurological examination of the elderly patient, particularly as it applies to patients with cognitive impairment, dementia or cerebrovascular disease.The focus is on the method and interpretation of the bedside physical examination; the mental state and cognitive examinations are not covered in this review.Part 1 (featured in the September issue of Geriatrics & Aging) began with an approach to the neurological examination in normal aging and in disease,and reviewed components of the general physical, head and neck, neurovascular and cranial nerve examinations relevant to aging and dementia. Part 2 (featured in the October issue) covered the motor examination with an emphasis on upper motor neuron signs and movement disorders. Part 3, featured here, reviews the assessment of coordination,balance and gait,and Part 4 will discuss the muscle stretch reflexes, pathological and primitive reflexes, sensory examination and concluding remarks. Throughout this series, special emphasis is placed on the evaluation and interpretation of neurological signs in light of findings considered normal in the", "title": "" }, { "docid": "9b07a147a3492d53a6a996697f66a342", "text": "We present a method for real-time 3D object instance detection that does not require a time-consuming training stage, and can handle untextured objects. At its core, our approach is a novel image representation for template matching designed to be robust to small image transformations. This robustness is based on spread image gradient orientations and allows us to test only a small subset of all possible pixel locations when parsing the image, and to represent a 3D object with a limited set of templates. In addition, we demonstrate that if a dense depth sensor is available we can extend our approach for an even better performance also taking 3D surface normal orientations into account. We show how to take advantage of the architecture of modern computers to build an efficient but very discriminant representation of the input images that can be used to consider thousands of templates in real time. We demonstrate in many experiments on real data that our method is much faster and more robust with respect to background clutter than current state-of-the-art methods.", "title": "" }, { "docid": "51a37ec1069dceb1a532235f4702682f", "text": "Abstract— This paper presented the design of four-port network directional coupler at X-band frequency (8.2-12.4 GHz) by using substrate integrated waveguide (SIW) technique. SIW appears few years back which provides an excellent platform in order to design millimeter-wave circuits such as filter, antenna, resonator, coupler and power divider. It offers great compensation for smaller size and can be easily integrated with other planar circuits. The fabrication process can simply be done by using standard Printed Circuit Board (PCB) process where the cost of the manufacturing process will be reduced compared to the conventional waveguide. The directional coupler basically implemented at radar, satellite and point-to-point radio. The simulations for this SIW directional coupler design shows good performances with low insertion loss, low return loss, broad operational bandwidth and have high isolation. 
Keyword-Bandwidth, Coupling, Directional coupler, Four-port network, Isolation", "title": "" }, { "docid": "a60b7dd3a6fa142abde3cec712ecc1b7", "text": "Agile methodologies like Scrum propose drastic changes with respect to team hierarchies, organizational structures, planning or controlling processes. To mitigate the level of change and retain some established processes, many organizations prefer to introduce hybrid agile-traditional methodologies that combine agile with traditional development practices. Despite their importance in practice, only a few studies have examined the acceptance of such methodologies, however. In this paper, we present the results of a qualitative study that was conducted at a Swiss bank. It uses Water-Scrum-Fall, which combines Scrum with traditional practices. Based on the Diffusion of Innovations theory, we discuss several acceptance factors and investigate how they are perceived. The results indicate that, compared to traditional development methodologies, some aspects of Water-Scrum-Fall bring relative advantages and are more compatible to the way developers prefer to work. Yet, there also exist potential acceptance barriers such as a restricted individual autonomy and increased process complexity.", "title": "" }, { "docid": "a5bcf7789a71f3ba690da0469923b3b1", "text": "Traditionally, data cleaning has been performed as a pre-processing task: after all data are selected for a study (or application), they are cleaned and loaded into a database or data warehouse. In this paper, we argue that data cleaning should be an integral part of data exploration. Especially for complex, spatio-temporal data, it is only by exploring a dataset that one can discover which constraints should be checked. In addition, in many instances, seemingly erroneous data may actually reflect interesting features. Distinguishing a feature from a data quality issue requires detailed analyses which often includes bringing in new datasets. We present a series of case studies using the NYC taxi data that illustrate data cleaning challenges that arise for spatial-temporal urban data and suggest methodologies to address these challenges.", "title": "" }, { "docid": "acbd639a034cf73f021be3ed78f849bb", "text": "The paper proposes the integration of new cognitive capabilities within the well known OpenBTS architecture in order to make the system able to react in a smart way to the changes of the radio channel. In particular, the proposed spectrum sensing strategy allows the OpenBTS system to be aware of other active transmissions by forcing to choose a new radio channel, within the GSM frequency band, when a licensed primary user has to transmit on a busy channel. The implemented scheme, representing a solid step forward in the cognitive direction, has been validated throughout a detailed testbed pointing out strengths and limitations in realistic communication environments.", "title": "" }, { "docid": "67a6d90c319374d7edd6e0893f06ce6f", "text": "The study aimed to assess the effects of compression trigger point therapy on the stiffness of the trapezius muscle in professional basketball players (Part A), and the reliability of the MyotonPRO device in clinical evaluation of athletes (Part B). Twelve professional basketball players participated in Part A of the study (mean age: 19.8 ± 2.4 years, body height 197 ± 8.2 cm, body mass: 91.8 ± 11.8 kg), with unilateral neck or shoulder pain at the dominant side. 
Part B tested twelve right-handed male athletes (mean ± SD; age: 20.4 ± 1.2 years; body height: 178.6 ± 7.7 cm; body mass: 73.2 ± 12.6 kg). Stiffness measurements were obtained directly before and after a single session trigger point compression therapy. Measurements were performed bilaterally over 5 points covering the trapezius muscle. The effects were evaluated using a full-factorial repeated measure ANOVA and the Bonferroni post-hoc test for equal variance. A p-value < .05 was considered significant. The RM ANOVA revealed a significant decrease in muscle stiffness for the upper trapezius muscle. Specifically, muscle stiffness decreased from 243.7 ± 30.5 to 215.0 ± 48.5 N/m (11.8%), (p = .008) (Part A). The test-retest relative reliability of trapezius muscle stiffness was found to be high (ICC from 0.821 to 0.913 for measurement points). The average SEM was 23.59 N/m and the MDC 65.34 N/m, respectively (Part B). The present study showed that a single session of compression trigger point therapy can be used to significantly decrease the stiffness of the upper trapezius among professional basketball players.", "title": "" }, { "docid": "7f404b1f414673c5b0fb24e397cd755c", "text": "Text mining or knowledge discovery is that sub process of data mining, which is widely being used to discover hidden patterns and significant information from the huge amount of unstructured written material. The proliferation of clouds, research and technologies are responsible for the creation of vast volumes of data. This kind of data cannot be used until or unless specific information or pattern is discovered. For this text mining uses techniques of different fields like machine learning, visualization, case-based reasoning, text analysis, database technology statistics, knowledge management, natural language processing and information retrieval. Text mining is largely growing field of computer science simultaneously to big data and artificial intelligence. This paper contains the review of text mining techniques, tools and various applications.", "title": "" }, { "docid": "0d4f589fedbef123703a271e983ac0f6", "text": "This paper presents the results from a Human-Robot Interaction study that investigates the issues of participants’ preferences in terms of the robot approach direction (directionRAD), robot base approach interaction distance (distanceRBAID), robot handing over hand distance (distanceRHOHD), robot handing over arm gesture (gestureRHOAG), and the coordination of both the robot approaching and gestureRHOAG in the context of a robot handing over an object to a seated person. The results from this study aim at informing the development of a Human Aware Manipulation Planner. Twelve participants with some previous human-robot interaction experience were recruited for the trial. The results show that a majority of the participants prefer the robot to approach from the front and hand them a can of soft drink in the front sector of their personal zone. The robot handing over hand position had the most influence on determining from where the robot should approach (i.e distanceRAD). Legibility and perception of risk seem to be the deciding factor on how participants choose their preferred robot arm-base approach coordination for handing over a can. Detailed discussions of the results conclude the paper.", "title": "" }, { "docid": "9a6f62dd4fc2e9b7f6be5b30c731367c", "text": "In this paper we present a filter algorithm for nonlinear programming and prove its global convergence to stationary points. 
Each iteration is composed of a feasibility phase, which reduces a measure of infeasibility, and an optimality phase, which reduces the objective function in a tangential approximation of the feasible set. These two phases are totally independent, and the only coupling between them is provided by the filter. The method is independent of the internal algorithms used in each iteration, as long as these algorithms satisfy reasonable assumptions on their efficiency. Under standard hypotheses, we show two results: for a filter with minimum size, the algorithm generates a stationary accumulation point; for a slightly larger filter, all accumulation points are stationary.", "title": "" }, { "docid": "1a7dad648167b1d213d3f26626aaa6e7", "text": "This paper performs a comprehensive performance analysis of a family of non-data-aided feedforward carrier frequency offset estimators for QAM signals transmitted through AWGN channels in the presence of unknown timing error. The proposed carrier frequency offset estimators are asymptotically (large sample) nonlinear least-squares estimators obtained by exploiting the fourthorder conjugate cyclostationary statistics of the received signal and exhibit fast convergence rates (asymptotic variances on the order of O(N−3), where N stands for the number of samples). The exact asymptotic performance of these estimators is established and analyzed as a function of the received signal sampling frequency, signal-to-noise ratio, timing delay, and number of symbols. It is shown that in the presence of intersymbol interference effects, the performance of the frequency offset estimators can be improved significantly by oversampling (or fractionally sampling) the received signal. Finally, simulation results are presented to corroborate the theoretical performance analysis, and comparisons with the modified Cramér-Rao bound illustrate the superior performance of the proposed nonlinear least-squares carrier frequency offset estimators.", "title": "" }, { "docid": "4096499f4e34f6c1f0c3bb0bb63fb748", "text": "A detailed examination of evolving traffic characteristics, operator requirements, and network technology trends suggests a move away from nonblocking interconnects in data center networks (DCNs). As a result, recent efforts have advocated oversubscribed networks with the capability to adapt to traffic requirements on-demand. In this paper, we present the design, implementation, and evaluation of OSA, a novel Optical Switching Architecture for DCNs. Leveraging runtime reconfigurable optical devices, OSA dynamically changes its topology and link capacities, thereby achieving unprecedented flexibility to adapt to dynamic traffic patterns. Extensive analytical simulations using both real and synthetic traffic patterns demonstrate that OSA can deliver high bisection bandwidth (60%-100% of the nonblocking architecture). Implementation and evaluation of a small-scale functional prototype further demonstrate the feasibility of OSA.", "title": "" }, { "docid": "ffd0494007a1b82ed6b03aaefd7f8be9", "text": "In this paper we consider the problem of robot navigation in simple maze-like environments where the robot has to rely on its onboard sensors to perform the navigation task. In particular, we are interested in solutions to this problem that do not require localization, mapping or planning. Additionally, we require that our solution can quickly adapt to new situations (e.g., changing navigation goals and environments). 
To meet these criteria we frame this problem as a sequence of related reinforcement learning tasks. We propose a successor-feature-based deep reinforcement learning algorithm that can learn to transfer knowledge from previously mastered navigation tasks to new problem instances. Our algorithm substantially decreases the required learning time after the first task instance has been solved, which makes it easily adaptable to changing environments. We validate our method in both simulated and real robot experiments with a Robotino and compare it to a set of baseline methods including classical planning-based navigation.", "title": "" }, { "docid": "dea9a2b852b1d9d4bcee8e9b6a96bf90", "text": "Pseudo-relevance feedback (PRF) refers to a query expansion strategy based on top-retrieved documents, which has been shown to be highly effective in many retrieval models. Previous work has introduced a set of constraints (axioms) that should be satisfied by any PRF model. In this paper, we propose three additional constraints based on the proximity of feedback terms to the query terms in the feedback documents. As a case study, we consider the log-logistic model, a state-of-the-art PRF model that has been proven to be a successful method in satisfying the existing PRF constraints, and show that it does not satisfy the proposed constraints. We further modify the log-logistic model based on the proposed proximity-based constraints. Experiments on four TREC collections demonstrate the effectiveness of the proposed constraints. Our modification the log-logistic model leads to significant and substantial (up to 15%) improvements. Furthermore, we show that the proposed proximity-based function outperforms the well-known Gaussian kernel which does not satisfy all the proposed constraints.", "title": "" }, { "docid": "6f6667e4c485978b566d25837083b565", "text": "Topic models provide a powerful tool for analyzing large text collections by representing high dimensional data in a low dimensional subspace. Fitting a topic model given a set of training documents requires approximate inference techniques that are computationally expensive. With today's large-scale, constantly expanding document collections, it is useful to be able to infer topic distributions for new documents without retraining the model. In this paper, we empirically evaluate the performance of several methods for topic inference in previously unseen documents, including methods based on Gibbs sampling, variational inference, and a new method inspired by text classification. The classification-based inference method produces results similar to iterative inference methods, but requires only a single matrix multiplication. In addition to these inference methods, we present SparseLDA, an algorithm and data structure for evaluating Gibbs sampling distributions. Empirical results indicate that SparseLDA can be approximately 20 times faster than traditional LDA and provide twice the speedup of previously published fast sampling methods, while also using substantially less memory.", "title": "" }, { "docid": "576e31d3ee33f222e948e8147c39249a", "text": "With cotton fiber growth or maturation, cellulose content in cotton fibers markedly increases. Traditional chemical methods have been developed to determine cellulose content, but it is time-consuming and labor-intensive, mostly owing to the slow hydrolysis process of fiber cellulose components. 
As one approach, the attenuated total reflection Fourier transform infrared (ATR FT-IR) spectroscopy technique has also been utilized to monitor cotton cellulose formation, by implementing various spectral interpretation strategies of both multivariate principal component analysis (PCA) and 1-, 2- or 3-band/-variable intensity or intensity ratios. The main objective of this study was to compare the correlations between cellulose content determined by chemical analysis and ATR FT-IR spectral indices acquired by the reported procedures, among developmental Texas Marker-1 (TM-1) and immature fiber (im) mutant cotton fibers. It was observed that the R value, CIIR, and the integrated intensity of the 895 cm-1 band exhibited strong and linear relationships with cellulose content. The results have demonstrated the suitability and utility of ATR FT-IR spectroscopy, combined with a simple algorithm analysis, in assessing cotton fiber cellulose content, maturity, and crystallinity in a manner which is rapid, routine, and non-destructive.", "title": "" }, { "docid": "261b17479152b68804dce9281d66e3f5", "text": "We introduce extreme summarization, a new single-document summarization task which does not favor extractive strategies and calls for an abstractive modeling approach. The idea is to create a short, one-sentence news summary answering the question “What is the article about?”. We collect a real-world, large scale dataset for this task by harvesting online articles from the British Broadcasting Corporation (BBC). We propose a novel abstractive model which is conditioned on the article’s topics and based entirely on convolutional neural networks. We demonstrate experimentally that this architecture captures longrange dependencies in a document and recognizes pertinent content, outperforming an oracle extractive system and state-of-the-art abstractive approaches when evaluated automatically and by humans.1", "title": "" }, { "docid": "ac15d2b4d14873235fe6e4d2dfa84061", "text": "Despite strong popular conceptions of gender differences in emotionality and striking gender differences in the prevalence of disorders thought to involve emotion dysregulation, the literature on the neural bases of emotion regulation is nearly silent regarding gender differences (Gross, 2007; Ochsner & Gross, in press). The purpose of the present study was to address this gap in the literature. Using functional magnetic resonance imaging, we asked male and female participants to use a cognitive emotion regulation strategy (reappraisal) to down-regulate their emotional responses to negatively valenced pictures. Behaviorally, men and women evidenced comparable decreases in negative emotion experience. Neurally, however, gender differences emerged. Compared with women, men showed (a) lesser increases in prefrontal regions that are associated with reappraisal, (b) greater decreases in the amygdala, which is associated with emotional responding, and (c) lesser engagement of ventral striatal regions, which are associated with reward processing. We consider two non-competing explanations for these differences. First, men may expend less effort when using cognitive regulation, perhaps due to greater use of automatic emotion regulation. Second, women may use positive emotions in the service of reappraising negative emotions to a greater degree. 
We then consider the implications of gender differences in emotion regulation for understanding gender differences in emotional processing in general, and gender differences in affective disorders.", "title": "" }, { "docid": "bc6a13cc44a77d29360d04a2bc96bd61", "text": "Security competitions have become a popular way to foster security education by creating a competitive environment in which participants go beyond the effort usually required in traditional security courses. Live security competitions (also called “Capture The Flag,” or CTF competitions) are particularly well-suited to support handson experience, as they usually have both an attack and a defense component. Unfortunately, because these competitions put several (possibly many) teams against one another, they are difficult to design, implement, and run. This paper presents a framework that is based on the lessons learned in running, for more than 10 years, the largest educational CTF in the world, called iCTF. The framework’s goal is to provide educational institutions and other organizations with the ability to run customizable CTF competitions. The framework is open and leverages the security community for the creation of a corpus of educational security challenges.", "title": "" }, { "docid": "565c949a2bf8b6f6c3d246c7c195419d", "text": "Extracorporeal photochemotherapy (ECP) is an effective treatment modality for patients with erythrodermic myocosis fungoides (MF) and Sezary syndrome (SS). During ECP, a fraction of peripheral blood mononuclear cells is collected, incubated ex-vivo with methoxypsoralen, UVA irradiated, and finally reinfused to the patient. Although the mechanism of action of ECP is not well established, clinical and laboratory observations support the hypothesis of a vaccination-like effect. ECP induces apoptosis of normal and neoplastic lymphocytes, while enhancing differentiation of monocytes towards immature dendritic cells (imDCs), followed by engulfment of apoptotic bodies. After reinfusion, imDCs undergo maturation and antigenic peptides from the neoplastic cells are expressed on the surface of DCs. Mature DCs travel to lymph nodes and activate cytotoxic T-cell clones with specificity against tumor antigens. Disease control is mediated through cytotoxic T-lymphocytes with tumor specificity. The efficacy and excellent safety profile of ECP has been shown in a large number of retrospective trials. Previous studies showed that monotherapy with ECP produces an overall response rate of approximately 60%, while clinical data support that ECP is much more effective when combined with other immune modulating agents such as interferons or retinoids, or when used as consolidation treatment after total skin electron beam irradiation. However, only a proportion of patients actually respond to ECP and parameters predictive of response need to be discovered. A patient with a high probability of response to ECP must fulfill all of the following criteria: (1) SS or erythrodermic MF, (2) presence of neoplastic cells in peripheral blood, and (3) early disease onset. Despite the fact that ECP has been established as a standard treatment modality, no prospective randomized study has been conducted so far, to the authors' knowledge. Considering the high cost of the procedure, the role of ECP in the treatment of SS/MF needs to be clarified via well designed multicenter prospective randomized trials.", "title": "" } ]
scidocsrr
b14b8bbc154551465e9894bb5187125c
Coherent and Noncoherent Dictionaries for Action Recognition
[ { "docid": "c1f6052ecf802f1b4b2e9fd515d7ea15", "text": "In recent years there has been a growing interest in the study of sparse representation of signals. Using an overcomplete dictionary that contains prototype signal-atoms, signals are described by sparse linear combinations of these atoms. Applications that use sparse representation are many and include compression, regularization in inverse problems, feature extraction, and more. Recent activity in this field concentrated mainly on the study of pursuit algorithms that decompose signals with respect to a given dictionary. Designing dictionaries to better fit the above model can be done by either selecting one from a pre-specified set of linear transforms, or by adapting the dictionary to a set of training signals. Both these techniques have been considered, but this topic is largely still open. In this paper we propose a novel algorithm for adapting dictionaries in order to achieve sparse signal representations. Given a set of training signals, we seek the dictionary that leads to the best representation for each member in this set, under strict sparsity constraints. We present a new method – the K-SVD algorithm – generalizing the K-Means clustering process. K-SVD is an iterative method that alternates between sparse coding of the examples based on the current dictionary, and a process of updating the dictionary atoms to better fit the data. The update of the dictionary columns is combined with an update of the sparse representations, thereby accelerating convergence. The K-SVD algorithm is flexible and can work with any pursuit method (e.g., basis pursuit, FOCUSS, or matching pursuit). We analyze this algorithm and demonstrate its results on both synthetic tests and in applications on real image data.", "title": "" }, { "docid": "b50c0f5bd7ee7b0fbcc77934a600f7d4", "text": "Local feature descriptors underpin many diverse applications, supporting object recognition, image registration, database search, 3D reconstruction, and more. The recent phenomenal growth in mobile devices and mobile computing in general has created demand for descriptors that are not only discriminative, but also compact in size and fast to extract and match. In response, a large number of binary descriptors have been proposed, each claiming to overcome some limitations of the predecessors. This paper provides a comprehensive evaluation of several promising binary designs. We show that existing evaluation methodologies are not sufficient to fully characterize descriptors’ performance and propose a new evaluation protocol and a challenging dataset. In contrast to the previous reviews, we investigate the effects of the matching criteria, operating points, and compaction methods, showing that they all have a major impact on the systems’ design and performance. Finally, we provide descriptor extraction times for both general-purpose systems and mobile devices, in order to better understand the real complexity of the extraction task. The objective is to provide a comprehensive reference and a guide that will help in selection and design of the future descriptors.", "title": "" }, { "docid": "a25338ae0035e8a90d6523ee5ef667f7", "text": "Activity recognition in video is dominated by low- and mid-level features, and while demonstrably capable, by nature, these features carry little semantic meaning. Inspired by the recent object bank approach to image representation, we present Action Bank, a new high-level representation of video. 
Action bank is comprised of many individual action detectors sampled broadly in semantic space as well as viewpoint space. Our representation is constructed to be semantically rich and even when paired with simple linear SVM classifiers is capable of highly discriminative performance. We have tested action bank on four major activity recognition benchmarks. In all cases, our performance is better than the state of the art, namely 98.2% on KTH (better by 3.3%), 95.0% on UCF Sports (better by 3.7%), 57.9% on UCF50 (baseline is 47.9%), and 26.9% on HMDB51 (baseline is 23.2%). Furthermore, when we analyze the classifiers, we find strong transfer of semantics from the constituent action detectors to the bank classifier.", "title": "" } ]
[ { "docid": "396f6b6c09e88ca8e9e47022f1ae195b", "text": "Generative Adversarial Network (GAN) and its variants have recently attracted intensive research interests due to their elegant theoretical foundation and excellent empirical performance as generative models. These tools provide a promising direction in the studies where data availability is limited. One common issue in GANs is that the density of the learned generative distribution could concentrate on the training data points, meaning that they can easily remember training samples due to the high model complexity of deep networks. This becomes a major concern when GANs are applied to private or sensitive data such as patient medical records, and the concentration of distribution may divulge critical patient information. To address this issue, in this paper we propose a differentially private GAN (DPGAN) model, in which we achieve differential privacy in GANs by adding carefully designed noise to gradients during the learning procedure. We provide rigorous proof for the privacy guarantee, as well as comprehensive empirical evidence to support our analysis, where we demonstrate that our method can generate high quality data points at a reasonable privacy level.", "title": "" }, { "docid": "9c9e36a64d82beada8807546636aef20", "text": "Nowadays, FMCW (Frequency Modulated Continuous Wave) radar is widely adapted due to the use of solid state microwave amplifier to generate signal source. The FMCW radar can be implemented and analyzed at low cost and less complexity by using Software Defined Radio (SDR). In this paper, SDR based FMCW radar for target detection and air traffic control radar application is implemented in real time. The FMCW radar model is implemented using open source software and hardware. GNU Radio is utilized for software part of the radar and USRP (Universal Software Radio Peripheral) N210 for hardware part. Log-periodic antenna operating at 1GHZ frequency is used for transmission and reception of radar signals. From the beat signal obtained at receiver end and range resolution of signal, target is detected. Further low pass filtering followed by Fast Fourier Transform (FFT) is performed to reduce computational complexity.", "title": "" }, { "docid": "88e1eaad5cfc5aded16f588cd10cb244", "text": "BACKGROUND AND AIMS\nIntestinal barrier impairment is incriminated in the pathophysiology of intestinal gut disorders associated with psychiatric comorbidity. Increased intestinal permeability associated with upload of lipopolysaccharides (LPS) translocation induces depressive symptoms. Gut microbiota and probiotics alter behavior and brain neurochemistry. Since Lactobacillus farciminis suppresses stress-induced hyperpermeability, we examined whether (i) L. farciminis affects the HPA axis stress response, (ii) stress induces changes in LPS translocation and central cytokine expression which may be reversed by L. farciminis, (iii) the prevention of \"leaky\" gut and LPS upload are involved in these effects.\n\n\nMETHODS\nAt the end of the following treatments female rats were submitted to a partial restraint stress (PRS) or sham-PRS: (i) oral administration of L. farciminis during 2 weeks, (ii) intraperitoneal administration of ML-7 (a specific myosin light chain kinase inhibitor), (iii) antibiotic administration in drinking water during 12 days. 
After PRS or sham-PRS session, we evaluated LPS levels in portal blood, plasma corticosterone and adrenocorticotropic hormone (ACTH) levels, hypothalamic corticotropin releasing factor (CRF) and pro-inflammatory cytokine mRNA expression, and colonic paracellular permeability (CPP).\n\n\nRESULTS\nPRS increased plasma ACTH and corticosterone; hypothalamic CRF and pro-inflammatory cytokine expression; CPP and portal blood concentration of LPS. L. farciminis and ML-7 suppressed stress-induced hyperpermeability, endotoxemia and prevented HPA axis stress response and neuroinflammation. Antibiotic reduction of luminal LPS concentration prevented HPA axis stress response and increased hypothalamic expression of pro-inflammatory cytokines.\n\n\nCONCLUSION\nThe attenuation of the HPA axis response to stress by L. farciminis depends upon the prevention of intestinal barrier impairment and decrease of circulating LPS levels.", "title": "" }, { "docid": "19dea4fca2a60fad4b360d34b15480ae", "text": "We present Neural Autoregressive Distribution Estimation (NADE) models, which are neural network architectures applied to the problem of unsupervised distribution and density estimation. They leverage the probability product rule and a weight sharing scheme inspired from restricted Boltzmann machines, to yield an estimator that is both tractable and has good generalization performance. We discuss how they achieve competitive performance in modeling both binary and real-valued observations. We also present how deep NADE models can be trained to be agnostic to the ordering of input dimensions used by the autoregressive product rule decomposition. Finally, we also show how to exploit the topological structure of pixels in images using a deep convolutional architecture for NADE.", "title": "" }, { "docid": "2e3cee13657129d26ec236f9d2641e6c", "text": "Due to the prevalence of social media websites, one challenge facing computer vision researchers is to devise methods to process and search for persons of interest among the billions of shared photos on these websites. Facebook revealed in a 2013 white paper that its users have uploaded more than 250 billion photos, and are uploading 350 million new photos each day. Due to this humongous amount of data, large-scale face search for mining web images is both important and challenging. Despite significant progress in face recognition, searching a large collection of unconstrained face images has not been adequately addressed. To address this challenge, we propose a face search system which combines a fast search procedure, coupled with a state-of-the-art commercial off the shelf (COTS) matcher, in a cascaded framework. Given a probe face, we first filter the large gallery of photos to find the top-k most similar faces using deep features generated from a convolutional neural network. The k retrieved candidates are re-ranked by combining similarities from deep features and the COTS matcher. We evaluate the proposed face search system on a gallery containing 80 million web-downloaded face images. Experimental results demonstrate that the deep features are competitive with state-of-the-art methods on unconstrained face recognition benchmarks (LFW and IJB-A). More specifically, on the LFW database, we achieve 98.23% accuracy under the standard protocol and a verification rate of 87.65% at FAR of 0.1% under the BLUFR protocol. 
For the IJB-A benchmark, our accuracies are as follows: TAR of 51.4% at FAR of 0.1% (verification); Rank 1 retrieval of 82.0% (closed-set search); FNIR of 61.7% at FPIR of 1% (open-set search). Further, the proposed face search system offers an excellent trade-off between accuracy and scalability on datasets consisting of millions of images. Additionally, in an experiment involving searching for face images of the Tsarnaev brothers, convicted of the Boston Marathon bombing, the proposed cascade face search system could find the younger brother’s (Dzhokhar Tsarnaev) photo at rank 1 in 1 second on a 5M gallery and at rank 8 in 7 seconds", "title": "" }, { "docid": "9800cb574743679b4517818c9653ada5", "text": "This paper aims to accelerate the test-time computation of deep convolutional neural networks (CNNs). Unlike existing methods that are designed for approximating linear filters or linear responses, our method takes the nonlinear units into account. We minimize the reconstruction error of the nonlinear responses, subject to a low-rank constraint which helps to reduce the complexity of filters. We develop an effective solution to this constrained nonlinear optimization problem. An algorithm is also presented for reducing the accumulated error when multiple layers are approximated. A whole-model speedup ratio of 4× is demonstrated on a large network trained for ImageNet, while the top-5 error rate is only increased by 0.9%. Our accelerated model has a comparably fast speed as the “AlexNet” [11], but is 4.7% more accurate.", "title": "" }, { "docid": "d0b2999de796ec3215513536023cc2be", "text": "Recently proposed machine comprehension (MC) application is an effort to deal with natural language understanding problem. However, the small size of machine comprehension labeled data confines the application of deep neural networks architectures that have shown advantage in semantic inference tasks. Previous methods use a lot of NLP tools to extract linguistic features but only gain little improvement over simple baseline. In this paper, we build an attention-based recurrent neural network model, train it with the help of external knowledge which is semantically relevant to machine comprehension, and achieves a new state-of-the-art result.", "title": "" }, { "docid": "d603806f579a937a24ad996543fe9093", "text": "Early vision relies heavily on rectangular windows for tasks such as smoothing and computing correspondence. While rectangular windows are efficient, they yield poor results near object boundaries. We describe an efficient method for choosing an arbitrarily shaped connected window, in a manner which varies at each pixel. Our approach can be applied to many problems, including image restoration and visual correspondence. It runs in linear time, and takes a few seconds on traditional benchmark images. Performance on both synthetic and real imagery with ground truth appears promising.", "title": "" }, { "docid": "c5cb0ae3102fcae584e666a1ba3e73ed", "text": "A new generation of computational cameras is emerging, spawned by the introduction of the Lytro light-field camera to the consumer market and recent accomplishments in the speed at which light can be captured. By exploiting the co-design of camera optics and computational processing, these cameras capture unprecedented details of the plenoptic function: a ray-based model for light that includes the color spectrum as well as spatial, temporal, and directional variation. 
Although digital light sensors have greatly evolved in the last years, the visual information captured by conventional cameras has remained almost unchanged since the invention of the daguerreotype. All standard CCD and CMOS sensors integrate over the dimensions of the plenoptic function as they convert photons into electrons. In the process, all visual information is irreversibly lost, except for a two-dimensional, spatially varying subset: the common photograph.\n This course reviews the plenoptic function and discusses approaches for optically encoding high-dimensional visual information that is then recovered computationally in post-processing. It begins with an overview of the plenoptic dimensions and shows how much of this visual information is irreversibly lost in conventional image acquisition. Then it discusses the state of the art in joint optical modulation and computation reconstruction for acquisition of high-dynamic-range imagery and spectral information. It unveils the secrets behind imaging techniques that have recently been featured in the news and outlines other aspects of light that are of interest for various applications before concluding with question, answers, and a short discussion.", "title": "" }, { "docid": "28e538dcdcfed7693f0c1e4fe4d29c94", "text": "The data used in the test consisted of 500 pages selected at random from a collection of approximately 2,500 documents containing 100,000 pages. The documents in this collection were chosen by the U.S. Department of Energy (DOE) to represent the kinds of documents from which the DOE plans to build large, full-text retrieval databases using OCR for document conversion. The documents are mostly scientific and technical papers [Nartker 92].", "title": "" }, { "docid": "f1bc297544e333f08387cfd410e1dc75", "text": "Cascades are ubiquitous in various network environments. How to predict these cascades is highly nontrivial in several vital applications, such as viral marketing, epidemic prevention and traffic management. Most previous works mainly focus on predicting the final cascade sizes. As cascades are typical dynamic processes, it is always interesting and important to predict the cascade size at any time, or predict the time when a cascade will reach a certain size (e.g. an threshold for outbreak). In this paper, we unify all these tasks into a fundamental problem: cascading process prediction. That is, given the early stage of a cascade, how to predict its cumulative cascade size of any later time? For such a challenging problem, how to understand the micro mechanism that drives and generates the macro phenomena (i.e. cascading process) is essential. Here we introduce behavioral dynamics as the micro mechanism to describe the dynamic process of a node's neighbors getting infected by a cascade after this node getting infected (i.e. one-hop subcascades). Through data-driven analysis, we find out the common principles and patterns lying in behavioral dynamics and propose a novel Networked Weibull Regression model for behavioral dynamics modeling. After that we propose a novel method for predicting cascading processes by effectively aggregating behavioral dynamics, and present a scalable solution to approximate the cascading process with a theoretical guarantee. We extensively evaluate the proposed method on a large scale social network dataset. 
The results demonstrate that the proposed method can significantly outperform other state-of-the-art baselines in multiple tasks including cascade size prediction, outbreak time prediction and cascading process prediction.", "title": "" }, { "docid": "c28ee3a41d05654eedfd379baf2d5f24", "text": "The problem of classifying subjects into disease categories is of common occurrence in medical research. Machine learning tools such as Artificial Neural Network (ANN), Support Vector Machine (SVM) and Logistic Regression (LR) and Fisher’s Linear Discriminant Analysis (LDA) are widely used in the areas of prediction and classification. The main objective of these competing classification strategies is to predict a dichotomous outcome (e.g. disease/healthy) based on several features.", "title": "" }, { "docid": "eee3cbeb230fb5bc454e5850bb007169", "text": "Unicycle mobile robot is wheeled mobile robot that can stand and move around using one wheel. It has attached a lot of researchers to conduct studies about the system, particularly in the design of the system mechanisms and the control strategies. Unlike two wheel balancing mobile robot which mechanically stable on one side, unicycle mobile robot requires additional mechanisms to keep balancing robot on all sides. By assuming that both roll dynamics and pitch dynamics are decoupled, so the balancing mechanisms can be designed separately. The reaction wheel is used for obtaining balancing on the roll angle by rotating the disc to generate momentum. While the wheeled robot is used for obtaining balancing on the pitch angle by rotating wheel to move forward or backward. A PID controller is used as balancing control which will control the rotation motor on the reaction disc and wheel based on the pitch and roll feedback from the sensor. By adding the speed controller to the pitch control, the system will compensate automatically for perfectly center of gravity on the robot. Finally, the unicycle robot will be able to balance on pitch angle and roll angle. Based on simulation result validates that robot can balance using PID controller, while based on balancing pitch experiment result, robot can achieve balancing with maximum inclination about ±23 degree on pitch angle and ±3.5 degree on roll angle with steady state error 0.1 degree.", "title": "" }, { "docid": "27834a3ad7148d4174a289580ef9f514", "text": "We explore the power of spatial context as a self-supervisory signal for learning visual representations. In particular, we propose spatial context networks that learn to predict a representation of one image patch from another image patch, within the same image, conditioned on their real-valued relative spatial offset. Unlike auto-encoders, that aim to encode and reconstruct original image patches, our network aims to encode and reconstruct intermediate representations of the spatially offset patches. As such, the network learns a spatially conditioned contextual representation. By testing performance with various patch selection mechanisms we show that focusing on object-centric patches is important, and that using object proposal as a patch selection mechanism leads to the highest improvement in performance. Further, unlike auto-encoders, context encoders [21], or other forms of unsupervised feature learning, we illustrate that contextual supervision (with pre-trained model initialization) can improve on existing pre-trained model performance. 
We build our spatial context networks on top of standard VGG_19 and CNN_M architectures and, among other things, show that we can achieve improvements (with no additional explicit supervision) over the original ImageNet pre-trained VGG_19 and CNN_M models in object categorization and detection on VOC2007.", "title": "" }, { "docid": "d5330600041fd35290004a74aa38a7da", "text": "We present the EpiReader, a novel model for machine comprehension of text. Machine comprehension of unstructured, real-world text is a major research goal for natural language processing. Current tests of machine comprehension pose questions whose answers can be inferred from some supporting text, and evaluate a model’s response to the questions. The EpiReader is an end-to-end neural model comprising two components: the first component proposes a small set of candidate answers after comparing a question to its supporting text, and the second component formulates hypotheses using the proposed candidates and the question, then reranks the hypotheses based on their estimated concordance with the supporting text. We present experiments demonstrating that the EpiReader sets a new state-of-the-art on the CNN and Children’s Book Test machine comprehension benchmarks, outperforming previous neural models by a significant margin.", "title": "" }, { "docid": "e68992d53fa5bac20f8a4f17d72c7d0d", "text": "In the field of pattern recognition, data analysis, and machine learning, data points are usually modeled as high-dimensional vectors. Due to the curse-of-dimensionality, it is non-trivial to efficiently process the orginal data directly. Given the unique properties of nonlinear dimensionality reduction techniques, nonlinear learning methods are widely adopted to reduce the dimension of data. However, existing nonlinear learning methods fail in many real applications because of the too-strict requirements (for real data) or the difficulty in parameters tuning. Therefore, in this paper, we investigate the manifold learning methods which belong to the family of nonlinear dimensionality reduction methods. Specifically, we proposed a new manifold learning principle for dimensionality reduction named Curved Cosine Mapping (CCM). Based on the law of cosines in Euclidean space, CCM applies a brand new mapping pattern to manifold learning. In CCM, the nonlinear geometric relationships are obtained by utlizing the law of cosines, and then quantified as the dimensionality-reduced features. Compared with the existing approaches, the model has weaker theoretical assumptions over the input data. Moreover, to further reduce the computation cost, an optimized version of CCM is developed. Finally, we conduct extensive experiments over both artificial and real-world datasets to demonstrate the performance of proposed techniques.", "title": "" }, { "docid": "bfee1553c6207909abc9820e741d6e01", "text": "Ciphertext-policy attribute-based encryption (CP-ABE) is a promising cryptographic technique that integrates data encryption with access control for ensuring data security in IoT systems. However, the efficiency problem of CP-ABE is still a bottleneck limiting its development and application. A widespread consensus is that the computation overhead of bilinear pairing is excessive in the practical application of ABE, especially for the devices or the processors with limited computational resources and power supply. 
In this paper, we proposed a novel pairing-free data access control scheme based on CP-ABE using elliptic curve cryptography, abbreviated PF-CP-ABE. We replace complicated bilinear pairing with simple scalar multiplication on elliptic curves, thereby reducing the overall computation overhead. And we designed a new way of key distribution that it can directly revoke a user or an attribute without updating other users’ keys during the attribute revocation phase. Besides, our scheme use linear secret sharing scheme access structure to enhance the expressiveness of the access policy. The security and performance analysis show that our scheme significantly improved the overall efficiency as well as ensured the security.", "title": "" }, { "docid": "80f31bb04f4714d7a14499d5d97be8da", "text": "We investigate the importance of text analysis for stock price prediction. In particular, we introduce a system that forecasts companies’ stock price changes (UP, DOWN, STAY) in response to financial events reported in 8-K documents. Our results indicate that using text boosts prediction accuracy over 10% (relative) over a strong baseline that incorporates many financially-rooted features. This impact is most important in the short term (i.e., the next day after the financial event) but persists for up to five days.", "title": "" }, { "docid": "cdb0e65f89f94e436e8c798cd0188d66", "text": "Visual storytelling aims to generate human-level narrative language (i.e., a natural paragraph with multiple sentences) from a photo streams. A typical photo story consists of a global timeline with multi-thread local storylines, where each storyline occurs in one different scene. Such complex structure leads to large content gaps at scene transitions between consecutive photos. Most existing image/video captioning methods can only achieve limited performance, because the units in traditional recurrent neural networks (RNN) tend to “forget” the previous state when the visual sequence is inconsistent. In this paper, we propose a novel visual storytelling approach with Bidirectional Multi-thread Recurrent Neural Network (BMRNN). First, based on the mined local storylines, a skip gated recurrent unit (sGRU) with delay control is proposed to maintain longer range visual information. Second, by using sGRU as basic units, the BMRNN is trained to align the local storylines into the global sequential timeline. Third, a new training scheme with a storyline-constrained objective function is proposed by jointly considering both global and local matches. Experiments on three standard storytelling datasets show that the BMRNN model outperforms the state-of-the-art methods.", "title": "" } ]
scidocsrr
59d66e3199afc6fc2555a8bc468107c0
ProxImaL: efficient image optimization using proximal algorithms
[ { "docid": "8183fe0c103e2ddcab5b35549ed8629f", "text": "The performance of Douglas-Rachford splitting and the alternating direction method of multipliers (ADMM) (i.e. Douglas-Rachford splitting on the dual problem) are sensitive to conditioning of the problem data. For a restricted class of problems that enjoy a linear rate of convergence, we show in this paper how to precondition the optimization data to optimize a bound on that rate. We also generalize the preconditioning methods to problems that do not satisfy all assumptions needed to guarantee a linear convergence. The efficiency of the proposed preconditioning is confirmed in a numerical example, where improvements of more than one order of magnitude are observed compared to when no preconditioning is used.", "title": "" } ]
[ { "docid": "bce79146a0316fd10c6ee492ff0b5686", "text": "Recent advances in deep learning for object recognition in natural images has prompted a surge of interest in applying a similar set of techniques to medical images. Most of the initial attempts largely focused on replacing the input to such a deep convolutional neural network from a natural image to a medical image. This, however, does not take into consideration the fundamental differences between these two types of data. More specifically, detection or recognition of an anomaly in medical images depends significantly on fine details, unlike object recognition in natural images where coarser, more global structures matter more. This difference makes it inadequate to use the existing deep convolutional neural networks architectures, which were developed for natural images, because they rely on heavily downsampling an image to a much lower resolution to reduce the memory requirements. This hides details necessary to make accurate predictions for medical images. Furthermore, a single exam in medical imaging often comes with a set of different views which must be seamlessly fused in order to reach a correct conclusion. In our work, we propose to use a multi-view deep convolutional neural network that handles a set of more than one highresolution medical image. We evaluate this network on large-scale mammography-based breast cancer screening (BI-RADS prediction) using 103 thousand images. We focus on investigating the impact of training set sizes and image sizes on the prediction accuracy. Our results highlight that performance clearly increases with the size of training set, and that the best performance can only be achieved using the images in the original resolution. This suggests the future direction of medical imaging research using deep neural networks is to utilize as much data as possible with the least amount of potentially harmful preprocessing.", "title": "" }, { "docid": "81f5f2e9da401a40a561f91b8c6b6bc5", "text": "Human computer interaction is defined as Users (Humans) interact with the computers. Speech recognition is an area of computer science that deals with the designing of systems that recognize spoken words. Speech recognition system allows ordinary people to speak to the system. Recognizing and understanding a spoken sentence is obviously a knowledge-intensive process, which must take into account all variable information about the speech communication process, from acoustics to semantics and pragmatics. This paper is the survey of how speech is converted in text and that text in translated into another language. In this paper, we outline a speech recognition system, learning based approach and target language generation mechanism with the help of language EnglishSanskrit language pair using rule based machine translation technique [1]. Rule Based Machine Translation provides high quality translation and requires in depth knowledge of the language apart from real world knowledge and the differences in cultural background and conceptual divisions. Here the English speech is first converted into text and that will translated into Sanskrit language. Keywords-Speech Recognition, Sanskrit, Context Free Grammar, Rule based machine translation, Database.", "title": "" }, { "docid": "35e0119aa16c78bb003fc4d32fe770b8", "text": "An information stream is a requested arrangement of examples that can be perused just once or a little number of times utilizing constrained processing and stockpiling abilities. 
Numerous applications take a shot at stream information like web pursuit, system activity, phone call and so on. In this application information is persistently changing in view of time. In this paper we will examine future patterns of information digging that are utilized for examination and expectation of big data. We will talk about difficulties while performing mining on big information. Stream information is likewise alluding as constant information. Constant information produced through web, every second a huge number of information created, so how to oversee and dissect this information, we examine in this paper.", "title": "" }, { "docid": "3552f0bc14b541584a2015d5c1e52d51", "text": "Although sustainability in the fashion industry has gained prominence from both business practices and academic research, retailing, a vital part of the supply chain, has not yet been fairly explored in academia. The interest in this area has increased lately, mainly due to the growing complexity within this dynamic context. Therefore, it is meaningful to conduct a systematic review of the relevant published literature in this field. This study aims to identify the main perspectives of research on sustainable retailing in the fashion industry. The content analysis results indicate that the most prominent areas in the field are sustainable retailing in disposable fashion, fast fashion, slow fashion, green branding and eco-labeling; retailing of secondhand fashion; reverse logistics in fashion retailing; and emerging retailing opportunities in e-commerce. The results from this review also indicate that there is a lack of research on sustainable retailing in the fashion industry in the developing market.", "title": "" }, { "docid": "e393cf414910dbf50ac18d2ad0f2cd15", "text": "Training relation extractors for the purpose of automated knowledge base population requires the availability of sufficient training data. The amount of manual labeling can be significantly reduced by applying distant supervision, which generates training data by aligning large text corpora with existing knowledge bases. This typically results in a highly noisy training set, where many training sentences do not express the intended relation. In this paper, we propose to combine distant supervision with minimal human supervision by annotating features (in particular shortest dependency paths) rather than complete relation instances. Such feature labeling eliminates noise from the initial training set, resulting in a significant increase of precision at the expense of recall. We further improve on this approach by introducing the Semantic Label Propagation (SLP) method, which uses the similarity between low-dimensional representations of candidate training instances to again extend the (filtered) training set in order to increase recall while maintaining high precision. Our strategy is evaluated on an established test collection designed for knowledge base population (KBP) from the TAC KBP English slot filling task. The experimental results show that SLP leads to substantial performance gains when compared to existing approaches while requiring an almost negligible human annotation effort.", "title": "" }, { "docid": "5595102130b4c03c7f65f31207951f79", "text": "Being a leading location-based social network (LBSN), Foursquare’s Swarm app allows users to conduct checkins at a specified location and share their real-time locations with friends. This app records a massive set of spatio-temporal information of users around the world. 
In this paper, we track the evolution of user density of the Swarm app in New York City (NYC) for one entire week. We study the temporal patterns of different venue categories, and investigate how the function of venue categories affects the temporal behavior of visitors. Moreover, by applying time-series analysis, we validate that the temporal patterns can be effectively decomposed into regular parts which represent the regular human behavior and stochastic parts which represent the randomness of human behavior. Finally, we build a model to predict the evolution of the user density, and our results demonstrate an accurate prediction.", "title": "" }, { "docid": "cba9f80ab39de507e84b68dc598d0bb9", "text": "In this paper we construct a noncommutative space of “pointed Drinfeld modules” that generalizes to the case of function fields the noncommutative spaces of commensurability classes of Q-lattices. It extends the usual moduli spaces of Drinfeld modules to possibly degenerate level structures. In the second part of the paper we develop some notions of quantum statistical mechanics in positive characteristic and we show that, in the case of Drinfeld modules of rank one, there is a natural time evolution on the associated noncommutative space, which is closely related to the positive characteristic L-functions introduced by Goss. The points of the usual moduli space of Drinfeld modules define KMS functionals for this time evolution. We also show that the scaling action on the dual system is induced by a Frobenius action, up to a Wick rotation to imaginary time. © 2006 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "1979fa5a3384477602c0e81ba62199da", "text": "Language style transfer is the problem of migrating the content of a source sentence to a target style. In many of its applications, parallel training data are not available and source sentences to be transferred may have arbitrary and unknown styles. Under this problem setting, we propose an encoder-decoder framework. First, each sentence is encoded into its content and style latent representations. Then, by recombining the content with the target style, we decode a sentence aligned in the target domain. To adequately constrain the encoding and decoding functions, we couple them with two loss functions. The first is a style discrepancy loss, enforcing that the style representation accurately encodes the style information guided by the discrepancy between the sentence style and the target style. The second is a cycle consistency loss, which ensures that the transferred sentence should preserve the content of the original sentence disentangled from its style. We validate the effectiveness of our model in three tasks: sentiment modification of restaurant reviews, dialog response revision with a romantic style, and sentence rewriting with a Shakespearean style.", "title": "" }, { "docid": "91eda0e2f9ef0e2ed87c5135c0061dfd", "text": "We detail the design, implementation, and an initial evaluation of a virtual reality education and entertainment (edutainment) application called Virtual Environment Interactions (VEnvI). VEnvI is an application in which students learn computer science concepts through the process of choreographing movement for a virtual character using a fun and intuitive interface. In this exploratory study, 54 participants as part of a summer camp geared towards promoting participation of women in science and engineering programmatically crafted a dance performance for a virtual human. 
A subset of those students participated in an immersive embodied interaction metaphor in VEnvI. In creating this metaphor that provides first-person, embodied experiences using self-avatars, we seek to facilitate engagement, excitement and interest in computational thinking. We qualitatively and quantitatively evaluated the extent to which the activities of the summer camp, programming the dance moves, and the embodied interaction within VEnvI facilitated students' edutainment, presence, interest, excitement, and engagement in computing, and the potential to alter their perceptions of computing and computer scientists. Results indicate that students enjoyed the experience and successfully engaged the virtual character in the immersive embodied interaction, thus exhibiting high telepresence and social presence. Students also showed increased interest and excitement regarding the computing field at the end of their summer camp experience using VEnvI.", "title": "" }, { "docid": "35e8a61fe4b87a1421d48dc583e69c57", "text": "As one of the most popular micro-blogging services, Twitter attracts millions of users, producing millions of tweets daily. Shared information through this service spreads faster than would have been possible with traditional sources, however the proliferation of user-generation content poses challenges to browsing and finding valuable information. In this paper we propose a graph-theoretic model for tweet recommendation that presents users with items they may have an interest in. Our model ranks tweets and their authors simultaneously using several networks: the social network connecting the users, the network connecting the tweets, and a third network that ties the two together. Tweet and author entities are ranked following a co-ranking algorithm based on the intuition that that there is a mutually reinforcing relationship between tweets and their authors that could be reflected in the rankings. We show that this framework can be parametrized to take into account user preferences, the popularity of tweets and their authors, and diversity. Experimental evaluation on a large dataset shows that our model outperforms competitive approaches by a large margin.", "title": "" }, { "docid": "046df1ccbc545db05d0d91fe8f73d64a", "text": "Precise models of the robot inverse dynamics allow the design of significantly more accurate, energy-efficient and more compliant robot control. However, in some cases the accuracy of rigidbody models does not suffice for sound control performance due to unmodeled nonlinearities arising from hydraulic cable dynamics, complex friction or actuator dynamics. In such cases, estimating the inverse dynamics model from measured data poses an interesting alternative. Nonparametric regression methods, such as Gaussian process regression (GPR) or locally weighted projection regression (LWPR), are not as restrictive as parametric models and, thus, offer a more flexible framework for approximating unknown nonlinearities. In this paper, we propose a local approximation to the standard GPR, called local GPR (LGP), for real-time model online-learning by combining the strengths of both regression methods, i.e., the high accuracy of GPR and the fast speed of LWPR. The approach is shown to have competitive learning performance for high-dimensional data while being sufficiently fast for real-time learning. The effectiveness of LGP is exhibited by a comparison with the state-of-the-art regression techniques, such as GPR, LWPR and ν-SVR. 
The applicability of the proposed LGP method is demonstrated by real-time online-learning of the inverse dynamics model for robot model-based control on a Barrett WAM robot arm.", "title": "" }, { "docid": "1a34642809ce718c777d4d3956fdfe48", "text": "We propose a simple, efficient and effective method using deep convolutional activation features (CNNs) to achieve state-of-the-art classification and segmentation for the MICCAI 2014 Brain Tumor Digital Pathology Challenge. Common traits of such medical image challenges are characterized by large image dimensions (up to the gigabyte size of an image), a limited amount of training data, and significant clinical feature representations. To tackle these challenges, we transfer the features extracted from CNNs trained with a very large general image database to the medical image challenge. In this paper, we used CNN activations trained by ImageNet to extract features (4096 neurons, 13.3% active). In addition, feature selection, feature pooling, and data augmentation are used in our work. Our system obtained 97.5% accuracy on classification and 84% accuracy on segmentation, demonstrating a significant performance gain over other participating teams.", "title": "" }, { "docid": "b93a949c1c509bf8e5d36a9ec2cb37a5", "text": "At first glance, agile methods and global software development might seem incompatible. Agile methods stress continuous face-to-face communication, whereas communication has been reported as the biggest problem of global software development. One challenge to solve is how to apply agile practices in settings where continuous face-to-face interaction is missing. However, agile methods have been successfully used in distributed projects, indicating that they could benefit global software development. This paper discusses potential benefits and challenges of adopting agile methods in global software development. The literature on real industrial case studies reporting on experiences of using agile methods in distributed projects is still scarce. Therefore we suggest further research on the topic. We present our plans for research in companies using agile methods in their distributed projects. We also intend to test the use of agile principles in globally distributed student projects developing software for industrial clients.", "title": "" }, { "docid": "b084146e68ae9b6400019f69573086c3", "text": "Soccer is the most popular sport in the world and is performed by men and women, children and adults with different levels of expertise. Soccer performance depends upon a myriad of factors such as technical/biomechanical, tactical, mental and physiological areas. One of the reasons that soccer is so popular worldwide is that players may not need to have an extraordinary capacity within any of these performance areas, but possess a reasonable level within all areas. However, there are trends towards more systematic training and selection influencing the anthropometric profiles of players who compete at the highest level. As with other activities, soccer is not a science, but science may help improve performance. Efforts to improve soccer performance often focus on technique and tactics at the expense of physical fitness. During a 90-minute game, elite-level players run about 10 km at an average intensity close to the anaerobic threshold (80-90% of maximal heart rate).
Within this endurance context, numerous explosive bursts of activity are required, including jumping, kicking, tackling, turning, sprinting, changing pace, and sustaining forceful contractions to maintain balance and control of the ball against defensive pressure. The best teams continue to increase their physical capacities, whilst the less well ranked have similar values as reported 30 years ago. Whether this is a result of fewer assessments and training resources, selling the best players, and/or knowledge of how to perform effective exercise training regimens in less well ranked teams, is not known. As there do exist teams from lower divisions with as high aerobic capacity as professional teams, the latter factor probably plays an important role. This article provides an update on the physiology of soccer players and referees, and relevant physiological tests. It also gives examples of effective strength- and endurance-training programmes to improve on-field performance. The cited literature has been accumulated by computer searching of relevant databases and a review of the authors' extensive files. From a total of 9893 papers covering topics discussed in this article, 843 were selected for closer scrutiny, excluding studies where information was redundant, insufficient or the experimental design was inadequate. In this article, 181 were selected and discussed. The information may have important implications for the safety and success of soccer players and hopefully it should be understood and acted upon by coaches and individual soccer players.", "title": "" }, { "docid": "e4b6dbd8238160457f14aacb8f9717ff", "text": "The PKZIP program is one of the more widely used archive/compression programs on personal computers. It also has many compatible variants on other computers, and is used by most BBS's and ftp sites to compress their archives. PKZIP provides a stream cipher which allows users to scramble files with variable length keys (passwords). In this paper we describe a known plaintext attack on this cipher, which can find the internal representation of the key within a few hours on a personal computer using a few hundred bytes of known plaintext. In many cases, the actual user keys can also be found from the internal representation. We conclude that the PKZIP cipher is weak, and should not be used to protect valuable data.", "title": "" }, { "docid": "795e128d64e7af81ab26a0392e852343", "text": "Many applications demand availability. Unfortunately, software failures greatly reduce system availability. Prior work on surviving software failures suffers from one or more of the following limitations: Required application restructuring, inability to address deterministic software bugs, unsafe speculation on program execution, and long recovery time. This paper proposes an innovative safe technique, called Rx, which can quickly recover programs from many types of software bugs, both deterministic and non-deterministic. Our idea, inspired from allergy treatment in real life, is to rollback the program to a recent checkpoint upon a software failure, and then to re-execute the program in a modified environment. We base this idea on the observation that many bugs are correlated with the execution environment, and therefore can be avoided by removing the \"allergen\" from the environment. Rx requires few to no modifications to applications and provides programmers with additional feedback for bug diagnosis. We have implemented RX on Linux.
Our experiments with four server applications that contain six bugs of various types show that RX can survive all the six software failures and provide transparent fast recovery within 0.017-0.16 seconds, 21-53 times faster than the whole program restart approach for all but one case (CVS). In contrast, the two tested alternatives, a whole program restart approach and a simple rollback and re-execution without environmental changes, cannot successfully recover the three servers (Squid, Apache, and CVS) that contain deterministic bugs, and have only a 40% recovery rate for the server (MySQL) that contains a non-deterministic concurrency bug. Additionally, RX's checkpointing system is lightweight, imposing small time and space overheads.", "title": "" }, { "docid": "072b6e69c0d0e277bf7fd679f31085f6", "text": "A strip curl antenna is investigated for obtaining a circularly-polarized (CP) tilted beam. This curl is excited through a strip line (called the excitation line) that connects the curl arm to a coaxial feed line. The antenna structure has the following features: a small circumference not exceeding two wavelengths and a small antenna height of less than 0.42 wavelength. The antenna arm is printed on a dielectric hollow cylinder, leading to a robust structure. The investigation reveals that an external excitation for the curl using a straight line (ST-line) is more effective for generating a tilted beam than an internal excitation. It is found that the axial ratio of the radiation field from the external-excitation curl is improved by transforming the ST-line into a wound line (WD-line). It is also found that a modification to the end area of the WD-line leads to an almost constant input impedance (50 ohms). Note that these results are demonstrated for the Ku-band (from 11.7 GHz to 12.75 GHz, 8.6% bandwidth).", "title": "" }, { "docid": "e4e12d6395bac797fa42cb9a19c381cb", "text": "The short-circuit force induces critical mechanical stress on a transformer. This paper deals with experimental verification and finite element analysis (FEA) for short-circuit force prediction of a 50 kVA dry-type transformer. We modeled high voltage (HV) winding into 20 sections and low voltage (LV) winding into 22 sections as similar as those windings of a model transformer. With this modeling technique, we could calculate electromagnetic forces acting on each section of the windings of a dry-type transformer under short-circuit condition. The magnetic vector potentials, magnetic flux densities, and electromagnetic forces due to short-circuit current are solved by FEA. The electromagnetic forces consisting of radial and axial directions depend both on short-circuit current and leakage flux density. These results were used as input source of sequential finite element method (FEM) to predict the resultant mechanical forces considering the structural characteristics such as stress distributions or deformations of windings, accurately. The obtained resultant mechanical forces in HV winding are compared with those of the experimental ones.", "title": "" } ]
scidocsrr
177a847a6ffacff9b454b36136e99c04
Learning Awareness Models
[ { "docid": "1dc15eab73059656e8f34092de81fd4c", "text": "The reinforcement learning paradigm allows, in principle, for complex behaviours to be learned directly from simple reward signals. In practice, however, it is common to carefully hand-design the reward function to encourage a particular solution, or to derive it from demonstration data. In this paper explore how a rich environment can help to promote the learning of complex behavior. Specifically, we train agents in diverse environmental contexts, and find that this encourages the emergence of robust behaviours that perform well across a suite of tasks. We demonstrate this principle for locomotion – behaviours that are known for their sensitivity to the choice of reward. We train several simulated bodies on a diverse set of challenging terrains and obstacles, using a simple reward function based on forward progress. Using a novel scalable variant of policy gradient reinforcement learning, our agents learn to run, jump, crouch and turn as required by the environment without explicit reward-based guidance. A visual depiction of highlights of the learned behavior can be viewed in this video.", "title": "" }, { "docid": "51c42a305039d65dc442910c8078a9aa", "text": "Infants are experts at playing, with an amazing ability to generate novel structured behaviors in unstructured environments that lack clear extrinsic reward signals. We seek to mathematically formalize these abilities using a neural network that implements curiosity-driven intrinsic motivation. Using a simple but ecologically naturalistic simulated environment in which an agent can move and interact with objects it sees, we propose a “world-model” network that learns to predict the dynamic consequences of the agent’s actions. Simultaneously, we train a separate explicit “self-model” that allows the agent to track the error map of its worldmodel. It then uses the self-model to adversarially challenge the developing world-model. We demonstrate that this policy causes the agent to explore novel and informative interactions with its environment, leading to the generation of a spectrum of complex behaviors, including ego-motion prediction, object attention, and object gathering. Moreover, the world-model that the agent learns supports improved performance on object dynamics prediction, detection, localization and recognition tasks. Taken together, our results are initial steps toward creating flexible autonomous agents that self-supervise in realistic physical environments.", "title": "" } ]
[ { "docid": "487b003ca1b0484df194ba8f3dbc50eb", "text": "Recent years have seen an explosion in the rate of discovery of genetic defects linked to Parkinson's disease. These breakthroughs have not provided a direct explanation for the disease process. Nevertheless, they have helped transform Parkinson's disease research by providing tangible clues to the neurobiology of the disorder.", "title": "" }, { "docid": "0b74dd11a1a85e98779ce7763f8fb8fd", "text": "The prediction of asthma that persists throughout childhood and into adulthood, in early life of a child has practical, clinical and prognostic implications and sets the basis for the future prevention. Artificial Neural Networks (ANNs) seems to be a superior tool for analyzing data sets where nonlinear relationships are existing between the input data and the predicted output. This study presents an effective machine-learning approach based on Multi-Layer Perceptron (MLP) neural networks, for the prediction of persistent asthma in children. Through a feature reduction, 10 high importance prognostic factors correlated to persistent asthma have been discovered. The feature selection approach results in 89.8% reduction of the initial number of features. Afterwards, a feature reduced classifier is constructed, which achieves 100% accuracy on the training and test data sets. Experimental results are presenting and verify this statement.", "title": "" }, { "docid": "c4915e44d481e8760bb13c7ebe970d45", "text": "Fault management system has critical responsibility in telecommunication networks to guarantee the networks reliability. One of important functions of fault management is fault notification. In this paper, fault notification extension for BSS 2G Siemens is proposed. Application to process and send the BSS 2G alarms automatically to sites engineers in form of Short Message Service (SMS) is designed and developed. Report status of the notification is accomplished with time-stamp and interval period. The experimental results showed that the notification extension for the BSS 2G alarm is received successfully by site engineers. Transmission time and response time of the notification is faster up to three times than using operator's phone call.", "title": "" }, { "docid": "cb9f89949979f2144e45e06dccdde2e8", "text": "This paper describes the double mode surface acoustic wave (DMS) filter design techniques for achieving the ultra-steep cut-off characteristics and low insertion loss required for the Rx filter in the personal communications services (PCS) duplexer. Simulations demonstrate that the optimal combination of the additional common ground inductance Lg and the coupling capacitance Cc between the input and output terminals of the DMS filters drastically enhances the skirt steepness and attenuation for the lower frequency side of the passband. Based on this result, we propose a novel DMS filter structure that utilizes the parasitic reactance generated in bonding wires and interdigital transducer (IDT) busbars as Lg and Cc, respectively. Because the proposed structure does not need any additional reactance component, the filter size can be small. Moreover, we propose a compact multiple-connection configuration for low insertion loss. Applying these technologies to the Rx filter, we successfully develop a PCS SAW duplexer.", "title": "" }, { "docid": "50840b0308e1f884b61c9f824b1bf17f", "text": "The StreamIt programming model has been proposed to exploit parallelism in streaming applications on general purpose multi-core architectures. 
This model allows programmers to specify the structure of a program as a set of filters that act upon data, and a set of communication channels between them. The StreamIt graphs describe task, data and pipeline parallelism which can be exploited on modern Graphics Processing Units (GPUs), as they support abundant parallelism in hardware. In this paper, we describe the challenges in mapping StreamIt to GPUs and propose an efficient technique to software pipeline the execution of stream programs on GPUs. We formulate this problem --- both scheduling and assignment of filters to processors --- as an efficient Integer Linear Program (ILP), which is then solved using ILP solvers. We also describe a novel buffer layout technique for GPUs which facilitates exploiting the high memory bandwidth available in GPUs. The proposed scheduling utilizes both the scalar units in GPU, to exploit data parallelism, and multiprocessors, to exploit task and pipeline parallelism. Further it takes into consideration the synchronization and bandwidth limitations of GPUs, and yields speedups between 1.87X and 36.83X over a single threaded CPU.", "title": "" }, { "docid": "83f539d6d6cd64743b9406e9ac246c1a", "text": "This paper presents Granola, a transaction coordination infrastructure for building reliable distributed storage applications. Granola provides a strong consistency model, while significantly reducing transaction coordination overhead. We introduce specific support for a new type of independent distributed transaction, which we can serialize with no locking overhead and no aborts due to write conflicts. Granola uses a novel timestamp-based coordination mechanism to order distributed transactions, offering lower latency and higher throughput than previous systems that offer strong consistency. Our experiments show that Granola has low overhead, is scalable and has high throughput. We implemented the TPC-C benchmark on Granola, and achieved 3× the throughput of a platform using a locking approach.", "title": "" }, { "docid": "52b3ba8268f9dacac20923d6b9145bd6", "text": "Clustering large scale data has become an important challenge which motivates several recent works. While the emphasis has been on the organization of massive data into disjoint groups, this work considers the identification of non-disjoint groups rather than the disjoint ones. In this setting, it is possible for data object to belong simultaneously to several groups since many real-world applications of clustering require non-disjoint partitioning to fit data structures. For this purpose, we propose the Parallel Overlapping k-means method (POKM) which is able to perform parallel clustering processes leading to non-disjoint partitioning of data. The proposed method is implemented within Spark framework to ensure the distribution of works over the different computation nodes. Experiments which we have performed on simulated and real-world multi-labeled datasets shows both faster execution times and high quality of clustering compared to existing methods.", "title": "" }, { "docid": "3535e70b1c264d99eff5797413650283", "text": "MIMO is one of the techniques used in LTE Release 8 to achieve very high data rates. A field trial was performed in a pre-commercial LTE network. The objective is to investigate how well MIMO works with realistically designed handhelds in band 13 (746-756 MHz in downlink). In total, three different handheld designs were tested using antenna mockups. 
In addition to the mockups, a reference antenna design with less stringent restrictions on physical size and excellent properties for MIMO was used. The trial comprised test drives in areas with different characteristics and with different network load levels. The effects of hands holding the devices and the effect of using the device inside a test vehicle were also investigated. In general, it is very clear from the trial that MIMO works very well and gives a substantial performance improvement at the tested carrier frequency if the antenna design of the hand-held is well made with respect to MIMO. In fact, the best of the handhelds performed similarly to the reference antenna.", "title": "" }, { "docid": "292eea3f09d135f489331f876052ce88", "text": "Steganography is a term used for covered writing. Steganography can be applied to different file formats, such as audio, video, text, image, etc. In image steganography, data in the form of an image is hidden under another image by using transformations such as z-transformation, integer wavelet transformation, DWT, etc., and then sent to the destination. At the destination, the data is extracted from the cover image using the inverse transformation. This paper presents a new approach for image steganography using DWT. The cover image is divided into higher and lower frequency sub-bands and data is embedded into the higher frequency sub-bands. Arnold Transformation is used to increase the security. The proposed approach is implemented in MATLAB 7.0 and evaluated on the basis of PSNR, capacity and correlation. The proposed approach results in high-capacity image steganography as compared to existing approaches. Keywords: Image Steganography, PSNR, Discrete Wavelet Transform.", "title": "" }, { "docid": "ab4e2ab6b206fece59f40945c82d5cd7", "text": "Knowledge distillation is effective for training small and generalisable network models that meet low-memory and fast-running requirements. Existing offline distillation methods rely on a strong pre-trained teacher, which enables favourable knowledge discovery and transfer but requires a complex two-phase training procedure. Online counterparts address this limitation at the price of lacking a high-capacity teacher. In this work, we present an On-the-fly Native Ensemble (ONE) learning strategy for one-stage online distillation. Specifically, ONE trains only a single multi-branch network while simultaneously establishing a strong teacher on-the-fly to enhance the learning of the target network. Extensive evaluations show that ONE improves the generalisation performance of a variety of deep neural networks more significantly than alternative methods on four image classification datasets: CIFAR10, CIFAR100, SVHN, and ImageNet, whilst having computational efficiency advantages.", "title": "" }, { "docid": "1541b49d4f8cade557d6944eb79e36c9", "text": "In recent years, a plethora of approaches have been proposed to deal with the increasingly challenging task of multi-output regression. This paper provides a survey of state-of-the-art multi-output regression methods, which are categorized as problem transformation and algorithm adaptation methods.
In addition, we present the most commonly used performance evaluation measures, publicly available data sets for multi-output regression real-world problems, as well as open-source software frameworks.", "title": "" }, { "docid": "2d5368515f2ea6926e9347d971745eb9", "text": "Let us consider a \"random graph\" Γn,N having n possible (labelled) vertices and N edges; in other words, let us choose at random (with equal probabilities) one of the possible graphs having the given n labelled vertices and N edges. We call such a graph completely connected if it has no isolated points and is connected in the ordinary sense. In the present paper we consider asymptotic statistical properties of random graphs for n → ∞. We shall deal with the following questions: 1. What is the probability of Γn,N being completely connected? 2. What is the probability that the greatest connected component (sub-graph) of Γn,N should have effectively n-k points? (k = 0, 1, ...). 3. What is the probability that Γn,N should consist of exactly k+1 connected components? (k = 0, 1, ...). 4. If the edges of a graph with n vertices are chosen successively so that after each step every edge which has not yet been chosen has the same probability to be chosen as the next, and if we continue this process until the graph becomes completely connected, what is the probability that the number of necessary steps ν will be equal to a given number l? As (partial) answers to the above questions we prove the following four theorems. In Theorems 1, 2, and 3 we use the notation Nc = [(1/2)n log n + cn], where c is an arbitrary fixed real number ([x] denotes the integer part of x).", "title": "" }, { "docid": "d456cdecdb66e62d971a069f45d9594c", "text": "In this paper, a new rectangle detection approach is proposed. It is a bottom-up approach that contains four stages: line segment extraction, corner detection, corner-relation-graph generation and rectangle detection. Graph structure is used to construct the relations between corners and simplify the problem of rectangle detection. In addition, the approach can be extended to detect any polygons. Experiments on bin detection, traffic sign detection and license plate detection prove that the approach is robust.", "title": "" }, { "docid": "73977bfb83e82862445f0c114a0ba722", "text": "Current machine learning systems operate, almost exclusively, in a statistical, or model-blind mode, which entails severe theoretical limits on their power and performance. Such systems cannot reason about interventions and retrospection and, therefore, cannot serve as the basis for strong AI. To achieve human level intelligence, learning machines need the guidance of a model of reality, similar to the ones used in causal inference. To demonstrate the essential role of such models, I will present a summary of seven tasks which are beyond reach of current machine learning systems and which have been accomplished using the tools of causal inference.", "title": "" }, { "docid": "e78a652a865494e4d05ad80d8a37224f", "text": "This paper focuses on the mechanical properties of an articulated multi-unit wheel-type in-pipe locomotion robot system. By establishing the posture model of the robot system, we can obtain the coordinates of the wheel center and its corresponding contact point with the pipe wall for each wheel of the robot unit. Based on the posture model, the mechanical model of the robot unit is presented and the analysis is carried out in detail.
To confirm the effectiveness of the proposed theoretical analysis, an example of the statics of the pipe robot is calculated, and the calculation results basically reflect the actual characteristics of the pipe robot. This provides a theoretical basis for the selection of the driving mechanism design and control mode of the wheel-type pipe robot.", "title": "" }, { "docid": "e41cdabf7d1cc98b7ea00d3a96664fb0", "text": "Definition. Scoliosis is defined as a three-dimensional deformity of the spine which, in addition to a lateral curve in the frontal plane, includes rotation in the transverse plane and a change of profile in the sagittal plane (1). The name originates from Hippocrates and was also used by the Roman physician Galen. The first treatment with an orthosis was performed by Ambroise Paré in the 16th century (2), and the term idiopathic scoliosis (IS), which refers to all patients in whom no etiology has been established, was introduced by Kleinberg in 1922 (3). Morphologically, scolioses are divided into those with a single main curve (more often a right-sided thoracic curve) and those with a double main curve (thoracolumbar) (Figure 1), although there are considerably more detailed classifications according to various authors and types of treatment (e.g., Lenke's, which is more suitable for operative treatment, Schroth's, Rigo's). Structural scolioses are three-dimensional deformities of the spine, whereas non-structural scolioses have only a lateral curve in the frontal plane (e.g., antalgic scoliosis in sciatica) (4).", "title": "" }, { "docid": "526854ab5bf3c01f9e88dee8aeaa8dda", "text": "Key establishment in sensor networks is a challenging problem because asymmetric key cryptosystems are unsuitable for use in resource constrained sensor nodes, and also because the nodes could be physically compromised by an adversary. We present three new mechanisms for key establishment using the framework of pre-distributing a random set of keys to each node. First, in the q-composite keys scheme, we trade off the unlikeliness of a large-scale network attack in order to significantly strengthen random key predistribution's strength against smaller-scale attacks. Second, in the multipath-reinforcement scheme, we show how to strengthen the security between any two nodes by leveraging the security of other links. Finally, we present the random-pairwise keys scheme, which perfectly preserves the secrecy of the rest of the network when any node is captured, and also enables node-to-node authentication and quorum-based revocation.", "title": "" }, { "docid": "4a58ca6e628248088455bf9b8d10711b", "text": "Developmental dysgraphia, being observed among 10–30% of school-aged children, is a disturbance or difficulty in the production of written language that has to do with the mechanics of writing. The objective of this study is to propose a method that can be used for automated diagnosis of this disorder, as well as for estimation of difficulty level as determined by the handwriting proficiency screening questionnaire. We used a digitizing tablet to acquire handwriting and consequently employed a complex parameterization in order to quantify its kinematic aspects and hidden complexities. We also introduced a simple intrawriter normalization that increased dysgraphia discrimination and HPSQ estimation accuracies. Using a random forest classifier, we reached 96% sensitivity and specificity, while in the case of automated rating by the HPSQ total score, we reached 10% estimation error.
This study proves that digital parameterization of pressure and altitude/tilt patterns in children with dysgraphia can be used for preliminary diagnosis of this writing disorder.", "title": "" }, { "docid": "791f889ddb18c375d38f809805bc66cd", "text": "INTRODUCTION\nNasopharyngeal cysts are uncommon, and are mostly asymptomatic. However, these lesions are infrequently found during routine endoscopies and imaging studies. In even more rare cases, they may be the source of unexplained sinonasal symptoms, such as CSF rhinorrhea, visual disturbances and nasal obstruction.\n\n\nPURPOSE OF REVIEW\nThis presentation systematically reviews the different nasopharyngeal cysts encountered in children, emphasizing the current knowledge on pathophysiology, recent advances in molecular biology and prenatal diagnosis, clinical presentation, imaging and treatment options.\n\n\nSUMMARY\nWith the advent of flexible and rigid fiber-optic technology and modern imaging techniques, and in particular prenatal diagnostic techniques, recognition of nasopharyngeal cysts is more common than in previous times and requires appropriate consideration. Familiarity with these lesions is essential for the pediatric otolaryngologist.", "title": "" }, { "docid": "cf95d41dc5a2bcc31b691c04e3fb8b96", "text": "Resection of the pancreas, in particular pancreaticoduodenectomy, is a complex procedure, commonly performed in appropriately selected patients with benign and malignant disease of the pancreas and periampullary region. Despite significant improvements in the safety and efficacy of pancreatic surgery, pancreaticoenteric anastomosis continues to be the \"Achilles heel\" of pancreaticoduodenectomy, due to its association with a measurable risk of leakage or failure of healing, leading to pancreatic fistula. The morbidity rate after pancreaticoduodenectomy remains high in the range of 30% to 65%, although the mortality has significantly dropped to below 5%. Most of these complications are related to pancreatic fistula, with serious complications of intra-abdominal abscess, postoperative bleeding, and multiorgan failure. Several pharmacological and technical interventions have been suggested to decrease the pancreatic fistula rate, but the results have been controversial. This paper considers the definition and classification of pancreatic fistula, risk factors, and preventive approaches, and offers a management strategy for when they do occur.", "title": "" } ]
scidocsrr
64297ad10ef3ddfdc722299251a50984
Cloud Services vs. On-Premise Solutions Cost Comparison Calculator
[ { "docid": "b01fb8d54ca7b2ac4a9c895c01d54047", "text": "With the growth of Cloud Computing, more and more companies are offering different cloud services. From the customer's point of view, it is always difficult to decide whose services they should use, based on users' requirements. Currently there is no software framework which can automatically index cloud providers based on their needs. In this work, we propose a framework and a mechanism, which measure the quality and prioritize Cloud services. Such framework can make significant impact and will create healthy competition among Cloud providers to satisfy their Service Level Agreement (SLA) and improve their Quality of Services (QoS).", "title": "" } ]
[ { "docid": "bb782cfc4528de63c38dfc2165f9c4b4", "text": "Many studies have investigated the smart grid architecture and communication models in the past few years. However, the communication model and architecture for a smart grid still remain unclear. Today's electric power distribution is very complex and maladapted because of the lack of efficient and cost-effective energy generation, distribution, and consumption management systems. A wireless smart grid communication system can playan important role in achieving these goals. In thispaper, we describe a smart grid communication architecture in which we merge customers and distributors into a single domain. In the proposed architecture, all the home area networks, neighborhood area networks, and local electrical equipment form a local wireless mesh network (LWMN). Each device or meter can act as a source, router, or relay. The data generated in any node (device/meter) reaches the data collector via other nodes. The data collector transmits this data via the access point of a wide area network (WAN). Finally, data is transferred to the service provider or to the control center of the smart grid. We propose a wireless cooperative communication model for the LWMN. We deploy a limited number of smart relays to improve the performance of the network. A novel relay selection mechanism is also proposed to reduce the relay selection overhead. Simulation results show that our cooperative smart grid (coopSG) communication model improves the end-to-end packet delivery latency, throughput, and energy efficiency over both the Wang et al. and Niyato et al. models.", "title": "" }, { "docid": "7fab2075a73a5795075b29e20f5354ac", "text": "The selection of hospital once an ambulance has picked up its patient is today decided by the ambulance staff. This report describes a supervised machine learning approach for predicting hospital selection. This is a multi-class classification problem. The performance of random forest, logistic regression and neural network were compared to each other and to a baseline, namely the one rule-algorithm. The algorithms were applied to real world data from SOS-alarm, the company that operate Sweden’s emergency call services. Performance was measured with accuracy and f1-score. Random Forest got the best result followed by neural network. Logistic regression exhibited slightly inferior results but still performed far better than the baseline. The results point toward machine learning being a suitable method for learning the problem of hospital selection. Analys av ambulanstransport medelst maskininlärning", "title": "" }, { "docid": "0ccf6d97ff8a6b664a73056ec8e39dc7", "text": "1. Resilient healthcare This integrative review focuses on the methodological strategies employed by studies on resilient healthcare. Resilience engineering (RE), which involves the study of coping with complexity (Woods and Hollnagel, 2006) in modern socio-technical systems (Bergström et al., 2015); emerged in about 2000. The RE discipline is quickly developing, and it has been applied to healthcare, aviation, the petrochemical industry, nuclear power plants, railways, manufacturing, natural disasters and other fields (Righi et al., 2015). The term ‘resilient healthcare’ (RHC) refers to the application of the concepts and methods of RE in the healthcare field, specifically regarding patient safety (Hollnagel et al., 2013a). 
Instead of the traditional risk management approach based on retrospective analyses of errors, RHC focuses on ‘everyday clinical work’, specifically on the ways it unfolds in practice (Braithwaite et al., 2017). Wears et al. (2015) defined RHC as follows. The ability of the health care system (a clinic, a ward, a hospital, a county) to adjust its functioning prior to, during, or following events (changes, disturbances or opportunities), and thereby sustain required operations under both expected and unexpected conditions. (p. xxvii) After more than a decade of theoretical development in the field of resilience, scholars are beginning to identify its methodological challenges (Woods, 2015; Nemeth and Herrera, 2015). The lack of welldefined constructs to conceptualize resilience challenges the ability to operationalize those constructs in empirical research (Righi et al., 2015; Wiig and Fahlbruch, forthcoming). Further, studying complexity requires challenging methodological designs to obtain evidence about the tested constructs to inform and further develop theory (Bergström and Dekker, 2014). It is imperative to gather emerging knowledge on applied methodology in empirical RHC research to map and discuss the methodological strategies in the healthcare domain. The insights gained might create and refine methodological designs to enable further development of RHC concepts and theory. This study aimed to describe and synthesize the methodological strategies currently applied in https://doi.org/10.1016/j.ssci.2018.08.025 Received 10 October 2016; Received in revised form 13 August 2018; Accepted 27 August 2018 ⁎ Corresponding author. E-mail addresses: [email protected] (S.H. Berg), [email protected] (K. Akerjordet), [email protected] (M. Ekstedt), [email protected] (K. Aase). Safety Science 110 (2018) 300–312 Available online 05 September 2018 0925-7535/ © 2018 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/BY-NC-ND/4.0/). T empirical RHC research in terms of the empirical fields, applied research designs, methods, analytical strategies, main topics and data collection sources at different systemic levels, and to assess the quality of those studies. We argue that one implication of studying sociotechnical systems is that multiple levels in a given system must be addressed, as proposed by, for example, Rasmussen (1997). As such, this study synthesized the ways that RHC studies have approached empirical data at various systemic levels. 2. Methodology in resilient healthcare research ‘Research methodology’ is a strategy or plan of action that shapes the choices and uses of various methods and links them to desired outcomes (Crotty, 1998). This study broadly used the term ‘methodological strategy’ to denote an observed study’s overall research design, data collection sources, data collection methods and analytical methods at different systemic levels. The methodological issues discussed in the RHC literature to date have concerned the methods used to study everyday clinical practice, healthcare complexity and the operationalization of the constructs measuring resilience. 2.1. Methods of studying healthcare complexity RE research is characterized by its study of complexities. In a review of the rationale behind resilience research, Bergström et al. 
(2015) found that RE researchers typically justified their research by referring to the complexity of modern socio-technical systems that makes them inherently risky. Additionally, in the healthcare field, references are made to the complex adaptive system (CAS) perspective (Braithwaite et al., 2013). CAS emerged from complexity theory, and it takes a dynamic approach to human and nonhuman agents (Urry, 2003). Healthcare is part of a complex socio-technical system and an example of a CAS comprising professionals, patients, managers, policymakers and technologies, all of which interact with and rely on trade-offs and adjustments to succeed in everyday clinical work (Braithwaite et al., 2013). Under complexity theory, complex systems are viewed as open systems that interact with their environments, implying a need to understand the systems’ environments before understanding the systems. Because these environments are complex, no standard methodology can provide a complete understanding (Bergström and Dekker, 2014), and the opportunities for experimental research are limited. Controlled studies might not be able to identify the complex interconnections and multiple variables that influence care; thus, non-linear methods are necessary to describe and understand those systems. Consequently, research on complexity imposes methodological challenges related to the development of valid evidence (Braithwaite et al., 2013). It has been argued that triangulation is necessary to study complex work settings in order to reveal actual phenomena and minimize bias leading to misinterpretation (Nemeth et al., 2011). Methodological triangulation has been suggested, as well as data triangulation, as a strategic way to increase the internal and external validity of RE/RHC research (Nemeth et al., 2011; Mendonca, 2008). Data triangulation involves collecting data from various sources, such as reports, policy documents, multiple professional groups and patient feedback, whereas methodological triangulation involves combining different qualitative methods or mixing qualitative and quantitative methods. Multiple methods have been suggested for research on everyday clinical practice and healthcare complexity. Hollnagel (2014) suggested qualitative methods, such as qualitative interviews, field observations and organizational development techniques (e.g. appreciative inquiry and cooperative inquiry). Nemeth and Herrera (2015) proposed observation in actual settings as a core value of the RE field of practice. Drawing on the methods of cognitive system engineering, Nemeth et al. (2011) described the uses of cognitive task analysis (CTA) to study resilience. CTA comprises numerous methods, one of which is the critical decision method (CDM). CDM is a retrospective interview in which subjects are asked about critical events and decisions. Other proposed methods for studying complex work settings were work domain analysis (WDA), process tracing, artefact analysis and rapid prototyping. System modelling, using methods such as trend analysis, cluster analysis, social network analysis and log linear modelling, has been proposed as a way to study resilience from a socio-technical/CAS perspective (Braithwaite et al., 2013; Anderson et al., 2013). The functional resonance analysis method (FRAM) has been employed to study interactions and dependencies as they develop in specific situations. FRAM is presented as a way to study how complex and dynamic sociotechnical systems work (Hollnagel, 2012). In addition, Leveson et al. 
(2006) suggested STAMP, a model of accident causation based on systems theory, as a method to analyse resilience. 2.2. Operationalization of resilience A vast amount of the RE literature has been devoted to developing theories on resilience, emphasizing that the domain is in a theory development stage (Righi et al., 2015). This process of theory development is reflected in the diverse definitions and indicators of resilience proposed over the past decade e.g. 3, (Woods, 2006, 2011; Wreathall, 2006). Numerous constructs have been developed, such as resilient abilities (Woods, 2011; Hollnagel, 2008, 2010; Nemeth et al., 2008; Hollnagel et al., 2013b), Safety-II (Hollnagel, 2014), Work-as-done (WAD) and Work-as-imagined (WAI) (Hollnagel et al., 2015), and performance variability (Hollnagel, 2014). The operationalization of these constructs has been a topic of discussion. According to Westrum (2013), one challenge to determining measures of resilience in healthcare relates to the characteristics of resilience as a family of related ideas rather than as a single construct. The applied definitions of ‘resilience’ in RE research have focused on a given system’s adaptive capacities and its abilities to adopt or absorb disturbing conditions. This conceptual understanding of resilience has been applied to RHC [6, p. xxvii]. By understanding resilience as a ‘system’s ability’, the healthcare system is perceived as a separate ontological category. The system is regarded as a unit that might have individual goals, actions or abilities not necessarily shared by its members. Therefore, RHC is greater than the sum of its members’ individual actions, which is a perspective found in methodological holism (Ylikoski, 2012). The challenge is to operationalize the study of ‘the system as a whole’. Some scholars have advocated on behalf of locating the empirical basis of resilience by studying individual performances and aggregating those data to develop a theory of resilience (Mendonca, 2008; Furniss et al., 2011). This approach uses the strategy of finding the properties of the whole (the healthcare system) within the parts at the micro level, which is found in methodological individualism. The WAD and performance variability constructs bring resilience closer to an empirical ground by fr", "title": "" }, { "docid": "96d6173f58e36039577c8e94329861b2", "text": "Reverse Turing tests, or CAPTCHAs, have become an ubiquitous defense used to protect open Web resources from being exploited at scale. An effective CAPTCHA resists existing mechanistic software solving, yet can be solved with high probability by a human being. In response, a robust solving ecosystem has emerged, reselling both automated solving technology and realtime human labor to bypass these protections. Thus, CAPTCHAs can increasingly be understood and evaluated in purely economic terms; the market price of a solution vs the monetizable value of the asset being protected. We examine the market-side of this question in depth, analyzing the behavior and dynamics of CAPTCHA-solving service providers, their price performance, and the underlying labor markets driving this economy.", "title": "" }, { "docid": "5229fb13c66ca8a2b079f8fe46bb9848", "text": "We put forth a lookup-table-based modular reduction method which partitions the binary string of an integer to be reduced into blocks according to its runs. Its complexity depends on the amount of runs in the binary string. 
We show that the new reduction is almost twice as fast as the popular Barrett’s reduction, and provide a thorough complexity analysis of the method.", "title": "" }, { "docid": "fc60b5230d05cc6d775933194badddf7", "text": "The Successive-Cancellation List (SCL) decoding algorithm is one of the most promising approaches towards practical polar code decoding. It is able to provide a good trade-off between error-correction performance and complexity, tunable through the size of the list. In this paper, we show that in the conventional formulation of SCL, there are redundant calculations which do not need to be performed in the course of the algorithm. We simplify SCL by removing these redundant calculations and prove that the proposed simplified SCL and the conventional SCL algorithms are equivalent. The simplified SCL algorithm is valid for any code and can reduce the time-complexity of SCL without affecting the space complexity.", "title": "" }, { "docid": "a2c26a8b15cafeb365ad9870f9bbf884", "text": "Microgrids consist of multiple parallel-connected distributed generation (DG) units with coordinated control strategies, which are able to operate in both grid-connected and islanded mode. Microgrids are attracting more and more attention since they can alleviate the stress of main transmission systems, reduce feeder losses, and improve system power quality. When the islanded microgrids are concerned, it is important to maintain system stability and achieve load power sharing among the multiple parallel-connected DG units. However, the poor active and reactive power sharing problems due to the influence of impedance mismatch of the DG feeders and the different ratings of the DG units are inevitable when the conventional droop control scheme is adopted. Therefore, the adaptive/improved droop control, network-based control methods and cost-based droop schemes are compared and summarized in this paper for active power sharing. Moreover, nonlinear and unbalanced loads could further affect the reactive power sharing when regulating the active power, and it is difficult to share the reactive power accurately only by using the enhanced virtual impedance method. Therefore, the hierarchical control strategies are utilized as supplements of the conventional droop controls and virtual impedance methods. The improved hierarchical control approaches such as the algorithms based on graph theory, multi-agent system, the gain scheduling method and predictive control have been proposed to achieve proper reactive power sharing for islanded microgrids and eliminate the effect of the communication delays on hierarchical control. Finally, the future research trends on islanded microgrids are also discussed in this paper.", "title": "" }, { "docid": "85736b2fd608e3d109ce0f3c46dda9ac", "text": "The WHO (2001) recommends exclusive breast-feeding and delaying the introduction of solid foods to an infant's diet until 6 months postpartum. However, in many countries, this recommendation is followed by few mothers, and earlier weaning onto solids is a commonly reported global practice. Therefore, this prospective, observational study aimed to assess compliance with the WHO recommendation and examine weaning practices, including the timing of weaning of infants, and to investigate the factors that predict weaning at ≤ 12 weeks. From an initial sample of 539 pregnant women recruited from the Coombe Women and Infants University Hospital, Dublin, 401 eligible mothers were followed up at 6 weeks and 6 months postpartum. 
Quantitative data were obtained on mothers' weaning practices using semi-structured questionnaires and a short dietary history of the infant's usual diet at 6 months. Only one mother (0.2%) complied with the WHO recommendation to exclusively breastfeed up to 6 months. Ninety-one (22.6%) infants were prematurely weaned onto solids at ≤ 12 weeks with predictive factors after adjustment, including mothers' antenatal reporting that infants should be weaned onto solids at ≤ 12 weeks, formula feeding at 12 weeks and mothers' reporting of the maternal grandmother as the principal source of advice on infant feeding. Mothers who weaned their infants at ≤ 12 weeks were more likely to engage in other sub-optimal weaning practices, including the addition of non-recommended condiments to their infants' foods. Provision of professional advice and exploring antenatal maternal misperceptions are potential areas for targeted interventions to improve compliance with the recommended weaning practices.", "title": "" }, { "docid": "02cfb4cedd863cbf9364b5a80a46e9c4", "text": "An engaged lifestyle is seen as an important component of successful ageing. Many older adults with high participation in social and leisure activities report positive wellbeing, a fact that fuelled the original activity theory and that continues to influence researchers, theorists and practitioners. This study’s purpose is to review the conceptualisation and measurement of activity among older adults and the associations reported in the gerontological literature between specific dimensions of activity and wellbeing. We searched published studies that focused on social and leisure activity and wellbeing, and found 42 studies in 44 articles published between 1995 and 2009. They reported from one to 13 activity domains, the majority reporting two or three, such as informal, formal and solitary, or productive versus leisure. Domains associated with subjective wellbeing, health or survival included social, leisure, productive, physical, intellectual, service and solitary activities. Informal social activity has accumulated the most evidence of an influence on wellbeing. Individual descriptors such as gender or physical functioning sometimes moderate these associations, while contextual variables such as choice, meaning or perceived quality play intervening roles. Differences in definitions and measurement make it difficult to draw inferences about this body of evidence on the associations between activity and wellbeing. Activity theory serves as shorthand for these associations, but gerontology must better integrate developmental and psychological constructs into a refined, comprehensive activity theory.", "title": "" }, { "docid": "00309e5119bb0de1d7b2a583b8487733", "text": "In this paper, we propose a novel Deep Reinforcement Learning framework for news recommendation. Online personalized news recommendation is a highly challenging problem due to the dynamic nature of news features and user preferences. Although some online recommendation models have been proposed to address the dynamic nature of news recommendation, these methods have three major issues. First, they only try to model current reward (e.g., Click Through Rate). Second, very few studies consider to use user feedback other than click / no click labels (e.g., how frequent user returns) to help improve recommendation. Third, these methods tend to keep recommending similar news to users, which may cause users to get bored. 
Therefore, to address the aforementioned challenges, we propose a Deep Q-Learning based recommendation framework, which can model future reward explicitly. We further consider user return pattern as a supplement to click / no click label in order to capture more user feedback information. In addition, an effective exploration strategy is incorporated to find new attractive news for users. Extensive experiments are conducted on the offline dataset and online production environment of a commercial news recommendation application and have shown the superior performance of our methods.", "title": "" }, { "docid": "6605397ad283fd4d353150d9066f8e6e", "text": "In this paper we present our continuing efforts to generate narrative using a character-centric approach. In particular we discuss the advantages of explicitly representing the emergent event sequence in order to be able to exert influence on it and generate stories that ‘retell’ the emergent narrative. Based on a narrative distinction between fabula, plot and presentation, we make a first step by presenting a model based on story comprehension that can capture the fabula, and show how it can be used for the automatic creation of stories.", "title": "" }, { "docid": "970927ffce65957249bc127a67d8d306", "text": "Despite substantial evidence that resources and outcomes are transmitted across generations, there has been limited inquiry into the extent to which anti-poverty programs actually disrupt the cycle of bad outcomes. We explore how the effects of the United States’ largest early childhood program transfer across generations. We leverage the geographic rollout of this federally funded, means-tested preschool program to estimate the effect of early childhood exposure among mothers on their children’s long-term outcomes. We find evidence of intergenerational transmission of effects in the form of increased educational attainment, reduced teen pregnancy, and reduced criminal engagement in the second generation.", "title": "" }, { "docid": "fcb9614925e939898af060b9ee52f357", "text": "The authors present a method for constructing a feedforward neural net implementing an arbitrarily good approximation to any L/sub 2/ function over (-1, 1)/sup n/. The net uses n input nodes, a single hidden layer whose width is determined by the function to be implemented and the allowable mean square error, and a linear output neuron. Error bounds and an example are given for the method.<<ETX>>", "title": "" }, { "docid": "627b14801c8728adf02b75e8eb62896f", "text": "In the 45 years since Cattell used English trait terms to begin the formulation of his \"description of personality,\" a number of investigators have proposed an alternative structure based on 5 orthogonal factors. The generality of this 5-factor model is here demonstrated across unusually comprehensive sets of trait terms. In the first of 3 studies, 1,431 trait adjectives grouped into 75 clusters were analyzed; virtually identical structures emerged in 10 replications, each based on a different factor-analytic procedure. A 2nd study of 479 common terms grouped into 133 synonym clusters revealed the same structure in 2 samples of self-ratings and in 2 samples of peer ratings. None of the factors beyond the 5th generalized across the samples. 
In the 3rd study, analyses of 100 clusters derived from 339 trait terms suggest their potential utility as Big-Five markers in future studies.", "title": "" }, { "docid": "064c7b6e2553b96bdcf2b6535e052b1b", "text": "In this paper, we address the problem of argument relation classification where argument units are from different texts. We design a joint inference method for the task by modeling argument relation classification and stance classification jointly. We show that our joint model improves the results over several strong baselines.", "title": "" }, { "docid": "d780db3ec609d74827a88c0fa0d25f56", "text": "Highly automated test vehicles are rare today, and (independent) researchers have often limited access to them. Also, developing fully functioning system prototypes is time and effort consuming. In this paper, we present three adaptions of the Wizard of Oz technique as a means of gathering data about interactions with highly automated vehicles in early development phases. Two of them address interactions between drivers and highly automated vehicles, while the third one is adapted to address interactions between pedestrians and highly automated vehicles. The focus is on the experimental methodology adaptations and our lessons learned.", "title": "" }, { "docid": "1bf062481777244f85237a6f0c2e9dea", "text": "................................................................................ 44", "title": "" }, { "docid": "97e5f2e774b58f7533242114e5e06159", "text": "We address the problem of phase retrieval, which is frequently encountered in optical imaging. The measured quantity is the magnitude of the Fourier spectrum of a function (in optics, the function is also referred to as an object). The goal is to recover the object based on the magnitude measurements. In doing so, the standard assumptions are that the object is compactly supported and positive. In this paper, we consider objects that admit a sparse representation in some orthonormal basis. We develop a variant of the Fienup algorithm to incorporate the condition of sparsity and to successively estimate and refine the phase starting from the magnitude measurements. We show that the proposed iterative algorithm possesses Cauchy convergence properties. As far as the modality is concerned, we work with measurements obtained using a frequency-domain optical-coherence tomography experimental setup. The experimental results on real measured data show that the proposed technique exhibits good reconstruction performance even with fewer coefficients taken into account for reconstruction. It also suppresses the autocorrelation artifacts to a significant extent since it estimates the phase accurately.", "title": "" }, { "docid": "40c2110eaefe79a096099aa5db7426fe", "text": "One-hop broadcasting is the predominate form of network traffic in VANETs. Exchanging status information by broadcasting among the vehicles enhances vehicular active safety. Since there is no MAC layer broadcasting recovery for 802.11 based VANETs, efforts should be made towards more robust and effective transmission of such safety-related information. In this paper, a channel adaptive broadcasting method is proposed. It relies solely on channel condition information available at each vehicle by employing standard supported sequence number mechanisms. The proposed method is fully compatible with 802.11 and introduces no communication overhead. 
Simulation studies show that it outperforms standard broadcasting in terms of reception rate and channel utilization.", "title": "" }, { "docid": "1b15f9f544c8cd59aa8456b19d1e89b9", "text": "The Self-Organizing Map (SOM) algorithm has been extensively studied and has been applied with considerable success to a wide variety of problems. However, the algorithm is derived from heuristic ideas and this leads to a number of significant limitations. In this paper, we consider the problem of modelling the probability density of data in a space of several dimensions in terms of a smaller number of latent, or hidden, variables. We introduce a novel form of latent variable model, which we call the GTM algorithm (for Generative Topographic Mapping), which allows general non-linear transformations from latent space to data space, and which is trained using the EM (expectation-maximization) algorithm. Our approach overcomes the limitations of the SOM, while introducing no significant disadvantages. We demonstrate the performance of the GTM algorithm on simulated data from flow diagnostics for a multi-phase oil pipeline.", "title": "" } ]
scidocsrr
c8d5c5251f691278df41a457ac8234f5
Effectiveness and Efficiency of Open Relation Extraction
[ { "docid": "4261755b137a5cde3d9f33c82bc53cd7", "text": "We study the problem of automatically extracting information networks formed by recognizable entities as well as relations among them from social media sites. Our approach consists of using state-of-the-art natural language processing tools to identify entities and extract sentences that relate such entities, followed by using text-clustering algorithms to identify the relations within the information network. We propose a new term-weighting scheme that significantly improves on the state-of-the-art in the task of relation extraction, both when used in conjunction with the standard tf ċ idf scheme and also when used as a pruning filter. We describe an effective method for identifying benchmarks for open information extraction that relies on a curated online database that is comparable to the hand-crafted evaluation datasets in the literature. From this benchmark, we derive a much larger dataset which mimics realistic conditions for the task of open information extraction. We report on extensive experiments on both datasets, which not only shed light on the accuracy levels achieved by state-of-the-art open information extraction tools, but also on how to tune such tools for better results.", "title": "" }, { "docid": "cbda9744930c6d7282bca3f0083da8a3", "text": "Open Information Extraction extracts relations from text without requiring a pre-specified domain or vocabulary. While existing techniques have used only shallow syntactic features, we investigate the use of semantic role labeling techniques for the task of Open IE. Semantic role labeling (SRL) and Open IE, although developed mostly in isolation, are quite related. We compare SRL-based open extractors, which perform computationally expensive, deep syntactic analysis, with TextRunner, an open extractor, which uses shallow syntactic analysis but is able to analyze many more sentences in a fixed amount of time and thus exploit corpus-level statistics. Our evaluation answers questions regarding these systems, including, can SRL extractors, which are trained on PropBank, cope with heterogeneous text found on the Web? Which extractor attains better precision, recall, f-measure, or running time? How does extractor performance vary for binary, n-ary and nested relations? How much do we gain by running multiple extractors? How do we select the optimal extractor given amount of data, available time, types of extractions desired?", "title": "" }, { "docid": "5f2818d3a560aa34cc6b3dbfd6b8f2cc", "text": "Open Information Extraction (IE) systems extract relational tuples from text, without requiring a pre-specified vocabulary, by identifying relation phrases and associated arguments in arbitrary sentences. However, stateof-the-art Open IE systems such as REVERB and WOE share two important weaknesses – (1) they extract only relations that are mediated by verbs, and (2) they ignore context, thus extracting tuples that are not asserted as factual. This paper presents OLLIE, a substantially improved Open IE system that addresses both these limitations. First, OLLIE achieves high yield by extracting relations mediated by nouns, adjectives, and more. Second, a context-analysis step increases precision by including contextual information from the sentence in the extractions. 
OLLIE obtains 2.7 times the area under precision-yield curve (AUC) compared to REVERB and 1.9 times the AUC of WOE.", "title": "" }, { "docid": "40405c31dfd3439252eb1810a373ec0e", "text": "Traditional relation extraction seeks to identify pre-specified semantic relations within natural language text, while open Information Extraction (Open IE) takes a more general approach, and looks for a variety of relations without restriction to a fixed relation set. With this generalization comes the question, what is a relation? For example, should the more general task be restricted to relations mediated by verbs, nouns, or both? To help answer this question, we propose two levels of subtasks for Open IE. One task is to determine if a sentence potentially contains a relation between two entities? The other task looks to confirm explicit relation words for two entities. We propose multiple SVM models with dependency tree kernels for both tasks. For explicit relation extraction, our system can extract both noun and verb relations. Our results on three datasets show that our system is superior when compared to state-of-the-art systems like REVERB and OLLIE for both tasks. For example, in some experiments our system achieves 33% improvement on nominal relation extraction over OLLIE. In addition we propose an unsupervised rule-based approach which can serve as a strong baseline for Open IE systems.", "title": "" } ]
[ { "docid": "0ecded7fad85b79c4c288659339bc18b", "text": "We present an end-to-end supervised based system for detecting malware by analyzing network traffic. The proposed method extracts 972 behavioral features across different protocols and network layers, and refers to different observation resolutions (transaction, session, flow and conversation windows). A feature selection method is then used to identify the most meaningful features and to reduce the data dimensionality to a tractable size. Finally, various supervised methods are evaluated to indicate whether traffic in the network is malicious, to attribute it to known malware “families” and to discover new threats. A comparative experimental study using real network traffic from various environments indicates that the proposed system outperforms existing state-of-the-art rule-based systems, such as Snort and Suricata. In particular, our chronological evaluation shows that many unknown malware incidents could have been detected at least a month before their static rules were introduced to either the Snort or Suricata systems.", "title": "" }, { "docid": "f3abf5a6c20b6fff4970e1e63c0e836b", "text": "We demonstrate a physically-based technique for predicting the drape of a wide variety of woven fabrics. The approach exploits a theoretical model that explicitly represents the microstructure of woven cloth with interacting particles, rather than utilizing a continuum approximation. By testing a cloth sample in a Kawabata fabric testing device, we obtain data that is used to tune the model's energy functions, so that it reproduces the draping behavior of the original material. Photographs, comparing the drape of actual cloth with visualizations of simulation results, show that we are able to reliably model the unique large-scale draping characteristics of distinctly different fabric types.", "title": "" }, { "docid": "f13000c4870a85e491f74feb20f9b2d4", "text": "Complex Event Processing (CEP) is a stream processing model that focuses on detecting event patterns in continuous event streams. While the CEP model has gained popularity in the research communities and commercial technologies, the problem of gracefully degrading performance under heavy load in the presence of resource constraints, or load shedding, has been largely overlooked. CEP is similar to “classical” stream data management, but addresses a substantially different class of queries. This unfortunately renders the load shedding algorithms developed for stream data processing inapplicable. In this paper we study CEP load shedding under various resource constraints. We formalize broad classes of CEP load-shedding scenarios as different optimization problems. We demonstrate an array of complexity results that reveal the hardness of these problems and construct shedding algorithms with performance guarantees. Our results shed some light on the difficulty of developing load-shedding algorithms that maximize utility.", "title": "" }, { "docid": "62d23e00d13903246cc7128fe45adf12", "text": "The uncomputable parts of thinking (if there are any) can be studied in much the same spirit that Turing (1950) suggested for the study of its computable parts. We can develop precise accounts of cognitive processes that, although they involve more than computing, can still be modelled on the machines we call ‘computers’. In this paper, I want to suggest some ways that this might be done, using ideas from the mathematical theory of uncomputability (or Recursion Theory). 
And I want to suggest some uses to which the resulting models might be put. (The reader more interested in the models and their uses than the mathematics and its theorems, might want to skim or skip the mathematical parts.)", "title": "" }, { "docid": "f384196178bf6336d0708718e5b4b378", "text": "Simulating how the global Internet behaves is an immensely challenging undertaking because of the network's great heterogeneity and rapid change. The heterogeneity ranges from the individual links that carry the network's traffic, to the protocols that interoperate over the links, the \"mix\" of different applications used at a site, and the levels of congestion seen on different links. We discuss two key strategies for developing meaningful simulations in the face of these difficulties: searching for invariants and judiciously exploring the simulation parameter space. We finish with a brief look at a collaborative effort within the research community to develop a common network simulator.", "title": "" }, { "docid": "45f500b2d7e3ee59a34ffe0fa34acb0a", "text": "Task consolidation is a way to maximize utilization of cloud computing resources. Maximizing resource utilization provides various benefits such as the rationalization of maintenance , IT service customization, and QoS and reliable services. However, maximizing resource utilization does not mean efficient energy use. Much of the literature shows that energy consumption and resource utilization in clouds are highly coupled. Consequently, some of the literature aims to decrease resource utilization in order to save energy, while others try to reach a balance between resource utilization and energy consumption. In this paper, we present an energy-aware task consolidation (ETC) technique that minimizes energy consumption. ETC achieves this by restricting CPU use below a specified peak threshold. ETC does this by consolidating tasks amongst virtual clusters. In addition, the energy cost model considers network latency when a task migrates to another virtual cluster. To evaluate the performance of ETC we compare it against MaxUtil. MaxUtil is a recently developed greedy algorithm that aims to maximize cloud computing resources. The simulation results show that ETC can significantly reduce power consumption in a cloud system, with 17% improvement over MaxUtil. Cloud computing has recently become popular due to the maturity of related technologies such as network devices, software applications and hardware capacities. Resources in these systems can be widely distributed and the scale of resources involved can range from several servers to an entire data center. To integrate and make good use of resources at various scales, cloud computing needs efficient methods to manage them [4]. Consequently, the focus of much research in recent years has been on how to utilize resources and how to reduce power consumption. One of the key technologies in cloud computing is virtualization. The ability to create virtual machines (VMs) [14] dynamically on demand is a popular solution for managing resources on physical machines. Therefore, many methods [17,18] have been developed that enhance resource utilization such as memory compression, request discrimination, defining threshold for resource usage and task allocation among VMs. Improvements in power consumption, and the relationship between resource usage and energy consumption has also been widely studied [6,10–12,14–18]. 
Some research aims to improve resource utilization while others aim to reduce energy consumption. The goals of both are to reduce costs for data centers. Due to the large size of many data centers, the financial savings are substantial. Energy consumption varies according to CPU utilization [11]. Higher CPU utilization …", "title": "" }, { "docid": "4e22ce6b4169b1466ced1997af5b8f28", "text": "Many applications of Technology Enhanced Learning are based on strong assumptions: Knowledge needs to be standardized, structured and most of all externalized into learning material that preferably is annotated with meta-data for efficient re-use. A vast body of valuable knowledge does not meet these assumptions, including informal knowledge such as experience and intuition that is key to many complex activities. We notice that knowledge, even if not standardized, structured and externalized, can still be observed through its application. We refer to this observable knowledge as PRACTICED KNOWLEDGE. We propose a novel approach to Technology Enhanced Learning named MACHINE TEACHING to convey this knowledge: Machine Learning techniques are used to extract machine models of Practiced Knowledge from observational data. These models are then applied in the learner’s context for his support. We identify two important subclasses of machine teaching, General and Detailed Feedback Machine Teaching. GENERAL FEEDBACK MACHINE TEACHING aims to provide the learner with a “grade-like” numerical rating of his work. This is a direct application of supervised machine learning approaches. DETAILED FEEDBACK MACHINE TEACHING aims to provide the learner with in-depth support with respect to his activities. An analysis showed that a large subclass of Detailed Feedback Machine Teaching applications can be addressed through adapted recommender systems technology. The ability of the underlying machine learning techniques to capture structure and patterns in the observational data is crucial to the overall applicability of Machine Teaching. Therefore, we study the feasibility of Machine Teaching from a machine learning perspective. Following this goal, we evaluate the General Feedback Machine Teaching approach using state-of-the-art machine learning techniques: The exemplary Machine Teaching system is sought to provide the learner with quality estimations of his writing as judged by an online community. The results obtained in this evaluation are supportive of the applicability of Machine Teaching to this domain. To facilitate Detailed Feedback Machine Teaching, we present a novel matrix factorization model and algorithm. In addition to addressing the needs of Machine Teaching, it is also a contribution to the recommender systems field as it facilitates ranking estimation. An Evaluation in a Detailed Feedback Machine Teaching scenario for software engineers supports the feasibility of Machine Teaching in that domain. We therefore conclude that machine learning models capable of capturing important aspects of practiced knowledge can be found in both, General and Detailed Feedback Machine Teaching. 
Machine Teaching does not assume the knowledge to be externalized, but to be observable and therefore adds another body of knowledge to Technology Enhanced Learning not amenable to traditional Technology Enhanced Learning approaches.", "title": "" }, { "docid": "ba2632b7a323e785b57328d32a26bc99", "text": "Modern malware is designed with mutation characteristics, namely polymorphism and metamorphism, which causes an enormous growth in the number of variants of malware samples. Categorization of malware samples on the basis of their behaviors is essential for the computer security community, because they receive huge number of malware everyday, and the signature extraction process is usually based on malicious parts characterizing malware families. Microsoft released a malware classification challenge in 2015 with a huge dataset of near 0.5 terabytes of data, containing more than 20K malware samples. The analysis of this dataset inspired the development of a novel paradigm that is effective in categorizing malware variants into their actual family groups. This paradigm is presented and discussed in the present paper, where emphasis has been given to the phases related to the extraction, and selection of a set of novel features for the effective representation of malware samples. Features can be grouped according to different characteristics of malware behavior, and their fusion is performed according to a per-class weighting paradigm. The proposed method achieved a very high accuracy ($\\approx$ 0.998) on the Microsoft Malware Challenge dataset.", "title": "" }, { "docid": "4c406b80ad6c6ca617177a55d149f325", "text": "REST Chart is a Petri-Net based XML modeling framework for REST API. This paper presents two important enhancements and extensions to REST Chart modeling - Hyperlink Decoration and Hierarchical REST Chart. In particular, the proposed Hyperlink Decoration decomposes resource connections from resource representation, such that hyperlinks can be defined independently of schemas. This allows a Navigation-First Design by which the important global connections of a REST API can be designed first and reused before the local resource representations are implemented and specified. Hierarchical REST Chart is a powerful mechanism to rapidly decompose and extend a REST API in several dimensions based on Hyperlink Decoration. These new mechanisms can be used to manage the complexities in large scale REST APIs that undergo frequent changes as in some large scale open source development projects. This paper shows that these new capabilities can fit nicely in the REST Chart XML with very minor syntax changes. These enhancements to REST Chart are applied successfully in designing and verifying REST APIs for software-defined-networking (SDN) and Cloud computing.", "title": "" }, { "docid": "af461e1a81e234f5ea61652f97d03f18", "text": "In this work we propose a technique that transfers supervision between images from different modalities. We use learned representations from a large labeled modality as supervisory signal for training representations for a new unlabeled paired modality. Our method enables learning of rich representations for unlabeled modalities and can be used as a pre-training procedure for new modalities with limited labeled data. 
We transfer supervision from labeled RGB images to unlabeled depth and optical flow images and demonstrate large improvements for both these cross modal supervision transfers.", "title": "" }, { "docid": "b59e527be8cfb1a0d9f475904bbf1602", "text": "Clustering is grouping input data sets into subsets, called ’clusters’ within which the elements are somewhat similar. In general, clustering is an unsupervised learning task as very little or no prior knowledge is given except the input data sets. The tasks have been used in many fields and therefore various clustering algorithms have been developed. Clustering task is, however, computationally expensive as many of the algorithms require iterative or recursive procedures and most of real-life data is high dimensional. Therefore, the parallelization of clustering algorithms is inevitable, and various parallel clustering algorithms have been implemented and applied to many applications. In this paper, we review a variety of clustering algorithms and their parallel versions as well. Although the parallel clustering algorithms have been used for many applications, the clustering tasks are applied as preprocessing steps for parallelization of other algorithms too. Therefore, the applications of parallel clustering algorithms and the clustering algorithms for parallel computations are described in this paper.", "title": "" }, { "docid": "e294a94b03a2bd958def360a7bce2a46", "text": "The seismic loss estimation is greatly influenced by the identification of the failure mechanism and distribution of the structures. In case of infilled structures, the final failure mechanism greatly differs to that expected during the design and the analysis stages. This is mainly due to the resultant composite behaviour of the frame and the infill panel, which makes the failure assessment and consequently the loss estimation a challenge. In this study, a numerical investigation has been conducted on the influence of masonry infilled panels on physical structural damages and the associated economic losses, under seismic excitation. The selected index buildings have been simulated following real case typical mid-rise masonry infilled steel frame structures. A realistic simulation of construction details, such as variation of infill material properties, type of connections and built quality have been implemented in the models. The fragility functions have been derived for each model using the outcomes obtained from incremental dynamic analysis (IDA). Moreover, by considering different cases of building distribution, the losses have been estimated following an intensity-based assessment approach. The results indicate that the presence of infill panel have a noticeable influence on the vulnerability of the structure and should not be ignored in loss estimations.", "title": "" }, { "docid": "852578afdb63985d93b1d2d0ee8fc3e8", "text": "This paper builds on the recent ASPIC formalism, to develop a general framework for argumentation with preferences. We motivate a revised definition of conflict free sets of arguments, adapt ASPIC to accommodate a broader range of instantiating logics, and show that under some assumptions, the resulting framework satisfies key properties and rationality postulates. We then show that the generalised framework accommodates Tarskian logic instantiations extended with preferences, and then study instantiations of the framework by classical logic approaches to argumentation. 
We conclude by arguing that ASPIC’s modelling of defeasible inference rules further testifies to the generality of the framework, and then examine and counter recent critiques of Dung’s framework and its extensions to accommodate preferences.", "title": "" }, { "docid": "b8e8404c061350aeba92f6ed1ecea1f1", "text": "We consider a single-product revenue management problem where, given an initial inventory, the objective is to dynamically adjust prices over a finite sales horizon to maximize expected revenues. Realized demand is observed over time, but the underlying functional relationship between price and mean demand rate that governs these observations (otherwise known as the demand function or demand curve) is not known. We consider two instances of this problem: (i) a setting where the demand function is assumed to belong to a known parametric family with unknown parameter values; and (ii) a setting where the demand function is assumed to belong to a broad class of functions that need not admit any parametric representation. In each case we develop policies that learn the demand function “on the fly,” and optimize prices based on that. The performance of these algorithms is measured in terms of the regret: the revenue loss relative to the maximal revenues that can be extracted when the demand function is known prior to the start of the selling season. We derive lower bounds on the regret that hold for any admissible pricing policy, and then show that our proposed algorithms achieve a regret that is “close” to this lower bound. The magnitude of the regret can be interpreted as the economic value of prior knowledge on the demand function, manifested as the revenue loss due to model uncertainty.", "title": "" }, { "docid": "d2926d0b4cc10ae2704d04970e58af07", "text": "The event-related potential (ERP) is one of the most popular measures in human cognitive neuroscience. During the last few years there has been a debate about the neural fundamentals of ERPs. Two models have been proposed: The evoked model states that additive evoked responses which are completely independent of ongoing background electroencephalogram generate the ERP. On the other hand the phase reset model suggests a resetting of ongoing brain oscillations to be the neural generator of ERPs. Here, evidence for either of the two models is presented and validated, and their possible impact on cognitive neuroscience is discussed. In addition, future prospects on this field of research are presented.", "title": "" }, { "docid": "f70197342f7bca887fe224c57544f389", "text": "CONTEXT/OBJECTIVE\nThe Spinal Cord Injury--Quality of Life (SCI-QOL) measurement system was developed to address the shortage of relevant and psychometrically sound patient reported outcome (PRO) measures available for clinical care and research in spinal cord injury (SCI) rehabilitation. Using a computer adaptive testing (CAT) approach, the SCI-QOL builds on the Patient Reported Outcomes Measurement Information System (PROMIS) and the Quality of Life in Neurological Disorders (Neuro-QOL) initiative. This initial manuscript introduces the background and development of the SCI-QOL measurement system. Greater detail is presented in the additional manuscripts of this special issue.\n\n\nDESIGN\nClassical and contemporary test development methodologies were employed. Qualitative input was obtained from individuals with SCI and clinicians through interviews, focus groups, and cognitive debriefing. 
Item pools were field tested in a multi-site sample (n=877) and calibrated using item response theory methods. Initial reliability and validity testing was performed in a new sample of individuals with traumatic SCI (n=245).\n\n\nSETTING\nFive Model SCI System centers and one Department of Veterans Affairs Medical Center across the United States.\n\n\nPARTICIPANTS\nAdults with traumatic SCI.\n\n\nINTERVENTIONS\nn/a\n\n\nOUTCOME MEASURES\nn/a\n\n\nRESULTS\nThe SCI-QOL consists of 19 item banks, including the SCI-Functional Index banks, and 3 fixed-length scales measuring physical, emotional, and social aspects of health-related QOL (HRQOL).\n\n\nCONCLUSION\nThe SCI-QOL measurement system consists of psychometrically sound measures for individuals with SCI. The manuscripts in this special issue provide evidence of the reliability and initial validity of this measurement system. The SCI-QOL also links to other measures designed for a general medical population.", "title": "" }, { "docid": "42d978209e74f2052b93eb15b816ba5f", "text": "Bayesian optimization (BO) is a popular algorithm for solving challenging optimization tasks. It is designed for problems where the objective function is expensive to evaluate, perhaps not available in exact form, without gradient information and possibly returning noisy values. Different versions of the algorithm vary in the choice of the acquisition function, which recommends the point to query the objective at next. Initially, researchers focused on improvement-based acquisitions, while recently the attention has shifted to more computationally expensive informationtheoretical measures. In this paper we present two major contributions to the literature. First, we propose a new improvement-based acquisition function that recommends query points where the improvement is expected to be high with high confidence. The proposed algorithm is evaluated on a large set of benchmark functions from the global optimization literature, where it turns out to perform at least as well as current state-of-the-art acquisition functions, and often better. This suggests that it is a powerful default choice for BO. The novel policy is then compared to widely used global optimization solvers in order to confirm that BO methods reduce the computational costs of the optimization by keeping the number of function evaluations small. The second main contribution represents an application to precision medicine, where the interest lies in the estimation of parameters of a partial differential equations model of the human pulmonary blood circulation system. Once inferred, these parameters can help clinicians in diagnosing a patient with pulmonary hypertension without going through the standard invasive procedure of right heart catheterization, which can lead to side effects and complications (e.g. severe pain, internal bleeding, thrombosis).", "title": "" }, { "docid": "46dc618a779bd658bfa019117c880d3a", "text": "The concept and deployment of Internet of Things (IoT) has continued to develop momentum over recent years. Several different layered architectures for IoT have been proposed, although there is no consensus yet on a widely accepted architecture. In general, the proposed IoT architectures comprise three main components: an object layer, one or more middle layers, and an application layer. The main difference in detail is in the middle layers. Some include a cloud services layer for managing IoT things. Some propose virtual objects as digital counterparts for physical IoT objects. 
Sometimes both cloud services and virtual objects are included. In this paper, we take a first step toward our eventual goal of developing an authoritative family of access control models for a cloud-enabled Internet of Things. Our proposed access-control oriented architecture comprises four layers: an object layer, a virtual object layer, a cloud services layer, and an application layer. This 4-layer architecture serves as a framework to build access control models for a cloud-enabled IoT. Within this architecture, we present illustrative examples that highlight some IoT access control issues leading to a discussion of needed access control research. We identify the need for communication control within each layer and across adjacent layers (particularly in the lower layers), coupled with the need for data access control (particularly in the cloud services and application layers).", "title": "" }, { "docid": "9bb0ee77990ead987b49ab4180edd99f", "text": "Stacked graphs are a visualization technique popular in casual scenarios for representing multiple time-series. Variations of stacked graphs have been focused on reducing the distortion of individual streams because foundational perceptual studies suggest that variably curved slopes may make it difficult to accurately read and compare values. We contribute to this discussion by formally comparing the relative readability of basic stacked area charts, ThemeRivers, streamgraphs and our own interactive technique for straightening baselines of individual streams in a ThemeRiver. We used both real-world and randomly generated datasets and covered tasks at the elementary, intermediate and overall information levels. Results indicate that the decreased distortion of the newer techniques does appear to improve their readability, with streamgraphs performing best for value comparison tasks. We also found that when a variety of tasks is expected to be performed, using the interactive version of the ThemeRiver leads to more correctness at the cost of being slower for value comparison tasks.", "title": "" } ]
scidocsrr
3bc2540a5508d18a673024ea2f41709a
Short- versus long-term duration of dual-antiplatelet therapy after coronary stenting: a randomized multicenter trial.
[ { "docid": "355720b7bbdc6d6d30987fc0dad5602e", "text": "To assess the likelihood of procedural success in patients with multivessel coronary disease undergoing percutaneous coronary angioplasty, 350 consecutive patients (1,100 stenoses) from four clinical sites were evaluated. Eighteen variables characterizing the severity and morphology of each stenosis and 18 patient-related variables were assessed at a core angiographic laboratory and at the clinical sites. Most patients had Canadian Cardiovascular Society class III or IV angina (72%) and two-vessel coronary disease (78%). Left ventricular function was generally well preserved (mean ejection fraction, 58 +/- 12%; range, 18-85%) and 1.9 +/- 1.0 stenoses per patient had attempted percutaneous coronary angioplasty. Procedural success (less than or equal to 50% final diameter stenosis in one or more stenoses and no major ischemic complications) was achieved in 290 patients (82.8%), and an additional nine patients (2.6%) had a reduction in diameter stenosis by 20% or more with a final diameter stenosis 51-60% and were without major complications. Major ischemic complications (death, myocardial infarction, or emergency bypass surgery) occurred in 30 patients (8.6%). In-hospital mortality was 1.1%. Stepwise regression analysis determined that a modified American College of Cardiology/American Heart Association Task Force (ACC/AHA) classification of the primary target stenosis (with type B prospectively divided into type B1 [one type B characteristic] and type B2 [greater than or equal to two type B characteristics]) and the presence of diabetes mellitus were the only variables independently predictive of procedural outcome (target stenosis modified ACC/AHA score; p less than 0.001 for both success and complications; diabetes mellitus: p = 0.003 for success and p = 0.016 for complications). Analysis of success and complications on a per stenosis dilated basis showed, for type A stenoses, a 92% success and a 2% complication rate; for type B1 stenoses, an 84% success and a 4% complication rate; for type B2 stenoses, a 76% success and a 10% complication rate; and for type C stenoses, a 61% success and a 21% complication rate. The subdivision into types B1 and B2 provided significantly more information in this clinically important intermediate risk group than did the standard ACC/AHA scheme. The stenosis characteristics of chronic total occlusion, high grade (80-99% diameter) stenosis, stenosis bend of more than 60 degrees, and excessive tortuosity were particularly predictive of adverse procedural outcome. This improved scheme may improve clinical decision making and provide a framework on which to base meaningful subgroup analysis in randomized trials assessing the efficacy of percutaneous coronary angioplasty.", "title": "" } ]
[ { "docid": "7e0325cb9fd5272eef74cb5e481db5f1", "text": "Familial amyloidotic polyneuropathy (FAP) is an inherited autosomal dominant disease that is commonly caused by accumulation of deposits of transthyretin (TTR) amyloid around peripheral nerves. The only effective treatment for FAP is liver transplantation. However, recent studies on TTR aggregation provide clues to the mechanism of the molecular pathogenesis of FAP and suggest new avenues for therapeutic intervention. It is increasingly recognized that there are common features of a number of protein-misfolding diseases that can lead to neurodegeneration. As for other amyloidogenic proteins, the most toxic forms of aggregated TTR are likely to be the low-molecular-mass diffusible species, and there is increasing evidence that this toxicity is mediated by disturbances in calcium homeostasis. This article reviews what is already known about the mechanism of TTR aggregation in FAP and describes how recent discoveries in other areas of amyloid research, particularly Alzheimer's disease, provide clues to the molecular pathogenesis of FAP.", "title": "" }, { "docid": "6b07c3fb97ab3a1001cf3753adb6754f", "text": "• Starting with the fact that school education has failed to become education for critical thinking and that one of the reasons for that could be in how education for critical thinking is conceptualised, this paper presents: (1) an analysis of the predominant approach to education for critical thinking through the implementation of special programs and methods, and (2) an attempt to establish different approaches to education for critical thinking. The overview and analysis of understanding education for developing critical thinking as the implementation of special programs reveal that it is perceived as a decontextualised activity, reduced to practicing individual intellectual skills. Foundations for a different approach, which could be characterised as the ‘education for critical competencies’, are found in ideas of critical pedagogy and open curriculum theory. This approach differs from the predominant approach in terms of how the nature and purpose of critical thinking and education for critical thinking are understood. In the approach of education for critical competencies, it is not sufficient to introduce special programs and methods for the development of critical thinking to the existing educational system. This approach emphasises the need to question and reconstruct the status, role, and power of pupils and teachers in the teaching process, but also in the process of curriculum development.", "title": "" }, { "docid": "653fee86af651e13e0d26fed35ef83e4", "text": "Small ducted fan autonomous vehicles have potential for several applications, especially for missions in urban environments. This paper discusses the use of dynamic inversion with neural network adaptation to provide an adaptive controller for the GTSpy, a small ducted fan autonomous vehicle based on the Micro Autonomous Systems’ Helispy. This approach allows utilization of the entire low speed flight envelope with a relatively poorly understood vehicle. A simulator model is constructed from a force and moment analysis of the vehicle, allowing for a validation of the controller in preparation for flight testing. 
Data from flight testing of the system is provided.", "title": "" }, { "docid": "9504c6c6286f6bd57e5e443d6fdcced9", "text": "Comparisons of two assessment measures for ADHD: the ADHD Behavior Checklist and the Integrated Visual and Auditory Continuous Performance Test (IVA CPT) were examined using undergraduates (n=44) randomly assigned to a control or a simulated malingerer condition and undergraduates with a valid diagnosis of ADHD (n=16). It was predicted that malingerers would successfully fake ADHD on the rating scale but not on the CPT for which they would overcompensate, scoring lower than all other groups. Analyses indicated that the ADHD Behavior Rating Scale was successfully faked for childhood and current symptoms. IVA CPT could not be faked on 81% of its scales. The CPT's impairment index results revealed: sensitivity 94%, specificity 91%, PPP 88%, NPP 95%. Results provide support for the inclusion of a CPT in assessment of adult ADHD.", "title": "" }, { "docid": "5108dd1dba48ce0369568e30dd20ca21", "text": "In this paper, we analyze neural network-based dialogue systems trained in an end-to-end manner using an updated version of the recent Ubuntu Dialogue Corpus, a dataset containing almost 1 million multi-turn dialogues, with a total of over 7 million utterances and 100 million words. This dataset is interesting because of its size, long context lengths, and technical nature; thus, it can be used to train large models directly from data with minimal feature engineering. We provide baselines in two different environments: one where models are trained to select the correct next response from a list of candidate responses, and one where models are trained to maximize the log-likelihood of a generated utterance conditioned on the context of the conversation. These are both evaluated on a recall task that we call next utterance classification (NUC), and using vector-based metrics that capture the topicality of the responses. We observe that current end-to-end models are unable to completely solve these tasks; thus, we provide a qualitative error analysis to determine the primary causes of error for end-to-end models evaluated on NUC, and examine sample utterances from the generative models. As a result of this analysis, we suggest some promising directions for future research on the Ubuntu Dialogue Corpus, which can also be applied to end-to-end dialogue systems in general. (This work extends a paper appearing in SIGDIAL (Lowe et al., 2015); it further includes results on generative dialogue models, more extensive evaluation of the retrieval models using vector-based generative metrics, and a qualitative examination of responses from the generative models and classification errors made by the Dual Encoder model. Experiments are performed on a new version of the corpus, the Ubuntu Dialogue Corpus v2, which updates the earlier dataset to add features and fix bugs and is publicly available at https://github.com/rkadlec/ubuntu-ranking-dataset-creator.)", "title": "" }, { "docid": "34b4a91dac887d6d0c7387baae9fd0a2", "text": "Robert Burns wrote: “The best laid schemes of Mice and Men oft go awry”. 
This could be considered the motto of most educational innovation. The question that arises is not so much why some innovations fail (although this is very important question), but rather why other innovations succeed? This study investigated the success factors of large-scale educational innovation projects in Dutch higher education. The research team attempted to identify success factors that might be relevant to educational innovation projects. The research design was largely qualitative, with a guided interview as the primary means of data collection, followed by data analysis and a correlation of findings with the success factors identified in the literature review. In order to pursue the research goal, a literature review of success factors was first conducted to identify existing knowledge in this area, followed by a detailed study of the educational innovation projects that have been funded by SURF Education. To obtain a list of potential success factors, existing project documentation and evaluations were reviewed and the project chairs and other important players were interviewed. Reports and evaluations by the projects themselves were reviewed to extract commonalities and differences in the factors that the projects felt were influential in their success of educational innovation. In the next phase of the project experts in the field of project management, project chairs of successful projects and evaluators/raters of projects will be asked to pinpoint factors of importance that were facilitative or detrimental to the outcome of their projects and implementation of the innovations. After completing the interviews all potential success factors will be recorded and clustered using an affinity technique. The clusters will then be labeled and clustered, creating a hierarchy of potential success factors. The project chairs will finally be asked to select the five most important success factors out of the hierarchy, and to rank their importance. This technique – the Experts’ Concept Mapping Method – is based upon Trochim’s concept mapping approach (1989a, 1989b) and was developed and perfected by Stoyanov and Kirschner (2004). Finally, the results will lead to a number of instruments as well as a functional procedure for tendering, selecting and monitoring innovative educational projects. The identification of success factors for educational innovation projects and measuring performance of projects based upon these factors are important as they can aid the development and implementation of innovation projects by explicating and making visible (and thus manageable) those success and failure factors relating to educational innovation projects in higher education. Determinants for Failure and Success of Innovation Projects: The Road to Sustainable Educational Innovation The Dutch Government has invested heavily in stimulating better and more creative use of information and communication technologies (ICT) in all forms of education. The ultimate goal of this investment is to ensure that students and teachers are equipped with the skills and knowledge required for success in the new knowledge-based economy. All stakeholders (i.e., government, industry, educational institutions, society in general) have placed high priority on achieving this goal. 
However, these highly funded projects have often resulted in either short-lived or local successes or outright failures (see De Bie,", "title": "" }, { "docid": "b9fb60fadf13304b46f87fda305f118e", "text": "Coordinated cyberattacks of power meter readings can be arranged to be undetectable by any bad data detection algorithm in the power system state estimation process. These unobservable attacks present a potentially serious threat to grid operations. Of particular interest are sparse attacks that involve the compromise of a modest number of meter readings. An efficient algorithm to find all unobservable attacks [under standard DC load flow approximations] involving the compromise of exactly two power injection meters and an arbitrary number of line power meters is presented. This requires O(n2m) flops for a power system with n buses and m line meters. If all lines are metered, there exist canonical forms that characterize all 3, 4, and 5-sparse unobservable attacks. These can be quickly detected in power systems using standard graph algorithms. Known-secure phasor measurement units [PMUs] can be used as countermeasures against an arbitrary collection of cyberattacks. Finding the minimum number of necessary PMUs is NP-hard. It is shown that p + 1 PMUs at carefully chosen buses are sufficient to neutralize a collection of p cyberattacks.", "title": "" }, { "docid": "799573bf08fb91b1ac644c979741e7d2", "text": "This short paper reports the method and the evaluation results of Casio and Shinshu University joint team for the ISBI Challenge 2017 – Skin Lesion Analysis Towards Melanoma Detection – Part 3: Lesion Classification hosted by ISIC. Our online validation score was 0.958 with melanoma classifier AUC 0.924 and seborrheic keratosis classifier AUC 0.993.", "title": "" }, { "docid": "d75f15a8b7eff7e74b95abb59cc488cc", "text": "Up to recently autonomous mobile robots were mostly designed to run within an indoor, yet partly structured and flat, environment. In rough terrain many problems arise and position tracking becomes more difficult. The robot has to deal with wheel slippage and large orientation changes. In this paper we will first present the recent developments on the off-road rover Shrimp. Then a new method, called 3D-Odometry, which extends the standard 2D odometry to the 3D space will be developed. Since it accounts for transitions, the 3D-Odometry provides better position estimates. It will certainly help to go towards real 3D navigation for outdoor robots.", "title": "" }, { "docid": "600a0b473a9396a9c098c40f83ec9273", "text": "This paper presents two W-band waveguide bandpass filters, one fabricated using laser micromachining and the other 3-D printing. Both filters are based on coupled resonators and are designed to have a Chebyshev response. The first filter is for laser micromachining and it is designed to have a compact structure allowing the whole filter to be made from a single metal workpiece. This eliminates the need to split the filter into several layers and therefore yields an enhanced performance in terms of low insertion loss and good durability. The second filter is produced from polymer resin using a stereolithography 3-D printing technique and the whole filter is plated with copper. To facilitate the plating process, the waveguide filter consists of slots on both the broadside and narrow side walls. Such slots also reduce the weight of the filter while still retaining the filter's performance in terms of insertion loss. 
Both filters are fabricated and tested and show good agreement between measurements and simulations.", "title": "" }, { "docid": "6c7e35c54c1fd1d321cc533d63db802e", "text": "We propose to demonstrate DORIS, a system that maps the schema of a Web Service automatically to the schema of a knowledge base. Given only the input type and the URL of the Web Service, DORIS executes a few probing calls, and deduces an intensional description of the Web service. In addition, she computes an XSLT transformation function that can transform a Web Service call result in XML to RDF facts in the target schema. Users will be able to play with DORIS, and to see how real-world Web Services can be mapped to large knowledge bases of the Semantic Web.", "title": "" }, { "docid": "88c592bdd7bb9c9348545734a9508b7b", "text": "During the past few years, enterprises have been increasingly aggressive in moving mission-critical and performance-sensitive applications to the cloud, while at the same time many new mobile, social, and analytics applications are directly developed and operated on cloud computing platforms. These two movements are encouraging the shift of the value proposition of cloud computing from cost reduction to simultaneous agility and optimization. These requirements (agility and optimization) are driving the recent disruptive trend of software defined computing, for which the entire computing infrastructure (compute, storage and network) is becoming software defined and dynamically programmable. The key elements within software defined environments include capability-based resource abstraction, goal-based and policy-based workload definition, and outcome-based continuous mapping of the workload to the available resources. Furthermore, software defined environments provide the tooling and capabilities to compose workloads from existing components that are then continuously and autonomously mapped onto the underlying programmable infrastructure. These elements enable software defined environments to achieve agility, efficiency, and continuous outcome-optimized provisioning and management, plus continuous assurance for resiliency and security. This paper provides an overview and introduction to the key elements and challenges of software defined environments.", "title": "" }, { "docid": "082630a33c0cc0de0e60a549fc57d8e8", "text": "Agricultural monitoring, especially in developing countries, can help prevent famine and support humanitarian efforts. A central challenge is yield estimation, i.e., predicting crop yields before harvest. We introduce a scalable, accurate, and inexpensive method to predict crop yields using publicly available remote sensing data. Our approach improves existing techniques in three ways. First, we forego hand-crafted features traditionally used in the remote sensing community and propose an approach based on modern representation learning ideas. We also introduce a novel dimensionality reduction technique that allows us to train a Convolutional Neural Network or Long-short Term Memory network and automatically learn useful features even when labeled training data are scarce. Finally, we incorporate a Gaussian Process component to explicitly model the spatio-temporal structure of the data and further improve accuracy. 
We evaluate our approach on county-level soybean yield prediction in the U.S. and show that it outperforms competing techniques.", "title": "" }, { "docid": "2f23d51ffd54a6502eea07883709d016", "text": "Named entity recognition (NER) is a popular domain of natural language processing. For this reason, many tools exist to perform this task. Amongst other points, they differ in the processing method they rely upon, the entity types they can detect, the nature of the text they can handle, and their input/output formats. This makes it difficult for a user to select an appropriate NER tool for a specific situation. In this article, we try to answer this question in the context of biographic texts. For this matter, we first constitute a new corpus by annotating 247 Wikipedia articles. We then select 4 publicly available, well known and free for research NER tools for comparison: Stanford NER, Illinois NET, OpenCalais NER WS and Alias-i LingPipe. We apply them to our corpus, assess their performances and compare them. When considering overall performances, a clear hierarchy emerges: Stanford has the best results, followed by LingPipe, Illionois and OpenCalais. However, a more detailed evaluation performed relatively to entity types and article categories highlights the fact their performances are diversely influenced by those factors. This complementarity opens an interesting perspective regarding the combination of these individual tools in order to improve performance.", "title": "" }, { "docid": "22719028c913aa4d0407352caf185d7a", "text": "Although the fact that genetic predisposition and environmental exposures interact to shape development and function of the human brain and, ultimately, the risk of psychiatric disorders has drawn wide interest, the corresponding molecular mechanisms have not yet been elucidated. We found that a functional polymorphism altering chromatin interaction between the transcription start site and long-range enhancers in the FK506 binding protein 5 (FKBP5) gene, an important regulator of the stress hormone system, increased the risk of developing stress-related psychiatric disorders in adulthood by allele-specific, childhood trauma–dependent DNA demethylation in functional glucocorticoid response elements of FKBP5. This demethylation was linked to increased stress-dependent gene transcription followed by a long-term dysregulation of the stress hormone system and a global effect on the function of immune cells and brain areas associated with stress regulation. This identification of molecular mechanisms of genotype-directed long-term environmental reactivity will be useful for designing more effective treatment strategies for stress-related disorders.", "title": "" }, { "docid": "eb44e4ac9f1a3345df85ced155909661", "text": "Domain adaptation (DA) attempts to enhance the generalization capability of classifier through narrowing the gap of the distributions across domains. This paper focuses on unsupervised domain adaptation where labels are not available in target domain. Most existing approaches explore the domaininvariant features shared by domains but ignore the discriminative information of source domain. To address this issue, we propose a discriminative domain adaptation method (DDA) to reduce domain shift by seeking a common latent subspace jointly using supervised sparse coding (SSC) and discriminative regularization term. 
Particularly, DDA adapts SSC to yield discriminative coefficients of target data and further unites with discriminative regularization term to induce a common latent subspace across domains. We show that both strategies can boost the ability of transferring knowledge from source to target domain. Experiments on two real world datasets demonstrate the effectiveness of our proposed method over several existing state-of-the-art domain adaptation methods.", "title": "" }, { "docid": "fde2aefec80624ff4bc21d055ffbe27b", "text": "Object detector with region proposal networks such as Fast/Faster R-CNN [1, 2] have shown the state-of-the art performance on several benchmarks. However, they have limited success for detecting small objects. We argue the limitation is related to insufficient performance of Fast R-CNN block in Faster R-CNN. In this paper, we propose a refining block for Fast R-CNN. We further merge the block and Faster R-CNN into a single network (RF-RCNN). The RF-RCNN was applied on plate and human detection in RoadView image that consists of high resolution street images (over 30M pixels). As a result, the RF-RCNN showed great improvement over the Faster-RCNN.", "title": "" }, { "docid": "2a6ccc94e3f3a9beb2992da1e225b720", "text": "This paper proposes a new design of SPOKE-type PM brushless direct current (BLDC) motor without using neodymium PM (Nd-PM). The proposed model has an improved output characteristic as it uses the properties of the magnetic flux effect of the SPOKE-type motor with an additional pushing assistant magnet and subassistant magnet in the shape of spoke. In this paper, ferrite PM (Fe-PM) is used instead of Nd-PM. First, the air-gap flux density and backelectromotive force (BEMF) are obtained based on the field model. Second, the analytical expressions of magnet field strength and magnet flux density are obtained in the air gap produced by Fe-PM. The developed analytical model is obtained by solving the magnetic scalar potential. Finally, the air-gap field distribution and BEMF of SPOKE-type motor are analyzed. The analysis works for internal rotor motor topologies. This paper validates results of the analytical model by finite-element analysis for wing-shaped SPOKE-type BLDC motors.", "title": "" }, { "docid": "0ff3ccdf834b8264cada634049389c9c", "text": "Many applications today need to manage large data sets with uncertainties. In this paper we describe the foundations of managing data where the uncertainties are quantified as probabilities. We review the basic definitions of the probabilistic data model, present some fundamental theoretical result for query evaluation on probabilistic databases, and discuss several challenges, open problems, and research directions.", "title": "" } ]
scidocsrr
74521a8d14d1abf8bd9f6cd44131653e
LCDet: Low-Complexity Fully-Convolutional Neural Networks for Object Detection in Embedded Systems
[ { "docid": "065ca3deb8cb266f741feb67e404acb5", "text": "Recent research on deep convolutional neural networks (CNNs) has focused primarily on improving accuracy. For a given accuracy level, it is typically possible to identify multiple CNN architectures that achieve that accuracy level. With equivalent accuracy, smaller CNN architectures offer at least three advantages: (1) Smaller CNNs require less communication across servers during distributed training. (2) Smaller CNNs require less bandwidth to export a new model from the cloud to an autonomous car. (3) Smaller CNNs are more feasible to deploy on FPGAs and other hardware with limited memory. To provide all of these advantages, we propose a small CNN architecture called SqueezeNet. SqueezeNet achieves AlexNet-level accuracy on ImageNet with 50x fewer parameters. Additionally, with model compression techniques, we are able to compress SqueezeNet to less than 0.5MB (510× smaller than AlexNet). The SqueezeNet architecture is available for download here: https://github.com/DeepScale/SqueezeNet", "title": "" } ]
[ { "docid": "89e5162c09c873f03c40aa2c64f85924", "text": "The aim of the study was to assess the efficacy of the standardised extract SHR-5 of roots of Rhodiola Rosea L. in the treatment of individuals suffering from stress-related fatigue. The phase III clinical trial took the form of a randomised, double-blind, placebo-controlled study with parallel groups. Participants, males and females aged between 20 and 55 years, were selected according to the Swedish National Board of Health and Welfare diagnostic criteria for fatigue syndrome. A total of 60 individuals were randomised into two groups, one ( N = 30) of which received four tablets daily of SHR-5 extract (576 mg extract/day), while a second ( N = 30) received four placebo tablets daily. The effects of the extract with respect to quality of life (SF-36 questionnaire), symptoms of fatigue (Pines' burnout scale), depression (Montgomery -Asberg depression rating scale - MADRS), attention (Conners' computerised continuous performance test II - CCPT II), and saliva cortisol response to awakening were assessed on day 1 and after 28 days of medication. Data were analysed by between-within analyses of variance. No serious side effects that could be attributed to the extract were reported. Significant post-treatment improvements were observed for both groups (placebo effect) in Pines' burnout scale, mental health (SF-36), and MADRS and in several CCPT II indices of attention, namely, omissions, commissions, and Hit RT SE. When the two groups were compared, however, significant effects of the SHR-5 extract in comparison with the placebo were observed in Pines' burnout scale and the CCPT II indices omissions, Hit RT SE, and variability. Pre- VERSUS post-treatment cortisol responses to awakening stress were significantly different in the treatment group compared with the control group. It is concluded that repeated administration of R. ROSEA extract SHR-5 exerts an anti-fatigue effect that increases mental performance, particularly the ability to concentrate, and decreases cortisol response to awakening stress in burnout patients with fatigue syndrome.", "title": "" }, { "docid": "9d9ab636a48fb67675d14fff5255c93a", "text": "Much like sentences are composed of words, words themselves are composed of smaller units. For example, the English word questionably can be analyzed as question+able+ly. However, this structural decomposition of the word does not directly give us a semantic representation of the word’s meaning. Since morphology obeys the principle of compositionality, the semantics of the word can be systematically derived from the meaning of its parts. In this work, we propose a novel probabilistic model of word formation that captures both the analysis of a word w into its constituent segments and the synthesis of the meaning of w from the meanings of those segments. Our model jointly learns to segment words into morphemes and compose distributional semantic vectors of those morphemes. We experiment with the model on English CELEX data and German DErivBase (Zeller et al., 2013) data. We show that jointly modeling semantics increases both segmentation accuracy and morpheme F1 by between 3% and 5%. Additionally, we investigate different models of vector composition, showing that recurrent neural networks yield an improvement over simple additive models. 
Finally, we study the degree to which the representations correspond to a linguist’s notion of morphological productivity.", "title": "" }, { "docid": "968ee8726afb8cc82d629ac8afabf3db", "text": "Online communities are increasingly important to organizations and the general public, but there is little theoretically based research on what makes some online communities more successful than others. In this article, we apply theory from the field of social psychology to understand how online communities develop member attachment, an important dimension of community success. We implemented and empirically tested two sets of community features for building member attachment by strengthening either group identity or interpersonal bonds. To increase identity-based attachment, we gave members information about group activities and intergroup competition, and tools for group-level communication. To increase bond-based attachment, we gave members information about the activities of individual members and interpersonal similarity, and tools for interpersonal communication. Results from a six-month field experiment show that participants’ visit frequency and self-reported attachment increased in both conditions. Community features intended to foster identity-based attachment had stronger effects than features intended to foster bond-based attachment. Participants in the identity condition with access to group profiles and repeated exposure to their group’s activities visited their community twice as frequently as participants in other conditions. The new features also had stronger effects on newcomers than on old-timers. This research illustrates how theory from the social science literature can be applied to gain a more systematic understanding of online communities and how theory-inspired features can improve their success. 1", "title": "" }, { "docid": "82d4b2aa3e3d3ec10425c6250268861c", "text": "Deep Neural Networks (DNNs) are typically trained by backpropagation in a batch learning setting, which requires the entire training data to be made available prior to the learning task. This is not scalable for many real-world scenarios where new data arrives sequentially in a stream form. We aim to address an open challenge of “Online Deep Learning” (ODL) for learning DNNs on the fly in an online setting. Unlike traditional online learning that often optimizes some convex objective function with respect to a shallow model (e.g., a linear/kernel-based hypothesis), ODL is significantly more challenging since the optimization of the DNN objective function is non-convex, and regular backpropagation does not work well in practice, especially for online learning settings. In this paper, we present a new online deep learning framework that attempts to tackle the challenges by learning DNN models of adaptive depth from a sequence of training data in an online learning setting. In particular, we propose a novel Hedge Backpropagation (HBP) method for online updating the parameters of DNN effectively, and validate the efficacy of our method on large-scale data sets, including both stationary and concept drifting scenarios.", "title": "" }, { "docid": "04d5824991ada6194f3028a900d7f31b", "text": "In this work, we present a solution to real-time monocular dense mapping. A tightly-coupled visual-inertial localization module is designed to provide metric and high-accuracy odometry. 
A motion stereo algorithm is proposed to take the video input from one camera to produce local depth measurements with semi-global regularization. The local measurements are then integrated into a global map for noise filtering and map refinement. The global map obtained is able to support navigation and obstacle avoidance for aerial robots through our indoor and outdoor experimental verification. Our system runs at 10Hz on an Nvidia Jetson TX1 by properly distributing computation to CPU and GPU. Through onboard experiments, we demonstrate its ability to close the perception-action loop for autonomous aerial robots. We release our implementation as open-source software1.", "title": "" }, { "docid": "42bc10578e76a0d006ee5d11484b1488", "text": "In this paper, we present a wrapper-based acoustic group feature selection system for the INTERSPEECH 2015 Computational Paralinguistics Challenge (ComParE) 2015, Eating Condition (EC) Sub-challenge. The wrapper-based method has two components: the feature subset evaluation and the feature space search. The feature subset evaluation is performed using Support Vector Machine (SVM) classifiers. The wrapper method combined with complex algorithms such as SVM is computationally intensive. To address this, the feature space search uses Best Incremental Ranked Subset (BIRS), a fast and efficient algorithm. Moreover, we investigate considering the feature space in meaningful groups rather than individually. The acoustic feature space is partitioned into groups with each group representing a Low Level Descriptor (LLD). This partitioning reduces the time complexity of the search algorithm and makes the problem more tractable while attempting to gain insight into the relevant acoustic feature groups. Our wrapper-based system achieves improvement over the challenge baseline on the EC Sub-challenge test set using a variant of BIRS algorithm and LLD groups.", "title": "" }, { "docid": "9787ae39c27f9cfad2dbd29779bb5f36", "text": "Compressive sensing (CS) techniques offer a framework for the detection and allocation of sparse signals with a reduced number of samples. Today, modern radar systems operate with high bandwidths—demanding high sample rates according to the Shannon–Nyquist theorem—and a huge number of single elements for phased array consumption and costs of radar systems. There is only a small number of publications addressing the application of CS to radar, leaving several open questions. This paper addresses some aspects as a further step to CS-radar by presenting generic system architectures and implementation considerations. It is not the aim of this paper to investigate numerically efficient algorithms but to point to promising applications as well as arising problems. Three possible applications are considered: pulse compression, radar imaging, and air space surveillance with array antennas. Some simulation results are presented and enriched by the evaluation of real data acquired by an experimental radar system of Fraunhofer FHR. & 2009 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "7b02c36cef0c195d755b6cc1c7fbda2e", "text": "Content based object retrieval across large scale surveillance video dataset is a significant and challenging task, in which learning an effective compact object descriptor plays a critical role. In this paper, we propose an efficient deep compact descriptor with bagging auto-encoders. 
Specifically, we take advantage of discriminative CNN to extract efficient deep features, which not only involve rich semantic information but also can filter background noise. Besides, to boost the retrieval speed, auto-encoders are used to map the high-dimensional real-valued CNN features into short binary codes. Considering the instability of auto-encoder, we adopt a bagging strategy to fuse multiple auto-encoders to reduce the generalization error, thus further improving the retrieval accuracy. In addition, bagging is easy for parallel computing, so retrieval efficiency can be guaranteed. Retrieval experimental results on the dataset of 100k visual objects extracted from multi-camera surveillance videos demonstrate the effectiveness of the proposed deep compact descriptor.", "title": "" }, { "docid": "349b6f11d60d851a23d2d6f9ebe88e81", "text": "In the hybrid approach, neural network output directly serves as hidden Markov model (HMM) state posterior probability estimates. In contrast to this, in the tandem approach neural network output is used as input features to improve classic Gaussian mixture model (GMM) based emission probability estimates. This paper shows that GMM can be easily integrated into the deep neural network framework. By exploiting its equivalence with the log-linear mixture model (LMM), GMM can be transformed to a large softmax layer followed by a summation pooling layer. Theoretical and experimental results indicate that the jointly trained and optimally chosen GMM and bottleneck tandem features cannot perform worse than a hybrid model. Thus, the question “hybrid vs. tandem” simplifies to optimizing the output layer of a neural network. Speech recognition experiments are carried out on a broadcast news and conversations task using up to 12 feed-forward hidden layers with sigmoid and rectified linear unit activation functions. The evaluation of the LMM layer shows recognition gains over the classic softmax output.", "title": "" }, { "docid": "43dbdd6590c51b26c12512291a3a2cdb", "text": "The algorithmic challenge of maximizing information diffusion through word-of-mouth processes in social networks has been heavily studied in the past decade. While there has been immense progress and an impressive arsenal of techniques has been developed, the algorithmic frameworks make idealized assumptions regarding access to the network that can often result in poor performance of state-of-the-art techniques. In this paper we introduce a new framework which we call Adaptive Seeding. The framework is a two-stage stochastic optimization model designed to leverage the potential that typically lies in neighboring nodes of arbitrary samples of social networks. Our main result is an algorithm which provides a constant factor approximation to the optimal adaptive policy for any influence function in the Triggering model.", "title": "" }, { "docid": "8655653e5a4a64518af8da996ac17c25", "text": "Although a rigorous review of literature is essential for any research endeavor, technical solutions that support systematic literature review approaches are still scarce. Systematic literature searches in particular are often described as complex, error-prone and time-consuming, due to the prevailing lack of adequate technical support. In this study, we therefore aim to learn how to design information systems that effectively facilitate systematic literature searches. 
Using the design science research paradigm, we develop design principles that intend to increase comprehensiveness, precision, and reproducibility of systematic literature searches. The design principles are derived through multiple design cycles that include the instantiation of the principles in form of a prototype web application and qualitative evaluations. Our design knowledge could serve as a foundation for future research on systematic search systems and support the development of innovative information systems that, eventually, improve the quality and efficiency of systematic literature reviews.", "title": "" }, { "docid": "4ce21cf8fa30a8019c4b91e9d9f2c6e3", "text": "We prove that every simple polygon can be made as a (2D) pop-up card/book that opens to any desired angle between 0 and 360◦. More precisely, given a simple polygon attached to the two walls of the open pop-up, our polynomial-time algorithm subdivides the polygon into a single-degree-of-freedom linkage structure, such that closing the pop-up flattens the linkage without collision. This result solves an open problem of Hara and Sugihara from 2009. We also show how to obtain a more efficient construction for the special case of orthogonal polygons, and how to make 3D orthogonal polyhedra, from pop-ups that open to 90◦, 180◦, 270◦, or 360◦. 1998 ACM Subject Classification I.3.5 Computational Geometry and Object Modeling", "title": "" }, { "docid": "7063d3eb38008bcd344f0ae1508cca61", "text": "The fitness of an evolutionary individual can be understood in terms of its two basic components: survival and reproduction. As embodied in current theory, trade-offs between these fitness components drive the evolution of life-history traits in extant multicellular organisms. Here, we argue that the evolution of germ-soma specialization and the emergence of individuality at a new higher level during the transition from unicellular to multicellular organisms are also consequences of trade-offs between the two components of fitness-survival and reproduction. The models presented here explore fitness trade-offs at both the cell and group levels during the unicellular-multicellular transition. When the two components of fitness negatively covary at the lower level there is an enhanced fitness at the group level equal to the covariance of components at the lower level. We show that the group fitness trade-offs are initially determined by the cell level trade-offs. However, as the transition proceeds to multicellularity, the group level trade-offs depart from the cell level ones, because certain fitness advantages of cell specialization may be realized only by the group. The curvature of the trade-off between fitness components is a basic issue in life-history theory and we predict that this curvature is concave in single-celled organisms but becomes increasingly convex as group size increases in multicellular organisms. We argue that the increasingly convex curvature of the trade-off function is driven by the initial cost of reproduction to survival which increases as group size increases. 
To illustrate the principles and conclusions of the model, we consider aspects of the biology of the volvocine green algae, which contain both unicellular and multicellular members.", "title": "" }, { "docid": "5474d000acf6c20708ed73b5a7e38a0b", "text": "The primary objective of the research is to estimate the dependence between hair mercury content, hair selenium, mercury-to-selenium ratio, serum lipid spectrum, and gamma-glutamyl transferase (GGT) activity in 63 adults (40 men and 23 women). Serum triglyceride (TG) concentration in the high-mercury group significantly exceeded the values obtained for low- and medium-mercury groups by 72 and 42 %, respectively. Serum GGT activity in the examinees from high-Hg group significantly exceeded the values of the first and the second groups by 75 and 28 %, respectively. Statistical analysis of the male sample revealed similar dependences. Surprisingly, no significant changes in the parameters analyzed were detected in the female sample. In all analyzed samples, hair mercury was not associated with hair selenium concentrations. Significant correlation between hair mercury content and serum TG concentration (r = 0.531) and GGT activity (r = 0.524) in the general sample of the examinees was detected. The respective correlations were observed in the male sample. Hair mercury-to-selenium ratios significantly correlated with body weight (r = 0.310), body mass index (r = 0.250), serum TG (r = 0.389), atherogenic index (r = 0.257), and GGT activity (r = 0.393). The same correlations were observed in the male sample. Hg/Se ratio in women did not correlate with the analyzed parameters. Generally, the results of the current study show the following: (1) hair mercury is associated with serum TG concentration and GGT activity in men, (2) hair selenium content is not related to hair mercury concentration, and (3) mercury-to-selenium ratio correlates with lipid spectrum parameters and GGT activity.", "title": "" }, { "docid": "09c83ff6b43d66f160887f50711a1a6a", "text": "Recently, there has been a significant increase in the popularity of anonymous social media sites like Whisper and Secret. Unlike traditional social media sites like Facebook and Twitter, posts on anonymous social media sites are not associated with well-defined user identities or profiles. In this study, our goals are two-fold: (i) to understand the nature (sensitivity, types) of content posted on anonymous social media sites and (ii) to investigate the differences between content posted on anonymous and non-anonymous social media sites like Twitter. To this end, we gather and analyze extensive content traces from Whisper (anonymous) and Twitter (non-anonymous) social media sites. We introduce the notion of anonymity sensitivity of a social media post, which captures the extent to which users think the post should be anonymous. We also propose a human annotator based methodology to measure the same for Whisper and Twitter posts. Our analysis reveals that anonymity sensitivity of most whispers (unlike tweets) is not binary. Instead, most whispers exhibit many shades or different levels of anonymity. We also find that the linguistic differences between whispers and tweets are so significant that we could train automated classifiers to distinguish between them with reasonable accuracy. 
Our findings shed light on human behavior in anonymous media systems that lack the notion of an identity and they have important implications for the future designs of such systems.", "title": "" }, { "docid": "4a84fabb0b4edefc1850940ed2081f47", "text": "Given a large overcomplete dictionary of basis vectors, the goal is to simultaneously represent L>1 signal vectors using coefficient expansions marked by a common sparsity profile. This generalizes the standard sparse representation problem to the case where multiple responses exist that were putatively generated by the same small subset of features. Ideally, the associated sparse generating weights should be recovered, which can have physical significance in many applications (e.g., source localization). The generic solution to this problem is intractable and, therefore, approximate procedures are sought. Based on the concept of automatic relevance determination, this paper uses an empirical Bayesian prior to estimate a convenient posterior distribution over candidate basis vectors. This particular approximation enforces a common sparsity profile and consistently places its prominent posterior mass on the appropriate region of weight-space necessary for simultaneous sparse recovery. The resultant algorithm is then compared with multiple response extensions of matching pursuit, basis pursuit, FOCUSS, and Jeffreys prior-based Bayesian methods, finding that it often outperforms the others. Additional motivation for this particular choice of cost function is also provided, including the analysis of global and local minima and a variational derivation that highlights the similarities and differences between the proposed algorithm and previous approaches.", "title": "" }, { "docid": "80948f6534fd73a4a12af93cfff3084f", "text": "The ubiquity of location enabled devices has resulted in a wide proliferation of location based applications and services. To handle the growing scale, database management systems driving such location based services (LBS) must cope with high insert rates for location updates of millions of devices, while supporting efficient real-time analysis on latest location. Traditional DBMSs, equipped with multi-dimensional index structures, can efficiently handle spatio-temporal data. However, popular open source relational database systems are overwhelmed by the high insertion rates, real-time querying requirements, and terabytes of data that these systems must handle. On the other hand, Key-value stores can effectively support large scale operation, but do not natively support multi-attribute accesses needed to support the rich querying functionality essential for the LBSs. We present MD-HBase, a scalable data management system for LBSs that bridges this gap between scale and functionality. Our approach leverages a multi-dimensional index structure layered over a Key-value store. The underlying Key-value store allows the system to sustain high insert throughput and large data volumes, while ensuring fault-tolerance, and high availability. On the other hand, the index layer allows efficient multi-dimensional query processing. We present the design of MD-HBase that builds two standard index structuresâ€\"the K-d tree and the Quad treeâ€\"over a range partitioned Key-value store. 
Our prototype implementation using HBase, a standard open-source Key-value store, can handle hundreds of thousands of inserts per second using a modest 16 node cluster, while efficiently processing multidimensional range queries and nearest neighbor queries in real-time with response times as low as hundreds of milliseconds.", "title": "" }, { "docid": "737a7c63bab1a6688ec280d5d1abc7b5", "text": "Medicine continues to struggle in its approaches to numerous common subjective pain syndromes that lack objective signs and remain treatment resistant. Foremost among these are migraine, fibromyalgia, and irritable bowel syndrome, disorders that may overlap in their affected populations and whose sufferers have all endured the stigma of a psychosomatic label, as well as the failure of endless pharmacotherapeutic interventions with substandard benefit. The commonality in symptomatology in these conditions displaying hyperalgesia and central sensitization with possible common underlying pathophysiology suggests that a clinical endocannabinoid deficiency might characterize their origin. Its base hypothesis is that all humans have an underlying endocannabinoid tone that is a reflection of levels of the endocannabinoids, anandamide (arachidonylethanolamide), and 2-arachidonoylglycerol, their production, metabolism, and the relative abundance and state of cannabinoid receptors. Its theory is that in certain conditions, whether congenital or acquired, endocannabinoid tone becomes deficient and productive of pathophysiological syndromes. When first proposed in 2001 and subsequently, this theory was based on genetic overlap and comorbidity, patterns of symptomatology that could be mediated by the endocannabinoid system (ECS), and the fact that exogenous cannabinoid treatment frequently provided symptomatic benefit. However, objective proof and formal clinical trial data were lacking. Currently, however, statistically significant differences in cerebrospinal fluid anandamide levels have been documented in migraineurs, and advanced imaging studies have demonstrated ECS hypofunction in post-traumatic stress disorder. Additional studies have provided a firmer foundation for the theory, while clinical data have also produced evidence for decreased pain, improved sleep, and other benefits to cannabinoid treatment and adjunctive lifestyle approaches affecting the ECS.", "title": "" }, { "docid": "ab7184c576396a1da32c92093d606a53", "text": "Power electronics has progressively gained an important status in power generation, distribution, and consumption. With more than 70% of electricity processed through power electronics, recent research endeavors to improve the reliability of power electronic systems to comply with more stringent constraints on cost, safety, and availability in various applications. This paper serves to give an overview of the major aspects of reliability in power electronics and to address the future trends in this multidisciplinary research direction. The ongoing paradigm shift in reliability research is presented first. Then, the three major aspects of power electronics reliability are discussed, respectively, which cover physics-of-failure analysis of critical power electronic components, state-of-the-art design for reliability process and robustness validation, and intelligent control and condition monitoring to achieve improved reliability under operation. 
Finally, the challenges and opportunities for achieving more reliable power electronic systems in the future are discussed.", "title": "" }, { "docid": "38c32734ecc5d0e1c3bb30f97f9c9798", "text": "Dengue has emerged as an international public health problem. Reasons for the resurgence of dengue in the tropics and subtropics are complex and include unprecedented urbanization with substandard living conditions, lack of vector control, virus evolution, and international travel. Of all these factors, urbanization has probably had the most impact on the amplification of dengue within a given country, and travel has had the most impact for the spread of dengue from country to country and continent to continent. Epidemics of dengue, their seasonality, and oscillations over time are reflected by the epidemiology of dengue in travelers. Sentinel surveillance of travelers could augment existing national public health surveillance systems.", "title": "" } ]
scidocsrr
f9a7a3ce93a3015e94750b0027b20eb1
Directions in Blockchain Data Management and Analytics
[ { "docid": "bf87a4c68912f1de3492dac098f4fc17", "text": "In this paper, we demonstrate a blockchain-based solution for transparently managing and analyzing data in a pay-as-you-go car insurance application. This application allows drivers who rarely use cars to only pay insurance premium for particular trips they would like to travel. One of the key challenges from database perspective is how to ensure all the data pertaining to the actual trip and premium payment made by the users are transparently recorded so that every party in the insurance contract including the driver, the insurance company, and the financial institution is confident that the data are tamper-proof and traceable. \n Another challenge from information retrieval perspective is how to perform entity matching and pattern matching on customer data as well as their trip and claim history recorded on the blockchain for intelligent fraud detection. Last but not least, the drivers' trip history, once have been collected sufficiently, can be much valuable for the insurance company to do offline analysis and build statistics on past driving behaviour and past vehicle runtime. These statistics enable the insurance company to offer the users with transparent and individualized insurance quotes. Towards this end, we develop a blockchain-based solution for micro-insurance applications that transparently keeps records and executes smart contracts depending on runtime conditions while also connecting with off-chain analytic databases.", "title": "" }, { "docid": "1315247aa0384097f5f9e486bce09bd4", "text": "We give an overview of the scripting languages used in existing cryptocurrencies, and in particular we review in some detail the scripting languages of Bitcoin, Nxt and Ethereum, in the context of a high-level overview of Distributed Ledger Technology and cryptocurrencies. We survey different approaches, and give an overview of critiques of existing languages. We also cover technologies that might be used to underpin extensions and innovations in scripting and contracts, including technologies for verification, such as zero knowledge proofs, proof-carrying code and static analysis, as well as approaches to making systems more efficient, e.g. Merkelized Abstract Syntax Trees.", "title": "" }, { "docid": "5ed24bc652901423b5f2922c41b2702b", "text": "We put forward a new framework that makes it possible to re-write or compress the content of any number of blocks in decentralized services exploiting the blockchain technology. As we argue, there are several reasons to prefer an editable blockchain, spanning from the necessity to remove inappropriate content and the possibility to support applications requiring re-writable storage, to \"the right to be forgotten.\" Our approach generically leverages so-called chameleon hash functions (Krawczyk and Rabin, NDSS '00), which allow determining hash collisions efficiently, given a secret trapdoor information. We detail how to integrate a chameleon hash function in virtually any blockchain-based technology, for both cases where the power of redacting the blockchain content is in the hands of a single trusted entity and where such a capability is distributed among several distrustful parties (as is the case with Bitcoin). We also report on a proof-of-concept implementation of a redactable blockchain, building on top of Nakamoto's Bitcoin core. 
The prototype only requires minimal changes to the way current client software interprets the information stored in the blockchain and to the current blockchain, block, or transaction structures. Moreover, our experiments show that the overhead imposed by a redactable blockchain is small compared to the case of an immutable one.", "title": "" } ]
[ { "docid": "c7eceedbb7c6665dca1db772a22452dc", "text": "This paper proposes a quadruped walking robot that has high performance as a working machine. This robot is needed for various tasks controlled by tele-operation, especially for humanitarian mine detection and removal. Since there are numerous personnel landmines that are still in place from many wars, it is desirable to provide a safe and inexpensive tool that civilians can use to remove those mines. The authors have been working on the concept of the humanitarian demining robot systems for 4 years and have performed basic experiments with the Ž rst prototype VK-I using the modiŽ ed quadruped walking robot, TITAN-VIII. After those experiments, it was possible to reŽ ne some concepts and now the new robot has a tool (end-effector)changing system on its back, so that by utilizing the legs as manipulation arms and connecting various tools to the foot, it can perform mine detection and removal tasks. To accomplish these tasks, we developed various end-effectors that can be attached to the working leg. In this paper we will discuss the mechanical design of the new walking robot called TITAN-IX to be applied to the new system VK-II.", "title": "" }, { "docid": "7ba741da041825e8047c2a4efeb1e82c", "text": "OBJECTIVE\nThe objective of this study was to investigate the impact of two different commercially available dental implants on osseointegration. The surfaces were sandblasting and acid etching (Group 1) and sandblasting and acid etching, then maintained in an isotonic solution of 0.9% sodium chloride (Group 2).\n\n\nMATERIAL AND METHODS\nX-ray photoelectron spectroscopy (XPS) was employed for surface chemistry analysis. Surface morphology and topography was investigated by scanning electron microscopy (SEM) and confocal microscopy (CM), respectively. Contact angle analysis (CAA) was employed for wetting evaluation. Bone-implant-contact (BIC) and bone area fraction occupied (BAFO) analysis were performed on thin sections (30 μm) 14 and 28 days after the installation of 10 implants from each group (n=20) in rabbits' tibias. Statistical analysis was performed by ANOVA at the 95% level of significance considering implantation time and implant surface as independent variables.\n\n\nRESULTS\nGroup 2 showed 3-fold less carbon on the surface and a markedly enhanced hydrophilicity compared to Group 1 but a similar surface roughness (p>0.05). BIC and BAFO levels in Group 2 at 14 days were similar to those in Group 1 at 28 days. After 28 days of installation, BIC and BAFO measurements of Group 2 were approximately 1.5-fold greater than in Group 1 (p<0.05).\n\n\nCONCLUSION\nThe surface chemistry and wettability implants of Group 2 accelerate osseointegration and increase the area of the bone-to-implant interface when compared to those of Group 1.", "title": "" }, { "docid": "fc9fe094b3e46a85b7564a89730347fd", "text": "We present a design study of the TensorFlow Graph Visualizer, part of the TensorFlow machine intelligence platform. This tool helps users understand complex machine learning architectures by visualizing their underlying dataflow graphs. The tool works by applying a series of graph transformations that enable standard layout techniques to produce a legible interactive diagram. To declutter the graph, we decouple non-critical nodes from the layout. To provide an overview, we build a clustered graph using the hierarchical structure annotated in the source code. 
To support exploration of nested structure on demand, we perform edge bundling to enable stable and responsive cluster expansion. Finally, we detect and highlight repeated structures to emphasize a model's modular composition. To demonstrate the utility of the visualizer, we describe example usage scenarios and report user feedback. Overall, users find the visualizer useful for understanding, debugging, and sharing the structures of their models.", "title": "" }, { "docid": "4373b838d10ac77127c3a7021fe4534c", "text": "Fine-grained recognition concerns categorization at sub-ordinate levels, where the distinction between object classes is highly local. Compared to basic level recognition, fine-grained categorization can be more challenging as there are in general less data and fewer discriminative features. This necessitates the use of stronger prior for feature selection. In this work, we include humans in the loop to help computers select discriminative features. We introduce a novel online game called \"Bubbles\" that reveals discriminative features humans use. The player's goal is to identify the category of a heavily blurred image. During the game, the player can choose to reveal full details of circular regions (\"bubbles\"), with a certain penalty. With proper setup the game generates discriminative bubbles with assured quality. We next propose the \"Bubble Bank\" algorithm that uses the human selected bubbles to improve machine recognition performance. Experiments demonstrate that our approach yields large improvements over the previous state of the art on challenging benchmarks.", "title": "" }, { "docid": "23afac6bd3ed34fc0c040581f630c7bd", "text": "Automatic Facial Expression Recognition and Analysis, in particular FACS Action Unit (AU) detection and discrete emotion detection, has been an active topic in computer science for over two decades. Standardisation and comparability has come some way; for instance, there exist a number of commonly used facial expression databases. However, lack of a common evaluation protocol and lack of sufficient details to reproduce the reported individual results make it difficult to compare systems to each other. This in turn hinders the progress of the field. A periodical challenge in Facial Expression Recognition and Analysis would allow this comparison in a fair manner. It would clarify how far the field has come, and would allow us to identify new goals, challenges and targets. In this paper we present the first challenge in automatic recognition of facial expressions to be held during the IEEE conference on Face and Gesture Recognition 2011, in Santa Barbara, California. Two sub-challenges are defined: one on AU detection and another on discrete emotion detection. It outlines the evaluation protocol, the data used, and the results of a baseline method for the two sub-challenges.", "title": "" }, { "docid": "c809ef0984855e377bf241ed8a7aa7eb", "text": "Priapism of the clitoris is a rare entity. A case of painful priapism is reported in a patient who had previously suffered a radical cystectomy for bladder carcinoma pT3-GIII, followed by local recurrence in the pelvis. From a symptomatic point of view she showed a good response to conservative treatment (analgesics and anxiolytics), as she refused surgical treatment. She survived 6 months from the recurrence, and died with lung metastases. The priapism did not recur. 
The physiopathological mechanisms involved in the process are discussed and the literature reviewed.", "title": "" }, { "docid": "6505cf8aa0202f31b980087159b5006b", "text": "Dermatophytes, a group of keratinophilic fungi thriving on the keratin substrate are the etiological agents responsible for causing cutaneous infections. Dermatophytosis is currently treated with the commercially available topical and oral antifungal agents in spite of the existing side effects. Treatment of these cutaneous infections with secondary metabolites produced by marine microorganisms is considered as a novel approach. For many years these organisms have been explored with the view of developing antibacterial, antifungal, antiviral, anticancer and antiparasitic drugs. Exploring the unexplored aspect of actinobacteria for developing antidermatophytic drugs is a novel attempt which needs further investigation.", "title": "" }, { "docid": "e3978d849b1449c40299841bfd70ea69", "text": "New generations of network intrusion detection systems create the need for advanced pattern-matching engines. This paper presents a novel scheme for pattern-matching, called BFPM, that exploits a hardware-based programmable statemachine technology to achieve deterministic processing rates that are independent of input and pattern characteristics on the order of 10 Gb/s for FPGA and at least 20 Gb/s for ASIC implementations. BFPM supports dynamic updates and is one of the most storage-efficient schemes in the industry, supporting two thousand patterns extracted from Snort with a total of 32 K characters in only 128 KB of memory.", "title": "" }, { "docid": "074ffb251bfa5e529fceecc284834d15", "text": "OBJECTIVE\nEffective nutrition labels are part of a supportive environment that encourages healthier food choices. The present study examined the use, understanding and preferences regarding nutrition labels among ethnically diverse shoppers in New Zealand.\n\n\nDESIGN AND SETTING\nA survey was carried out at twenty-five supermarkets in Auckland, New Zealand, between February and April 2007. Recruitment was stratified by ethnicity. Questions assessed nutrition label use, understanding of the mandatory Nutrition Information Panel (NIP), and preference for and understanding of four nutrition label formats: multiple traffic light (MTL), simple traffic light (STL), NIP and percentage of daily intake (%DI).\n\n\nSUBJECTS\nIn total 1525 shoppers completed the survey: 401 Maori, 347 Pacific, 372 Asian and 395 New Zealand European and Other ethnicities (ten did not state ethnicity).\n\n\nRESULTS\nReported use of nutrition labels (always, regularly, sometimes) ranged from 66% to 87% by ethnicity. There was little difference in ability to obtain information from the NIP according to ethnicity or income. However, there were marked ethnic differences in ability to use the NIP to determine if a food was healthy, with lesser differences by income. Of the four label formats tested, STL and MTL labels were best understood across all ethnic and income groups, and MTL labels were most frequently preferred.\n\n\nCONCLUSIONS\nThere are clear ethnic and income disparities in ability to use the current mandatory food labels in New Zealand (NIP) to determine if foods are healthy. 
Conversely, MTL and STL label formats demonstrated high levels of understanding and acceptance across ethnic and income groups.", "title": "" }, { "docid": "62a405be34c1ce733c0ded8dfe72e1cf", "text": "This paper presents a new formulation of the artificial potential approach to the obstacle avoidance problem for a mobile robot or a manipulator in a known environment. Previous formulations of artificial potentials, for obstacle avoidance, have exhibited local minima in a cluttered environment. To build an artificial potential field, we use harmonic functions which completely eliminate local minima even for a cluttered environment. We use the panel method to represent arbitrarily shaped obstacles and to derive the potential over the whole space. Based on this potential function, we propose an elegant conml strategy for the real-time control of a robot. We test the harmonic potential, the panel method and the control strategy with a bar-shaped mobile robot and a 3 dof planar redundant manipulator.", "title": "" }, { "docid": "fef105b33a85f76f24c468c58a7534a0", "text": "An aging population in the United States presents important challenges for patients and physicians. The presence of inflammation can contribute to an accelerated aging process, the increasing presence of comorbidities, oxidative stress, and an increased prevalence of chronic pain. As patient-centered care is embracing a multimodal, integrative approach to the management of disease, patients and physicians are increasingly looking to the potential contribution of natural products. Camu camu, a well-researched and innovative natural product, has the potential to contribute, possibly substantially, to this management paradigm. The key issue is to raise camu camu's visibility through increased emphasis on its robust evidentiary base and its various formulations, as well as making consumers, patients, and physicians more aware of its potential. A program to increase the visibility of camu camu can contribute substantially not only to the management of inflammatory conditions and its positive contribution to overall good health but also to its potential role in many disease states.", "title": "" }, { "docid": "42440fb81f45c470d591c3bc57e7875b", "text": "We develop a framework to incorporate unlabeled data in the Error-Correcting Output Coding (ECOC) setup by decomposing multiclass problems into multiple binary problems and then use Co-Training to learn the individual binary classification problems. We show that our method is especially useful for classification tasks involving a large number of categories where Co-training doesn’t perform very well by itself and when combined with ECOC, outperforms several other algorithms that combine labeled and unlabeled data for text classification in terms of accuracy, precision-recall tradeoff, and efficiency.", "title": "" }, { "docid": "dd341bf0b1c8bb51e1ef962aceaa5ee2", "text": "A curve skeleton is a compact representation of 3D objects and has numerous applications. It can be used to describe an object's geometry and topology. In this paper, we introduce a novel approach for computing curve skeletons for volumetric representations of the input models. Our algorithm consists of three major steps: 1) using iterative least squares optimization to shrink models and, at the same time, preserving their geometries and topologies, 2) extracting curve skeletons through the thinning algorithm, and 3) pruning unnecessary branches based on shrinking ratios. 
The proposed method is less sensitive to noise on the surface of models and can generate smoother skeletons. In addition, our shrinking algorithm requires little computation, since the optimization system can be factorized and stored in the precomputational step. We demonstrate several extracted skeletons that help evaluate our algorithm. We also experimentally compare the proposed method with other well-known methods. Experimental results show advantages when using our method over other techniques.", "title": "" }, { "docid": "70e2536b0f3a00252bb3223a98fca8bd", "text": "The Third International Chinese Language Processing Bakeoff was held in Spring 2006 to assess the state of the art in two important tasks: word segmentation and named entity recognition. Twenty-nine groups submitted result sets in the two tasks across two tracks and a total of five corpora. We found strong results in both tasks as well as continuing challenges.", "title": "" }, { "docid": "65ce54d9733d8978c68eb4fe35ce430d", "text": "Digital photographs are replacing tradition films in our daily life and the quantity is exploding. This stimulates the strong need for efficient management tools, in which the annotation of “who” in each photo is essential. In this paper, we propose an automated method to annotate family photos using evidence from face, body and context information. Face recognition is the first consideration. However, its performance is limited by the uncontrolled condition of family photos. In family album, the same groups of people tend to appear in similar events, in which they tend to wear the same clothes within a short time duration and in nearby places. We could make use of social context information and body information to estimate the probability of the persons’ presence and identify other examples of the same recognized persons. In our approach, we first use social context information to cluster photos into events. Within each event, the body information is clustered, and then combined with face recognition results using a graphical model. Finally, the clusters with high face recognition confidence and context probabilities are identified as belonging to specific person. Experiments on a photo album containing over 1500 photos demonstrate that our approach is effective.", "title": "" }, { "docid": "e84856804fd03b5334353937e9db4f81", "text": "The probabilistic method comes up in various fields in mathematics. In these notes, we will give a brief introduction to graph theory and applications of the probabilistic method in proving bounds for Ramsey numbers and a theorem in graph cuts. This method is based on the following idea: in order to prove the existence of an object with some desired property, one defines a probability space on some larger class of objects, and then shows that an element of this space has the desired property with positive probability. The elements contained in this probability space may be of any kind. We will illustrate the probabilistic method by giving applications in graph theory.", "title": "" }, { "docid": "9b2a8692e1edf54ea06c97dec528b74c", "text": "We investigate the use of cross-lingual acoustic data to initialise deep neural network (DNN) acoustic models by means of unsupervised restricted Boltzmann machine (RBM) pre-training. DNNs for German are pretrained using one or all of German, Portuguese, Spanish and Swedish. 
The DNNs are used in a tandem configuration, where the network outputs are used as features for a hidden Markov model (HMM) whose emission densities are modeled by Gaussian mixture models (GMMs), as well as in a hybrid configuration, where the network outputs are used as the HMM state likelihoods. The experiments show that unsupervised pretraining is more crucial for the hybrid setups, particularly with limited amounts of transcribed training data. More importantly, unsupervised pretraining is shown to be language-independent.", "title": "" }, { "docid": "fc977e6b1a631e60688cbe10f2feb4f3", "text": "This paper deals with the development and simulation of a MATLAB/Simulink model of a Tidal Stream Converter (TSC) to be installed in the breakwater at the harbour of Mutriku on the Basque coast in Spain. The developed model is that of a three-bladed tidal turbine connected to a Doubly Fed Induction Generator (DFIG) for marine energy convertion. A Software-In-the-Loop (SIL) simulation of the established TSC model has been investigated using the NI VeriStand tool. This is achieved in order to prepare for the Hardware-In-the-Loop (HIL) implementation of the studied marine energy converter based on an NI Compact RIO real-time target. All simulation results, obtained by MATLAB/Simulink within a Model-In-the Loop (MIL) simulation framework and by NI VeriStand within SIL simulation one, are analyzed and compared in order to validate the developed TSC model.", "title": "" }, { "docid": "ac6344574ced223d007bd3b352b4b1b0", "text": "Mobile personal devices, such as smartphones, USB thumb drives, and sensors, are becoming essential elements of our modern lives. Their large-scale pervasive deployment within the population has already attracted many malware authors, cybercriminals, and even governments. Since the first demonstration of mobile malware by Marcos Velasco, millions of these have been developed with very sophisticated capabilities. They infiltrate highly secure networks using air-gap jumping capability (e.g., “Hammer Drill” and “Brutal Kangaroo”) and spread through heterogeneous computing and communication platforms. Some of these cross-platform malware attacks are capable of infiltrating isolated control systems which might be running a variety of operating systems, such as Windows, Mac OS X, Solaris, and Linux. This paper investigates cross-platform/heterogeneous mobile malware that uses removable media, such as USB connection, to spread between incompatible computing platforms and operating systems. Deep analysis and modeling of cross-platform mobile malware are conducted at the micro (infection) and macro (spread) levels. The micro-level analysis aims to understand the cross-platform malware states and transitions between these states during node-to-node infection. The micro-level analysis helps derive the parameters essential for macro-level analysis, which are also crucial for the elaboration of suitable detection and prevention solutions. The macro-level analysis aims to identify the most important factors affecting cross-platform mobile malware spread within a digitized population. Through simulation, we show that identifying these factors helps to mitigate any outbreaks.", "title": "" }, { "docid": "4143ffda9aefc24130cf14d1a55b7330", "text": "The abundance of opinions on the web has kindled the study of opinion summarization over the last few years. People have introduced various techniques and paradigms to solving this special task. 
This survey attempts to systematically investigate the different techniques and approaches used in opinion summarization. We provide a multi-perspective classification of the approaches used and highlight some of the key weaknesses of these approaches. This survey also covers evaluation techniques and data sets used in studying the opinion summarization problem. Finally, we provide insights into some of the challenges that remain to be addressed, which will help set the direction for future research in this area.", "title": "" } ]
scidocsrr
0c9f0a6dd97965b9f4bb2896d7cd75e3
Activity Recognition from Inertial Sensors with Convolutional Neural Networks
[ { "docid": "046837c87b6d6c789cc060c1dfa273c0", "text": "The last 20 years have seen ever-increasing research activity in the field of human activity recognition. With activity recognition having considerably matured, so has the number of challenges in designing, implementing, and evaluating activity recognition systems. This tutorial aims to provide a comprehensive hands-on introduction for newcomers to the field of human activity recognition. It specifically focuses on activity recognition using on-body inertial sensors. We first discuss the key research challenges that human activity recognition shares with general pattern recognition and identify those challenges that are specific to human activity recognition. We then describe the concept of an Activity Recognition Chain (ARC) as a general-purpose framework for designing and evaluating activity recognition systems. We detail each component of the framework, provide references to related research, and introduce the best practice methods developed by the activity recognition research community. We conclude with the educational example problem of recognizing different hand gestures from inertial sensors attached to the upper and lower arm. We illustrate how each component of this framework can be implemented for this specific activity recognition problem and demonstrate how different implementations compare and how they impact overall recognition performance.", "title": "" }, { "docid": "128ea037369e69aefa90ec37ae1f9625", "text": "The deep two-stream architecture [23] exhibited excellent performance on video based action recognition. The most computationally expensive step in this approach comes from the calculation of optical flow which prevents it to be real-time. This paper accelerates this architecture by replacing optical flow with motion vector which can be obtained directly from compressed videos without extra calculation. However, motion vector lacks fine structures, and contains noisy and inaccurate motion patterns, leading to the evident degradation of recognition performance. Our key insight for relieving this problem is that optical flow and motion vector are inherent correlated. Transferring the knowledge learned with optical flow CNN to motion vector CNN can significantly boost the performance of the latter. Specifically, we introduce three strategies for this, initialization transfer, supervision transfer and their combination. Experimental results show that our method achieves comparable recognition performance to the state-of-the-art, while our method can process 390.7 frames per second, which is 27 times faster than the original two-stream method.", "title": "" }, { "docid": "6b214fdd60a1a4efe27258c2ab948086", "text": "Ambient Assisted Living (AAL) aims to create innovative technical solutions and services to support independent living among older adults, improve their quality of life and reduce the costs associated with health and social care. AAL systems provide health monitoring through sensor based technologies to preserve health and functional ability and facilitate social support for the ageing population. Human activity recognition (HAR) is an enabler for the development of robust AAL solutions, especially in safety critical environments. Therefore, HAR models applied within this domain (e.g. for fall detection or for providing contextual information to caregivers) need to be accurate to assist in developing reliable support systems. 
In this paper, we evaluate three machine learning algorithms, namely Support Vector Machine (SVM), a hybrid of Hidden Markov Models (HMM) and SVM (SVM-HMM), and Artificial Neural Networks (ANNs), applied to a dataset collected from elderly participants and their caregivers. Detected activities will later serve as inputs to a bidirectional activity awareness system for increasing social connectedness. Results show high classification performance for all three algorithms. Specifically, the SVM-HMM hybrid demonstrates the best classification performance. In addition to this, we make our dataset publicly available for use by the machine learning community.", "title": "" } ]
[ { "docid": "b1594132df243bbd68c91c84a54382c3", "text": "Several wearable computing or ubiquitous computing research projects have detected and distinguished user motion activities by attaching accelerometers in known positions and orientations on the user’s body. This paper observes that the orientation constraint can probably be relaxed. An estimate of the constant gravity vector can be obtained by averaging accelerometer samples. This gravity vector estimate in turn enables estimation of the vertical component and the magnitude of the horizontal component of the user’s motion, independently of how the three-axis accelerometer system is oriented.", "title": "" }, { "docid": "021166b4d3e7bce3f1c2e9df09637ac3", "text": "Various techniques have been proposed to enable organizations to initiate procedures to assess and ultimately to improve the quality of their data. The utility of these assessment techniques (ATs) has been demonstrated in different organizational contexts. However, while some of the ATs are geared towards specific application areas and are often not suitable in different applications, others are more general and therefore do not always meet specific requirements. To address this problem we propose the Hybrid Approach to assessing data quality, which can generate usable ATs for specific requirements using the activities of existing ATs. A literature review and bottom-up analysis of the existing data quality (DQ) ATs was used to identify the different activities proposed by each AT. Based on example requirements from an asset management organization, the activities were combined using the Hybrid Approach in order to generate an AT which can be followed to assess an existing DQ problem. The Hybrid Approach demonstrates that it is possible to develop new ways of assessing DQ which leverage the best practices proposed by existing ATs by combining the activities dynamically.", "title": "" }, { "docid": "a7f4a57534ee0a02b675e3b7acdf53d3", "text": "Semantic-oriented service matching is one of the challenges in automatic Web service discovery. Service users may search for Web services using keywords and receive the matching services in terms of their functional profiles. A number of approaches to computing the semantic similarity between words have been developed to enhance the precision of matchmaking, which can be classified into ontology-based and corpus-based approaches. The ontology-based approaches commonly use the differentiated concept information provided by a large ontology for measuring lexical similarity with word sense disambiguation. Nevertheless, most of the ontologies are domain-special and limited to lexical coverage, which have a limited applicability. On the other hand, corpus-based approaches rely on the distributional statistics of context to represent per word as a vector and measure the distance of word vectors. However, the polysemous problem may lead to a low computational accuracy. In this paper, in order to augment the semantic information content in word vectors, we propose a multiple semantic fusion (MSF) model to generate sense-specific vector per word. In this model, various semantic properties of the general-purpose ontology WordNet are integrated to fine-tune the distributed word representations learned from corpus, in terms of vector combination strategies. The retrofitted word vectors are modeled as semantic vectors for estimating semantic similarity. 
The MSF model-based similarity measure is validated against other similarity measures on multiple benchmark datasets. Experimental results of word similarity evaluation indicate that our computational method can obtain higher correlation coefficient with human judgment in most cases. Moreover, the proposed similarity measure is demonstrated to improve the performance of Web service matchmaking based on a single semantic resource. Accordingly, our findings provide a new method and perspective to understand and represent lexical semantics.", "title": "" }, { "docid": "0b40ed36adf91476da945ca9becc0c40", "text": "The popularity of social-networking sites, blogging and other content-sharing sites has exploded, resulting in more personal information and opinions being available with less access control than ever before [5]. Many content-sharing sites provide only the most rudimentary access control: a document can be either completely private or completely public. Other sites offer the slightly more flexible private/friends/public access-control model, but this still fails to support natural distinctions users need, such as separating real-world friends from online friends. The traditional response to these privacy concerns is to post anonymously or pseudonymously, but recent psychological research shows that some Internet users do not establish separate, online personae, but instead consider their online identity as an extension of their real-life self [3]. And although privacy expectations that users desire are easy to state, there is a large gap between the users’ mental models and the policy languages of traditional access-control systems [2]. The consequences of poor access control are welldocumented in the news media. Bloggers have lost their jobs when their employer discovered the employee’s personal blog [9]. Sexual predators use social-networking sites to find victims [7]. Bloggers have been stalked based on the opinions and personal information placed on their blog [8]. Universities have disciplined students using photographs published on social-networking sites [1]. For all these reasons, we advocate that blogs and social networks need a policy mechanism that supports high-level policies that can be expressed succinctly, applied automatically, and updated easily. Current access-control systems fail to meet these goals. Users manually enforce and manage their policies, users, groups, and roles of the system. Furthermore, these systems lack intuitive tools and interfaces for policy generation. We propose to solve all these problems by specifying access-control policies in terms of the content being mediated, e.g. “Blog posts about my home-town are visible to my high school friends.” The system will then automatically infer the posts that are subject to policy rules based on the posts’ contents. Similarly, the system can infer relationships and interests of the users based on the content of objects they create (see Section 3). Such policies will be intuitive and easy to specify, greatly enhancing usability for non-technical users. We first discuss the current state of access control on content-driven sites and analyze approaches proposed in literature for implementing access control for the web. 
We then describe our proposed method of access control for content-sharing sites.", "title": "" }, { "docid": "469ebb0e030f5450ed979c8f30566d48", "text": "It is currently unclear why adversarial examples are easy to construct for deep networks that are otherwise successful with respect to their training domain. However, it is suspected that these adversarial examples lie within some small perturbation from the network’s decision boundaries or exist in low-density regions with respect to the training distribution. Using persistent homology, we find that deep networks effectively have “holes” in their activation graphs, making them blind to regions of the input space that can be exploited by adversarial examples. These holes are effectively dense in the input space, making it easy to find a perturbed image that can be misclassified. By studying the topology of network activation, we find global patterns in the form of activation subgraphs which can both reliably determine whether an example is adversarial and can recover the true category of the example well above chance, implying that semantic information about the input is embedded globally via the activation pattern in deep networks.", "title": "" }, { "docid": "9365a612900a8bf0ddef8be6ec17d932", "text": "Stabilization exercise program has become the most popular treatment method in spinal rehabilitation since it has shown its effectiveness in some aspects related to pain and disability. However, some studies have reported that specific exercise program reduces pain and disability in chronic but not in acute low back pain, although it can be helpful in the treatment of acute low back pain by reducing recurrence rate (Ferreira et al., 2006).", "title": "" }, { "docid": "1f6d0e820b169d13e961b672b75bde71", "text": "Prenatal stress can cause long-term effects on cognitive functions in offspring. Hippocampal synaptic plasticity, believed to be the mechanism underlying certain types of learning and memory, and known to be sensitive to behavioral stress, can be changed by prenatal stress. Whether enriched environment treatment (EE) in early postnatal periods can cause a recovery from these deficits is unknown. Experimental animals were Wistar rats. Prenatal stress was evoked by 10 foot shocks (0.8 mA for 1s, 2-3 min apart) in 30 min per day at gestational day 13-19. After weaning at postnatal day 22, experimental offspring were given the enriched environment treatment through all experiments until tested (older than 52 days age). Electrophysiological and Morris water maze testing was performed at 8 weeks of age. The results showed that prenatal stress impaired long-term potentiation (LTP) but facilitated long-term depression (LTD) in the hippocampal CA1 region in the slices. Furthermore, prenatal stress exacerbated the effects of acute stress on hippocampal LTP and LTD, and also impaired spatial learning and memory in the Morris water maze. However, all these deficits induced by prenatal stress were recovered by enriched environment treatment. This work observes a phenomenon that may contribute to the understanding of clinically important interactions among cognitive deficit, prenatal stress and enriched environment treatment. 
Enriched environment treatment on early postnatal periods may be one potentially important target for therapeutic interventions in preventing the prenatal stress-induced cognitive disorders.", "title": "" }, { "docid": "fb59a43177d5e12ff8c87d04d10fcbbb", "text": "One of the main concerns of deep reinforcement learning (DRL) is the data inefficiency problem, which stems both from an inability to fully utilize data acquired and from naive exploration strategies. In order to alleviate these problems, we propose a DRL algorithm that aims to improve data efficiency via both the utilization of unrewarded experiences and the exploration strategy by combining ideas from unsupervised auxiliary tasks, intrinsic motivation, and hierarchical reinforcement learning (HRL). Our method is based on a simple HRL architecture with a metacontroller and a subcontroller. The subcontroller is intrinsically motivated by the metacontroller to learn to control aspects of the environment, with the intention of giving the agent: 1) a neural representation that is generically useful for tasks that involve manipulation of the environment and 2) the ability to explore the environment in a temporally extended manner through the control of the metacontroller. In this way, we reinterpret the notion of pixel- and feature-control auxiliary tasks as reusable skills that can be learned via an intrinsic reward. We evaluate our method on a number of Atari 2600 games. We found that it outperforms the baseline in several environments and significantly improves performance in one of the hardest games--Montezuma's revenge--for which the ability to utilize sparse data is key. We found that the inclusion of intrinsic reward is crucial for the improvement in the performance and that most of the benefit seems to be derived from the representations learned during training.", "title": "" }, { "docid": "5cb5698cd97daa9da2f94f88dc59e8e7", "text": "Inadvertent exposure of sensitive data is a major concern for potential cloud customers. Much focus has been on other data leakage vectors, such as side channel attacks, while issues of data disposal and assured deletion have not received enough attention to date. However, data that is not properly destroyed may lead to unintended disclosures, in turn, resulting in heavy financial penalties and reputational damage. In non-cloud contexts, issues of incomplete deletion are well understood. To the best of our knowledge, to date, there has been no systematic analysis of assured deletion challenges in public clouds.\n In this paper, we aim to address this gap by analysing assured deletion requirements for the cloud, identifying cloud features that pose a threat to assured deletion, and describing various assured deletion challenges. Based on this discussion, we identify future challenges for research in this area and propose an initial assured deletion architecture for cloud settings. Altogether, our work offers a systematization of requirements and challenges of assured deletion in the cloud, and a well-founded reference point for future research in developing new solutions to assured deletion.", "title": "" }, { "docid": "8cb84abf9a87b2691536ba58bd556a3a", "text": "The purpose of this tutorial paper is to make general type-2 fuzzy logic systems (GT2 FLSs) more accessible to fuzzy logic researchers and practitioners, and to expedite their research, designs, and use. 
To accomplish this, the paper 1) explains four different mathematical representations for general type-2 fuzzy sets (GT2 FSs); 2) demonstrates that for the optimal design of a GT2 FLS, one should use the vertical-slice representation of its GT2 FSs because it is the only one of the four mathematical representations that is parsimonious; 3) shows how to obtain set theoretic and other operations for GT2 FSs using type-1 (T1) FS mathematics (α- cuts play a central role); 4) reviews Mamdani and TSK interval type-2 (IT2) FLSs so that their mathematical operations can be easily used in a GT2 FLS; 5) provides all of the formulas that describe both Mamdani and TSK GT2 FLSs; 6) explains why center-of sets type-reduction should be favored for a GT2 FLS over centroid type-reduction; 7) provides three simplified GT2 FLSs (two are for Mamdani GT2 FLSs and one is for a TSK GT2 FLS), all of which bypass type reduction and are generalizations from their IT2 FLS counterparts to GT2 FLSs; 8) explains why gradient-based optimization should not be used to optimally design a GT2 FLS; 9) explains how derivative-free optimization algorithms can be used to optimally design a GT2 FLS; and 10) provides a three-step approach for optimally designing FLSs in a progressive manner, from T1 to IT2 to GT2, each of which uses a quantum particle swarm optimization algorithm, by virtue of which the performance for the IT2 FLS cannot be worse than that of the T1 FLS, and the performance for the GT2 FLS cannot be worse than that of the IT2 FLS.", "title": "" }, { "docid": "ca6ba96d752d565668e5d304bfb3b51d", "text": "T he paralinguistic information in a speech signal includes clues to the geographical and social background of the speaker. This thesis is concerned with automatic extraction of this information from a short segment of speech. A state-of-the-art Language Identification (ID) system, which is obtained by fusing variant of Gaussian mixture model and support vector machines, is developed and evaluated on the NIST 2003 and 2005 Language Recognition Evaluation (LRE) tasks. This system is applied to the problems of regional accent recognition for British English, and ethnic group recognition within a particular accent. We compare the results with human performance and, for accent recognition, the ‘text dependent’ ACCDIST accent recognition measure. For the fourteen regional accents of British English in the ABI-1 corpus (good quality read speech), our language ID system achieves a recognition accuracy of 86.4%, compared with 95.18% for our best ACCDIST-based system and 58.24% for human listeners. The “Voices across Birmingham” corpus contains significant amounts of telephone conversational speech for the two largest ethnic groups in the city of Birmingham (UK), namely the ‘Asian’ and ‘White’ communities. Our language ID system distinguishes between these two groups with an accuracy of 94.3% compared with 90.24% for human listeners. Although direct comparison is difficult, it seems that our language ID system performs much better on the standard twelve class NIST 2003 Language Recognition Evaluation task or the two class ethnic group recognition task than on the fourteen class regional accent recognition task. We conclude that automatic accent recognition is a challenging task for speech technology, and that the use of natural conversational speech may be advantageous for these types of paralinguistic task. 
One issue with conventional approaches to language ID that use high-order Gaussian Mixture Models (GMMs) and high-dimensional feature vectors is the amount of computing power that they require. Currently, multi-core Graphics Processing Units (GPUs) provide a possible solution at very little cost. In this thesis we also explore the application of GPUs to speech signal and pattern processing, using language ID as a vehicle to demonstrate their benefits. Realisation of the full potential of GPUs requires both effective coding of predetermined algorithms, and, in cases where there is a choice, selection of the algorithm or technique for a specific function that is most able to exploit the properties of the GPU. We demonstrate these principles using the NIST LRE 2003 task, which involves processing over 600 hours of speech. We focus on two parts of the system, namely the acoustic classifier, which is based on a 2048 component GMM, and the acoustic feature extraction process. In the case of the latter we compare a conventional FFT-based analysis with an FIR filter bank, both in terms of their ability to exploit the GPU architecture and language ID performance. With no increase in error rate our GPU based system, with an FIR-based front-end, completes the full NIST LRE 2003 task in 16 hours, compared with 180 hours for the more conventional FFT-based system on a standard CPU (a speed up factor of more than 11).", "title": "" }, { "docid": "a8d241e45dde35c4223e07c1b4a84a67", "text": "Leishmania spp. are intracellular parasitic protozoa responsible for a group of neglected tropical diseases, endemic in 98 countries around the world, called leishmaniasis. These parasites have a complex digenetic life cycle requiring a susceptible vertebrate host and a permissive insect vector, which allow their transmission. The clinical manifestations associated with leishmaniasis depend on complex interactions between the parasite and the host immune system. Consequently, leishmaniasis can be manifested as a self-healing cutaneous affliction or a visceral pathology, being the last one fatal in 85-90% of untreated cases. As a result of a long host-parasite co-evolutionary process, Leishmania spp. developed different immunomodulatory strategies that are essential for the establishment of infection. Only through deception and manipulation of the immune system, Leishmania spp. can complete its life cycle and survive. The understanding of the mechanisms associated with immune evasion and disease progression is essential for the development of novel therapies and vaccine approaches. Here, we revise how the parasite manipulates cell death and immune responses to survive and thrive in the shadow of the immune system.", "title": "" }, { "docid": "01087a2ef017a0aefff5115049f46a64", "text": "Keyphrases can provide highly condensed and valuable information that allows users to quickly acquire the main ideas. The task of automatically extracting them have received considerable attention in recent decades. Different from previous studies, which are usually focused on automatically extracting keyphrases from documents or articles, in this study, we considered the problem of automatically extracting keyphrases from tweets. Because of the length limitations of Twitter-like sites, the performances of existing methods usually drop sharply. We proposed a novel deep recurrent neural network (RNN) model to combine keywords and context information to perform this problem. 
To evaluate the proposed method, we also constructed a large-scale dataset collected from Twitter. The experimental results showed that the proposed method performs significantly better than previous methods.", "title": "" }, { "docid": "7cbcb3257626538e4bc398cf904cb856", "text": "Numerous tasks at the core of statistics, learning and vision areas are specific cases of ill-posed inverse problems. Recently, learning-based (e.g., deep) iterative methods have been empirically shown to be useful for these problems. Nevertheless, integrating learnable structures into iterations is still a laborious process, which can only be guided by intuitions or empirical insights. Moreover, there is a lack of rigorous analysis about the convergence behaviors of these reimplemented iterations, and thus the significance of such methods is a little bit vague. This paper moves beyond these limits and proposes Flexible Iterative Modularization Algorithm (FIMA), a generic and provable paradigm for nonconvex inverse problems. Our theoretical analysis reveals that FIMA allows us to generate globally convergent trajectories for learning-based iterative methods. Meanwhile, the devised scheduling policies on flexible modules should also be beneficial for classical numerical methods in the nonconvex scenario. Extensive experiments on real applications verify the superiority", "title": "" }, { "docid": "ce6296ae51be4e6fe3d36be618cdfe75", "text": "UNLABELLED\nOBJECTIVES. Understanding the factors that promote quality of life in old age has been a staple of social gerontology since its inception and remains a significant theme in aging research. The purpose of this article was to review the state of the science with regard to subjective well-being (SWB) in later life and to identify promising directions for future research.\n\n\nMETHODS\nThis article is based on a review of literature on SWB in aging, sociological, and psychological journals. Although the materials reviewed date back to the early 1960s, the emphasis is on publications in the past decade.\n\n\nRESULTS\nResearch to date paints an effective portrait of the epidemiology of SWB in late life and the factors associated with it. Although the research base is large, causal inferences about the determinants of SWB remain problematic. Two recent contributions to the research base are highlighted as emerging issues: studies of secular trends in SWB and cross-national studies. Discussion. The review ends with discussion of priority issues for future research.", "title": "" }, { "docid": "7fdd251faf180d2daecb6bfe0c825b2e", "text": "Biometric recognition, or biometrics, refers to the authentication of an individual based on her/his biometric traits. Among the various biometric traits (e.g., face, iris, fingerprint, voice), fingerprint-based authentication has the longest history, and has been successfully adopted in both forensic and civilian applications. Advances in fingerprint capture technology have resulted in new large scale civilian applications (e.g., US-VISIT program). However, these systems still encounter difficulties due to various noise factors present in operating environments. The purpose of this article is to give an overview of fingerprint-based recognition and discuss research opportunities for making these systems perform more effectively.", "title": "" }, { "docid": "e20d26ce3dea369ae6817139ff243355", "text": "This article explores the roots of white support for capital punishment in the United States. 
Our analysis addresses individual-level and contextual factors, paying particular attention to how racial attitudes and racial composition influence white support for capital punishment. Our findings suggest that white support hinges on a range of attitudes wider than prior research has indicated, including social and governmental trust and individualist and authoritarian values. Extending individual-level analyses, we also find that white responses to capital punishment are sensitive to local context. Perhaps most important, our results clarify the impact of race in two ways. First, racial prejudice emerges here as a comparatively strong predictor of white support for the death penalty. Second, black residential proximity functions to polarize white opinion along lines of racial attitude. As the black percentage of county residents rises, so too does the impact of racial prejudice on white support for capital punishment.", "title": "" }, { "docid": "99e869521e6cfca97b57e7afddcde6d1", "text": "Over the last few years, embedded systems have been increasingly used in safety-critical applications where failure can have serious consequences. The design of these systems is a complex process, which requires the integration of common design methods both in hardware and software to fulfill functional and non-functional requirements for these safety-critical applications. Design patterns, which give abstract solutions to commonly recurring design problems, have been widely used in the software and hardware domain. In this thesis, the concept of design patterns is adopted in the design of safety-critical embedded systems. A catalog of design patterns was constructed to support the design of safety-critical embedded systems. This catalog includes a set of hardware and software design patterns which cover common design problems such as handling of random and systematic faults, safety monitoring, and sequence control. Furthermore, the catalog provides a decision support component that supports the decision process of choosing a suitable pattern for a particular problem based on the available resources and the requirements of the applicable patterns. As non-functional requirements are an important aspect in the design of safety-critical embedded systems, this work focuses on the integration of implications on non-functional properties in the existing design pattern concept. A pattern representation is proposed for safety-critical embedded application design methods by including fields for the implications and side effects of the represented design pattern on the non-functional requirements of the systems. The considered requirements include safety, reliability, modifiability, cost, and execution time. Safety and reliability represent the main non-functional requirements that should be provided in the design of safety-critical applications. Thus, reliability and safety assessment methods are proposed to show the relative safety and reliability improvement which can be achieved when using the design patterns under consideration. Moreover, a Monte Carlo based simulation method is used to illustrate the proposed assessment method which allows comparing different design patterns with respect to their impact on safety and reliability.", "title": "" }, { "docid": "0de3a24208018e55e1cb09a69e1cef80", "text": "Vulnerability to “social engineering” cannot be eliminated by classical awareness measures alone. 
What helps is increasing the individual's sovereignty in decision-making, together with practice.", "title": "" }, { "docid": "8300897859310ad4ee6aff55d84f31da", "text": "We study an important crowdsourcing setting where agents evaluate one another and, based on these evaluations, a subset of agents are selected. This setting is ubiquitous when peer review is used for distributing awards in a team, allocating funding to scientists, and selecting publications for conferences. The fundamental challenge when applying crowdsourcing in these settings is that agents may misreport their reviews of others to increase their chances of being selected. We propose a new strategyproof (impartial) mechanism called Dollar Partition that satisfies desirable axiomatic properties. We then show, using a detailed experiment with parameter values derived from target real world domains, that our mechanism performs better on average, and in the worst case, than other strategyproof mechanisms in the literature.", "title": "" } ]
scidocsrr
61a1eb0ce584c1a469adc66700ef64a0
Unanimous Prediction for 100% Precision with Application to Learning Semantic Mappings
[ { "docid": "59c24fb5b9ac9a74b3f89f74b332a27c", "text": "This paper addresses the problem of learning to map sentences to logical form, given training data consisting of natural language sentences paired with logical representations of their meaning. Previous approaches have been designed for particular natural languages or specific meaning representations; here we present a more general method. The approach induces a probabilistic CCG grammar that represents the meaning of individual words and defines how these meanings can be combined to analyze complete sentences. We use higher-order unification to define a hypothesis space containing all grammars consistent with the training data, and develop an online learning algorithm that efficiently searches this space while simultaneously estimating the parameters of a log-linear parsing model. Experiments demonstrate high accuracy on benchmark data sets in four languages with two different meaning representations.", "title": "" }, { "docid": "6b7daba104f8e691dd32cba0b4d66ecd", "text": "This paper presents the first empirical results to our knowledge on learning synchronous grammars that generate logical forms. Using statistical machine translation techniques, a semantic parser based on a synchronous context-free grammar augmented with λoperators is learned given a set of training sentences and their correct logical forms. The resulting parser is shown to be the bestperforming system so far in a database query domain.", "title": "" } ]
[ { "docid": "8994470e355b5db188090be731ee4fe9", "text": "A system that allows museums to build and manage Virtual and Augmented Reality exhibitions based on 3D models of artifacts is presented. Dynamic content creation based on pre-designed visualization templates allows content designers to create virtual exhibitions very efficiently. Virtual Reality exhibitions can be presented both inside museums, e.g. on touch-screen displays installed inside galleries and, at the same time, on the Internet. Additionally, the presentation based on Augmented Reality technologies allows museum visitors to interact with the content in an intuitive and exciting manner.", "title": "" }, { "docid": "dc3d182f751beffdf4d7814073f6a05c", "text": "Information communication technologies (ICTs) have significantly revolutionized travel industry in the last decade. With an increasing number of travel companies participating in the Internet market, low price has become a minimum qualification to compete in the Internet market. As a result, e-service quality is becoming even more critical for companies to retain and attract customers in the digital age. This study focuses on e-service quality dimensions in the Internet market with an empirical study on online travel service. The purpose of this study is to develop a scale to evaluate e-service quality from the perspectives of both online companies and customers, which provides fresh insight into the dimensions of e-service quality. The results in this study indicate that trust from the perspective of customer and ease of use from the perspective of online company are the most critical and important facets in customers’ perception of online travel service quality, while reliability, system availability and responsiveness have influence on customer’s perception of online travel service quality as well, but the influence is not so strong as that of trust and ease of use. Online travel service companies should pay attention to the facets of reliability, system availability and responsiveness while focusing on the facets of ease of use and trust in order to improve their online travel service quality to customers.", "title": "" }, { "docid": "ab148ea69cf884b2653823b350ed5cfc", "text": "The application of information retrieval techniques to search tasks in software engineering is made difficult by the lexical gap between search queries, usually expressed in natural language (e.g. English), and retrieved documents, usually expressed in code (e.g. programming languages). This is often the case in bug and feature location, community question answering, or more generally the communication between technical personnel and non-technical stake holders in a software project. In this paper, we propose bridging the lexical gap by projecting natural language statements and code snippets as meaning vectors in a shared representation space. In the proposed architecture, word embeddings are first trained on API documents, tutorials, and reference documents, and then aggregated in order to estimate semantic similarities between documents. Empirical evaluations show that the learned vector space embeddings lead to improvements in a previously explored bug localization task and a newly defined task of linking API documents to computer programming questions.", "title": "" }, { "docid": "1fb748012ff900e14861e2b536fbd44c", "text": "This paper describes the use of data mining techniques to solve three important issues in network intrusion detection problems. 
The first goal is finding the best dimensionality reduction algorithm which reduces the computational cost while still maintains the accuracy. We implement both feature extraction (Principal Component Analysis and Independent Component Analysis) and feature selection (Genetic Algorithm and Particle Swarm Optimization) techniques for dimensionality reduction. The second goal is finding the best algorithm for misuse detection system to detect known intrusion. We implement four basic machine learning algorithms (Naïve Bayes, Decision Tree, Nearest Neighbour and Rule Induction) and then apply ensemble algorithms such as bagging, boosting and stacking to improve the performance of these four basic algorithms. The third goal is finding the best clustering algorithms to detect network anomalies which contains unknown intrusion. We analyze and compare the performance of four unsupervised clustering algorithms (k-Means, k-Medoids, EM clustering and distance-based outlier detection) in terms of accuracy and false positives. Our experiment shows that the Nearest Neighbour (NN) classifier when implemented with Particle Swarm Optimization (PSO) as an attribute selection algorithm achieved the best performance, which is 99.71% accuracy and 0.27% false positive. The misuse detection technique achieves a very good performance with more than 99% accuracy when detecting known intrusion but it fails to accurately detect data set with a large number of unknown intrusions where the highest accuracy is only 63.97%. In contrast, the anomaly detection approach shows promising results where the distance-based outlier detection method outperforms the other three clustering algorithms with the accuracy of 80.15%, followed by EM clustering (78.06%), k-Medoids (76.71%), improved k-Means (65.40%) and k-Means (57.81%).", "title": "" }, { "docid": "5772e4bfb9ced97ff65b5fdf279751f4", "text": "Deep convolutional neural networks excel at sentiment polarity classification, but tend to require substantial amounts of training data, which moreover differs quite significantly between domains. In this work, we present an approach to feed generic cues into the training process of such networks, leading to better generalization abilities given limited training data. We propose to induce sentiment embeddings via supervision on extrinsic data, which are then fed into the model via a dedicated memorybased component. We observe significant gains in effectiveness on a range of different datasets in seven different languages.", "title": "" }, { "docid": "44bbc67f44f4f516db97b317ae16a22a", "text": "Although the number of occupational therapists working in mental health has dwindled, the number of people who need our services has not. In our tendency to cling to a medical model of service provision, we have allowed the scope and content of our services to be limited to what has been supported within this model. A social model that stresses functional adaptation within the community, exemplified in psychosocial rehabilitation, offers a promising alternative. A strongly proactive stance is needed if occupational therapists are to participate fully. 
Occupational therapy can survive without mental health specialists, but a large and deserving population could ultimately be deprived of a valuable service.", "title": "" }, { "docid": "c98d0b262c76dee61b6f9923b1a246da", "text": "A variety of methods for camera calibration, relying on different camera models, algorithms and a priori object information, have been reported and reviewed in the literature. Use of simple 2D patterns of the chess-board type represents an interesting approach, for which several ‘calibration toolboxes’ are available on the Internet, requiring varying degrees of human interaction. This paper presents an automatic multi-image approach exclusively for camera calibration purposes on the assumption that the imaged pattern consists of adjacent light and dark squares of equal size. Calibration results, also based on image sets from Internet sources, are viewed as satisfactory and comparable to those from other approaches. Questions regarding the role of image configuration need further investigation.", "title": "" }, { "docid": "558082c8d15613164d586cab0ba04d9c", "text": "One of the potential benefits of distributed systems is their use in providing highly-available services that are likely to be usable when needed. Availability is achieved through replication. By having more than one copy of information, a service continues to be usable even when some copies are inaccessible, for example, because of a crash of the computer where a copy was stored. This paper presents a new replication algorithm that has desirable performance properties. Our approach is based on the primary copy technique. Computations run at a primary, which notifies its backups of what it has done. If the primary crashes, the backups are reorganized, and one of the backups becomes the new primary. Our method works in a general network with both node crashes and partitions. Replication causes little delay in user computations and little information is lost in a reorganization; we use a special kind of timestamp called a viewstamp to detect lost information.", "title": "" }, { "docid": "e4db0ee5c4e2a5c87c6d93f2f7536f15", "text": "Despite the importance of sparsity in many big data applications, there are few existing methods for efficient distributed optimization of sparsely-regularized objectives. In this paper, we present a communication-efficient framework for L1-regularized optimization in distributed environments. By taking a nontraditional view of classical objectives as part of a more general primal-dual setting, we obtain a new class of methods that can be efficiently distributed and is applicable to common L1-regularized regression and classification objectives, such as Lasso, sparse logistic regression, and elastic net regression. We provide convergence guarantees for this framework and demonstrate strong empirical performance as compared to other state-of-the-art methods on several real-world distributed datasets.", "title": "" }, { "docid": "d7907565c4ea6782cdb0c7b281a9d636", "text": "Acute appendicitis (AA) is among the most common causes of acute abdominal pain. Diagnosis of AA is challenging; a variable combination of clinical signs and symptoms has been used together with laboratory findings in several scoring systems proposed for suggesting the probability of AA and the possible subsequent management pathway. The role of imaging in the diagnosis of AA is still debated, with variable use of US, CT and MRI in different settings worldwide. 
To date, comprehensive clinical guidelines for the diagnosis and management of AA have never been issued. In July 2015, during the 3rd World Congress of the WSES, held in Jerusalem (Israel), a panel of experts, including an Organizational Committee, Scientific Committee and Scientific Secretariat, participated in a Consensus Conference where eight panelists presented a number of statements developed for each of the eight main questions about diagnosis and management of AA. The statements were then voted on, eventually modified and finally approved by the participants in the Consensus Conference and later by the board of co-authors. The current paper reports the definitive Guidelines Statements on each of the following topics: 1) Diagnostic efficiency of clinical scoring systems, 2) Role of Imaging, 3) Non-operative treatment for uncomplicated appendicitis, 4) Timing of appendectomy and in-hospital delay, 5) Surgical treatment, 6) Scoring systems for intra-operative grading of appendicitis and their clinical usefulness, 7) Non-surgical treatment for complicated appendicitis: abscess or phlegmon, 8) Pre-operative and post-operative antibiotics.", "title": "" }, { "docid": "6b698146f5fbd2335e3d7bdfd39e8e4f", "text": "Neural network models of early sensory processing typically reduce the dimensionality of streaming input data. Such networks learn the principal subspace, in the sense of principal component analysis, by adjusting synaptic weights according to activity-dependent learning rules. When derived from a principled cost function, these rules are nonlocal and hence biologically implausible. At the same time, biologically plausible local rules have been postulated rather than derived from a principled cost function. Here, to bridge this gap, we derive a biologically plausible network for subspace learning on streaming data by minimizing a principled cost function. In a departure from previous work, where cost was quantified by the representation, or reconstruction, error, we adopt a multidimensional scaling cost function for streaming data. The resulting algorithm relies only on biologically plausible Hebbian and anti-Hebbian local learning rules. In a stochastic setting, synaptic weights converge to a stationary state, which projects the input data onto the principal subspace. If the data are generated by a nonstationary distribution, the network can track the principal subspace. Thus, our result makes a step toward an algorithmic theory of neural computation.", "title": "" }, { "docid": "ef84f7f53b60cf38972ff1eb04d0f6a5", "text": "OBJECTIVE\nThe purpose of this prospective study was to evaluate the efficacy and safety of screw fixation without bone fusion for unstable thoracolumbar and lumbar burst fracture.\n\n\nMETHODS\nNine patients younger than 40 years underwent screw fixation without bone fusion, following postural reduction using a soft roll at the involved vertebra, in cases of burst fracture. Their motor power was intact in spite of severe canal compromise. The surgical procedure included postural reduction for 3 days and screw fixations at one level above, one level below and at the fractured level itself. The patients underwent removal of implants 12 months after the initial operation, due to the possibility of implant failure. 
Imaging and clinical findings, including canal encroachment, vertebral height, clinical outcome, and complications were analyzed.\n\n\nRESULTS\nPrior to surgery, the mean pain score (visual analogue scale) was 8.2, which decreased to 2.2 at 12 months after screw fixation. None of the patients complained of worsening of pain during 6 months after implant removal. All patients were graded as having excellent or good outcomes at 6 months after implant removal. The proportion of canal compromise at the fractured level improved from 55% to 35% at 12 months after surgery. The mean preoperative vertebral height loss was 45.3%, which improved to 20.6% at 6 months after implant removal. There were no neurological deficits related to neural injury. The improved vertebral height and canal compromise were maintained at 6 months after implant removal.\n\n\nCONCLUSION\nShort segment pedicle screw fixation, including fractured level itself, without bone fusion following postural reduction can be an effective and safe operative technique in the management of selected young patients suffering from unstable burst fracture.", "title": "" }, { "docid": "4419d61684dff89f4678afe3b8dc06e0", "text": "Reason and emotion have long been considered opposing forces. However, recent psychological and neuroscientific research has revealed that emotion and cognition are closely intertwined. Cognitive processing is needed to elicit emotional responses. At the same time, emotional responses modulate and guide cognition to enable adaptive responses to the environment. Emotion determines how we perceive our world, organise our memory, and make important decisions. In this review, we provide an overview of current theorising and research in the Affective Sciences. We describe how psychological theories of emotion conceptualise the interactions of cognitive and emotional processes. We then review recent research investigating how emotion impacts our perception, attention, memory, and decision-making. Drawing on studies with both healthy participants and clinical populations, we illustrate the mechanisms and neural substrates underlying the interactions of cognition and emotion.", "title": "" }, { "docid": "679eb46c45998897b4f8e641530f44a7", "text": "Workers in hazardous environments such as mining are constantly exposed to the health and safety hazards of dynamic and unpredictable conditions. One approach to enable them to manage these hazards is to provide them with situational awareness: real-time data (environmental, physiological, and physical location data) obtained from wireless, wearable, smart sensor technologies deployed at the work area. The scope of this approach is limited to managing the hazards of the immediate work area for prevention purposes; it does not include technologies needed after a disaster. Three critical technologies emerge and converge to support this technical approach: smart-wearable sensors, wireless sensor networks, and low-power embedded computing. The major focus of this report is on smart sensors and wireless sensor networks. Wireless networks form the infrastructure to support the realization of situational awareness; therefore, there is a significant focus on wireless networks. Lastly, the “Future Research” section pulls together the three critical technologies by proposing applications that are relevant to mining. 
The applications are injured miner (person-down) detection; a wireless, wearable remote viewer; and an ultrawide band smart environment that enables localization and tracking of humans and resources. The smart environment could provide location data, physiological data, and communications (video, photos, graphical images, audio, and text messages). Electrical engineer, Pittsburgh Research Laboratory, National Institute for Occupational Safety and Health, Pittsburgh, PA. President, The Designer-III Co., Franklin, PA. General engineer, Pittsburgh Research Laboratory (now with the National Personal Protective Technology Laboratory), National Institute for Occupational Safety and Health, Pittsburgh, PA. Supervisory general engineer, Pittsburgh Research Laboratory, National Institute for Occupational Safety and Health, Pittsburgh, PA.", "title": "" }, { "docid": "0c31ad159095de6057d43534199e1e45", "text": "We present a novel spatial hashing based data structure to facilitate 3D shape analysis using convolutional neural networks (CNNs). Our method builds hierarchical hash tables for an input model under different resolutions that leverage the sparse occupancy of 3D shape boundary. Based on this data structure, we design two efficient GPU algorithms namely hash2col and col2hash so that the CNN operations like convolution and pooling can be efficiently parallelized. The perfect spatial hashing is employed as our spatial hashing scheme, which is not only free of hash collision but also nearly minimal so that our data structure is almost of the same size as the raw input. Compared with existing 3D CNN methods, our data structure significantly reduces the memory footprint during the CNN training. As the input geometry features are more compactly packed, CNN operations also run faster with our data structure. The experiment shows that, under the same network structure, our method yields comparable or better benchmark results compared with the state-of-the-art while it has only one-third memory consumption when under high resolutions (i.e. 256 3).", "title": "" }, { "docid": "d11a113fdb0a30e2b62466c641e49d6d", "text": "Apache Spark has emerged as the de facto framework for big data analytics with its advanced in-memory programming model and upper-level libraries for scalable machine learning, graph analysis, streaming and structured data processing. It is a general-purpose cluster computing framework with language-integrated APIs in Scala, Java, Python and R. As a rapidly evolving open source project, with an increasing number of contributors from both academia and industry, it is difficult for researchers to comprehend the full body of development and research behind Apache Spark, especially those who are beginners in this area. In this paper, we present a technical review on big data analytics using Apache Spark. This review focuses on the key components, abstractions and features of Apache Spark. More specifically, it shows what Apache Spark has for designing and implementing big data algorithms and pipelines for machine learning, graph analysis and stream processing. In addition, we highlight some research and development directions on Apache Spark for big data analytics.", "title": "" }, { "docid": "9498afdb0db4d7f82187cd4a6af5ed36", "text": "”Bitcoin is a rare case where practice seems to be ahead of theory.” Joseph Bonneau et al. [15] This tutorial aims to further close the gap between IT security research and the area of cryptographic currencies and block chains. 
We will describe and refer to Bitcoin as an example throughout the tutorial, as it is the most prominent representative of a such a system. It also is a good reference to discuss the underlying block chain mechanics which are the foundation of various altcoins (e.g. Namecoin) and other derived systems. In this tutorial, the topic of cryptographic currencies is solely addressed from a technical IT security point-of-view. Therefore we do not cover any legal, sociological, financial and economical aspects. The tutorial is designed for participants with a solid IT security background but will not assume any prior knowledge on cryptographic currencies. Thus, we will quickly advance our discussion into core aspects of this field.", "title": "" }, { "docid": "42e2a8b8c1b855fba201e3421639d80d", "text": "Fraudulent behaviors in Google’s Android app market fuel search rank abuse and malware proliferation. We present FairPlay, a novel system that uncovers both malware and search rank fraud apps, by picking out trails that fraudsters leave behind. To identify suspicious apps, FairPlay’s PCF algorithm correlates review activities and uniquely combines detected review relations with linguistic and behavioral signals gleaned from longitudinal Google Play app data. We contribute a new longitudinal app dataset to the community, which consists of over 87K apps, 2.9M reviews, and 2.4M reviewers, collected over half a year. FairPlay achieves over 95% accuracy in classifying gold standard datasets of malware, fraudulent and legitimate apps. We show that 75% of the identified malware apps engage in search rank fraud. FairPlay discovers hundreds of fraudulent apps that currently evade Google Bouncer’s detection technology, and reveals a new type of attack campaign, where users are harassed into writing positive reviews, and install and review other apps.", "title": "" }, { "docid": "b5c64ddf3be731a281072a21700a85ee", "text": "This paper addresses the problem of joint detection and recounting of abnormal events in videos. Recounting of abnormal events, i.e., explaining why they are judged to be abnormal, is an unexplored but critical task in video surveillance, because it helps human observers quickly judge if they are false alarms or not. To describe the events in the human-understandable form for event recounting, learning generic knowledge about visual concepts (e.g., object and action) is crucial. Although convolutional neural networks (CNNs) have achieved promising results in learning such concepts, it remains an open question as to how to effectively use CNNs for abnormal event detection, mainly due to the environment-dependent nature of the anomaly detection. In this paper, we tackle this problem by integrating a generic CNN model and environment-dependent anomaly detectors. Our approach first learns CNN with multiple visual tasks to exploit semantic information that is useful for detecting and recounting abnormal events. By appropriately plugging the model into anomaly detectors, we can detect and recount abnormal events while taking advantage of the discriminative power of CNNs. Our approach outperforms the state-of-the-art on Avenue and UCSD Ped2 benchmarks for abnormal event detection and also produces promising results of abnormal event recounting.", "title": "" }, { "docid": "fdd59ff419b9613a1370babe64ef1c98", "text": "The disentangling problem is to discover multiple complex factors of variations hidden in data. 
One recent approach is to take a dataset with grouping structure and separately estimate a factor common within a group (content) and a factor specific to each group member (transformation). Notably, this approach can learn to represent a continuous space of contents, which allows for generalization to data with unseen contents. In this study, we aim at cultivating this approach within probabilistic deep generative models. Motivated by technical complications in existing group-based methods, we propose a simpler probabilistic method, called group-contrastive variational autoencoders. Despite its simplicity, our approach achieves reasonable disentanglement with generalizability for three grouped datasets of 3D object images. In comparison with a previous model, although conventional qualitative evaluation shows little difference, our qualitative evaluation using few-shot classification exhibits superior performance for some datasets. We analyze the content representations from different methods and discuss their transformation-dependency and potential performance impacts.", "title": "" } ]
scidocsrr
4ffa944b7f593d57a43421e0d09d4824
Short and Sparse Text Topic Modeling via Self-Aggregation
[ { "docid": "5183794d8bef2d8f2ee4048d75a2bd3c", "text": "Uncovering the topics within short texts, such as tweets and instant messages, has become an important task for many content analysis applications. However, directly applying conventional topic models (e.g. LDA and PLSA) on such short texts may not work well. The fundamental reason lies in that conventional topic models implicitly capture the document-level word co-occurrence patterns to reveal topics, and thus suffer from the severe data sparsity in short documents. In this paper, we propose a novel way for modeling topics in short texts, referred as biterm topic model (BTM). Specifically, in BTM we learn the topics by directly modeling the generation of word co-occurrence patterns (i.e. biterms) in the whole corpus. The major advantages of BTM are that 1) BTM explicitly models the word co-occurrence patterns to enhance the topic learning; and 2) BTM uses the aggregated patterns in the whole corpus for learning topics to solve the problem of sparse word co-occurrence patterns at document-level. We carry out extensive experiments on real-world short text collections. The results demonstrate that our approach can discover more prominent and coherent topics, and significantly outperform baseline methods on several evaluation metrics. Furthermore, we find that BTM can outperform LDA even on normal texts, showing the potential generality and wider usage of the new topic model.", "title": "" }, { "docid": "a5a4532c1297941aad6d9c4b6ff1adaa", "text": "Many current Natural Language Processing [NLP] techniques work well assuming a large context of text as input data. However they become ineffective when applied to short texts such as Twitter feeds. To overcome the issue, we want to find a related newswire document to a given tweet to provide contextual support for NLP tasks. This requires robust modeling and understanding of the semantics of short texts. The contribution of the paper is two-fold: 1. we introduce the Linking-Tweets-toNews task as well as a dataset of linked tweet-news pairs, which can benefit many NLP applications; 2. in contrast to previous research which focuses on lexical features within the short texts (text-to-word information), we propose a graph based latent variable model that models the inter short text correlations (text-to-text information). This is motivated by the observation that a tweet usually only covers one aspect of an event. We show that using tweet specific feature (hashtag) and news specific feature (named entities) as well as temporal constraints, we are able to extract text-to-text correlations, and thus completes the semantic picture of a short text. Our experiments show significant improvement of our new model over baselines with three evaluation metrics in the new task.", "title": "" }, { "docid": "a8b818b30bee92efaf43e195590a27fd", "text": "Twitter, or the world of 140 characters poses serious challenges to the efficacy of topic models on short, messy text. While topic models such as Latent Dirichlet Allocation (LDA) have a long history of successful application to news articles and academic abstracts, they are often less coherent when applied to microblog content like Twitter. In this paper, we investigate methods to improve topics learned from Twitter content without modifying the basic machinery of LDA; we achieve this through various pooling schemes that aggregate tweets in a data preprocessing step for LDA. 
We empirically establish that a novel method of tweet pooling by hashtags leads to a vast improvement in a variety of measures for topic coherence across three diverse Twitter datasets in comparison to an unmodified LDA baseline and a variety of pooling schemes. An additional contribution of automatic hashtag labeling further improves on the hashtag pooling results for a subset of metrics. Overall, these two novel schemes lead to significantly improved LDA topic models on Twitter content.", "title": "" }, { "docid": "6b855b55f22de3e3f65ce56a69c35876", "text": "This paper presents an LDA-style topic model that captures not only the low-dimensional structure of data, but also how the structure changes over time. Unlike other recent work that relies on Markov assumptions or discretization of time, here each topic is associated with a continuous distribution over timestamps, and for each generated document, the mixture distribution over topics is influenced by both word co-occurrences and the document's timestamp. Thus, the meaning of a particular topic can be relied upon as constant, but the topics' occurrence and correlations change significantly over time. We present results on nine months of personal email, 17 years of NIPS research papers and over 200 years of presidential state-of-the-union addresses, showing improved topics, better timestamp prediction, and interpretable trends.", "title": "" } ]
[ { "docid": "31118ada9270facdc97465bfb28a3571", "text": "Transimpedance amplifiers using voltage feedback operational amplifiers are widely used for current to voltage conversion in applications when a moderatehigh bandwidth and a high sensitivity are required, such as photodiodes, photomultipliers, electron multipliers and capacitive sensors. The conventional circuit presents a virtual earth to the input and at low frequencies, the input capacitance is usually not a significant concem. However, at high frequencies and especially for high sensitivity circuits, the total input capacitance can severely limit the available bandwidth from the circuit [1,6]. The input capacitance in effect constitutes part of the feedback network of the op-amp and hence reduces the available loop gain at high frequencies. In some cases a high input capacitance can cause the circuit to have a lightly damped or unstable dynamic response. Lag compensation by simply adding feedback capacitance is generally used to guarantee stability, however this approach does not permit the full gain-bandwidth characteristic of the op-amp to be fully exploited.", "title": "" }, { "docid": "f58801db3709cde9ea45f43856d35b7c", "text": "A method for applying pattern recognition techniques to recognize the identity of a person based on their iris is proposed. Hidden Markov Models are used to parametrically model the local frequencies of the iris. Also discussed is a transform of the iris image from two to one dimensional space and overcoming limited data with the generation of synthetic images.", "title": "" }, { "docid": "03ca6037bf249d93a9eae8bee5ae8b11", "text": "The integration of strong encryption into operating systems is creating challenges for forensic examiners, potentially preventing us from recovering any digital evidence from a computer. Because strong encryption cannot be circumvented without a key or passphrase, forensic examiners may not be able to access data after a computer is shut down, and must decide whether to perform a live forensic acquisition. In addition, with encryption becoming integrated into the operating system, in some cases, virtualization is the most effective approach to performing a forensic examination of a system with FDE. This paper presents the evolution of full disk encryption (FDE) and its impact on digital forensics. Furthermore, by demonstrating how full disk encryption has been dealt with in past investigations, this paper provides forensics examiners with practical techniques for recovering evidence that would otherwise be inaccessible.", "title": "" }, { "docid": "b03b41f27b3046156a922f858349d4ed", "text": "Charophytes are macrophytic green algae, occurring in standing and running waters throughout the world. Species descriptions of charophytes are contradictive and different determination keys use various morphologic characters for species discrimination. Chara intermedia Braun, C. baltica Bruzelius and C. hispida Hartman are treated as three species by most existing determination keys, though their morphologic differentiation is based on different characteristics. Amplified fragment length polymorphism (AFLP) was used to detect genetically homogenous groups within the C. intermedia-C. baltica-C. hispida-cluster, by the analysis of 122 C. intermedia, C. baltica and C. hispida individuals from central and northern Europe. C. hispida clustered in a distinct genetic group in the AFLP analysis and could be determined morphologically by its aulacanthous cortification. However, for C. 
intermedia and C. baltica no single morphologic character was found that differentiated the two genetic groups, thus C. intermedia and C. baltica are considered as cryptic species. All C. intermedia specimen examined came from freshwater habitats, whereas the second group, C. baltica, grew in brackish water. We conclude that the species differentiation between C. intermedia and C. baltica, which is assumed to be reflected by the genetic discrimination groups, corresponds more with ecological (salinity preference) than morphologic characteristics. Based on the genetic analysis three differing colonization models of the Baltic Sea and the Swedish lakes with C. baltica and C. intermedia were discussed. As samples of C. intermedia and C. baltica have approximately the same Jaccard coefficient for genetic similarity, we suggest that C. baltica colonized the Baltic Sea after the last glacial maximum from refugia along the Atlantic and North Sea coasts. Based on the similarity of C. intermedia intermediate individuals of Central Europe and Sweden we assume a colonization of the Swedish lakes from central Europe.", "title": "" }, { "docid": "2a5194f83142bbaef832011d08acd780", "text": "This paper proposes a novel data-driven approach for inertial navigation, which learns to estimate trajectories of natural human motions just from an inertial measurement unit (IMU) in every smartphone. The key observation is that human motions are repetitive and consist of a few major modes (e.g., standing, walking, or turning). Our algorithm regresses a velocity vector from the history of linear accelerations and angular velocities, then corrects low-frequency bias in the linear accelerations, which are integrated twice to estimate positions. We have acquired training data with ground truth motion trajectories across multiple human subjects and multiple phone placements (e.g., in a bag or a hand). The qualitatively and quantitatively evaluations have demonstrated that our simple algorithm outperforms existing heuristic-based approaches and is even comparable to full Visual Inertial navigation to our surprise. As far as we know, this paper is the first to introduce supervised training for inertial navigation, potentially opening up a new line of research in the domain of data-driven inertial navigation. We will publicly share our code and data to facilitate further research.", "title": "" }, { "docid": "3c4e1c7fd5dbdf5ea50eeed1afe23ff9", "text": "Power management is an important concern in sensor networks, because a tethered energy infrastructure is usually not available and an obvious concern is to use the available battery energy efficiently. However, in some of the sensor networking applications, an additional facility is available to ameliorate the energy problem: harvesting energy from the environment. Certain considerations in using an energy harvesting source are fundamentally different from that in using a battery, because, rather than a limit on the maximum energy, it has a limit on the maximum rate at which the energy can be used. Further, the harvested energy availability typically varies with time in a nondeterministic manner. While a deterministic metric, such as residual battery, suffices to characterize the energy availability in the case of batteries, a more sophisticated characterization may be required for a harvesting source. Another issue that becomes important in networked systems with multiple harvesting nodes is that different nodes may have different harvesting opportunity. 
In a distributed application, the same end-user performance may be achieved using different workload allocations, and resultant energy consumptions at multiple nodes. In this case, it is important to align the workload allocation with the energy availability at the harvesting nodes. We consider the above issues in power management for energy-harvesting sensor networks. We develop abstractions to characterize the complex time varying nature of such sources with analytically tractable models and use them to address key design issues. We also develop distributed methods to efficiently use harvested energy and test these both in simulation and experimentally on an energy-harvesting sensor network, prototyped for this work.", "title": "" }, { "docid": "1af3ac7c85fbb902f419ec4776a1c571", "text": "Traditional approaches to understanding the brain's resilience to neuropathology have identified neurophysiological variables, often described as brain or cognitive \"reserve,\" associated with better outcomes. However, mechanisms of function and resilience in large-scale brain networks remain poorly understood. Dynamic network theory may provide a basis for substantive advances in understanding functional resilience in the human brain. In this perspective, we describe recent theoretical approaches from network control theory as a framework for investigating network level mechanisms underlying cognitive function and the dynamics of neuroplasticity in the human brain. We describe the theoretical opportunities offered by the application of network control theory at the level of the human connectome to understand cognitive resilience and inform translational intervention.", "title": "" }, { "docid": "27745116e5c05802bda2bc6dc548cce6", "text": "Recently, many researchers have attempted to classify Facial Attributes (FAs) by representing characteristics of FAs such as attractiveness, age, smiling and so on. In this context, recent studies have demonstrated that visual FAs are a strong background for many applications such as face verification, face search and so on. However, Facial Attribute Classification (FAC) in a wide range of attributes based on the regression representation -predicting of FAs as real-valued labelsis still a significant challenge in computer vision and psychology. In this paper, a regression model formulation is proposed for FAC in a wide range of FAs (e.g. 73 FAs). The proposed method accommodates real-valued scores to the probability of what percentage of the given FAs is present in the input image. To this end, two simultaneous dictionary learning methods are proposed to learn the regression and identity feature dictionaries simultaneously. Accordingly, a multi-level feature extraction is proposed for FAC. Then, four regression classification methods are proposed using a regression model formulated based on dictionary learning, SRC and CRC. Convincing results are", "title": "" }, { "docid": "ea447f9ed199f1b9d021a9bde733b858", "text": "1 Introduction 545 2 Eating to optimise training 545 2.1 Energy needs for training and the ideal physique 545 2.2 Strategies to reduce mass and body fat 547 2.3 Requirements for growth and gaining lean body mass 548 3 Protein needs for muscle gain, training enhancement and repair 549 4 Fuel needs for training and recovery 550 5 Eating to minimise illness and injury 551 5.", "title": "" }, { "docid": "89438b3b2a78c54a44236b720940c8f2", "text": "InProcess-Aware Information Systems, business processes are often modeled in an explicit way. 
Roughly speaking, the available business process modeling languages can be divided into two groups. Languages from the first group are preferred by academic people but shunned by business people, and include Petri nets and process algebras. These academic languages have a proper formal semantics, which allows the corresponding academic models to be verified in a formal way. Languages from the second group are preferred by business people but disliked by academic people, and include BPEL, BPMN, and EPCs. These business languages often lack any proper semantics, which often leads to debates on how to interpret certain business models. Nevertheless, business models are used in practice, whereas academic models are hardly used. To be able to use, for example, the abundance of Petri net verification techniques on business models, we need to be able to transform these models to Petri nets. In this paper, we investigate a number of Petri net transformations that already exist. For every transformation, we investigate the transformation itself, the constructs in the business models that are problematic for the transformation and the main applications for the transformation.", "title": "" }, { "docid": "4aa72e10d4679a9fbc66efb88dc55501", "text": "PURPOSE\nAdolescents with chronic illnesses have been shown to have management and treatment issues resulting in poor outcomes. These issues may arise from non-interest in self care and illness knowledge. A video game, \"Re-Mission,\" was developed to actively involve young people with cancer in their own treatment. Re-Mission provides opportunities to learn about cancer and its treatment.\n\n\nMETHOD\nThe efficacy of Re-Mission was investigated in a multi-site, randomized, controlled study with 375 adolescent and young adult cancer patients. Participants received either a regular commercial game (control) or both the regular game plus Re-Mission (Re-Mission group). Participants were given a mini-PC with the games installed and requested to play for an hour each week for 3 months. A test on cancer-related knowledge was given prior to game play (baseline) and again after 1 and 3 months. At 3 months the Re-Mission group also rated the acceptability and credibility of Re-Mission.\n\n\nRESULTS\nAnalyses of the knowledge test scores showed that whereas scores of both groups improved significantly over the follow-up periods, the scores of the Re-Mission group improved significantly more. The size of this effect was related to usage of Re-Mission. Credibility scores were negatively correlated with level of knowledge but not with change in knowledge level at 1 month or 3 months.\n\n\nCONCLUSIONS\nThe results indicate a specific effect of Re-Mission play on cancer knowledge that is not attributable to patients' expectations. It is concluded that video games can be an effective vehicle for health education in adolescents and young adults with chronic illnesses.", "title": "" }, { "docid": "5f68b3ab2253349941fc1bf7e602c6a2", "text": "Motivated by recent advances in adaptive sparse representations and nonlocal image modeling, we propose a patch-based image interpolation algorithm under a set theoretic framework. Our algorithm alternates the projection onto two convex sets: one is given by the observation data and the other defined by a sparsity-based nonlocal prior similar to BM3D. In order to optimize the design of observation constraint set, we propose to address the issue of sampling pattern and model it by a spatial point process.
A Monte-Carlo based algorithm is proposed to optimize the randomness of sampling patterns to better approximate homogeneous Poisson process. Extensive experimental results in image interpolation and coding applications are reported to demonstrate the potential of the proposed algorithms.", "title": "" }, { "docid": "17321e451d7441c8a434c637237370a2", "text": "In recent years, there are increasing interests in using path identifiers (PIDs) as inter-domain routing objects. However, the PIDs used in existing approaches are static, which makes it easy for attackers to launch the distributed denial-of-service (DDoS) flooding attacks. To address this issue, in this paper, we present the design, implementation, and evaluation of dynamic PID (D-PID), a framework that uses PIDs negotiated between the neighboring domains as inter-domain routing objects. In D-PID, the PID of an inter-domain path connecting the two domains is kept secret and changes dynamically. We describe in detail how neighboring domains negotiate PIDs and how to maintain ongoing communications when PIDs change. We build a 42-node prototype comprised of six domains to verify D-PID’s feasibility and conduct extensive simulations to evaluate its effectiveness and cost. The results from both simulations and experiments show that D-PID can effectively prevent DDoS attacks.", "title": "" }, { "docid": "277edaaf026e541bc9abc83eaabbecbe", "text": "In most situations, simple techniques for handling missing data (such as complete case analysis, overall mean imputation, and the missing-indicator method) produce biased results, whereas imputation techniques yield valid results without complicating the analysis once the imputations are carried out. Imputation techniques are based on the idea that any subject in a study sample can be replaced by a new randomly chosen subject from the same source population. Imputation of missing data on a variable is replacing that missing by a value that is drawn from an estimate of the distribution of this variable. In single imputation, only one estimate is used. In multiple imputation, various estimates are used, reflecting the uncertainty in the estimation of this distribution. Under the general conditions of so-called missing at random and missing completely at random, both single and multiple imputations result in unbiased estimates of study associations. But single imputation results in too small estimated standard errors, whereas multiple imputation results in correctly estimated standard errors and confidence intervals. In this article we explain why all this is the case, and use a simple simulation study to demonstrate our explanations. We also explain and illustrate why two frequently used methods to handle missing data, i.e., overall mean imputation and the missing-indicator method, almost always result in biased estimates.", "title": "" }, { "docid": "18d8fe3f77ab8878ae2eb72b04fa8a48", "text": "A new magneto-electric dipole antenna with a unidirectional radiation pattern is proposed.
A novel differential feeding structure is designed to provide an ultra-wideband impedance matching. A stable gain of 8.25±1.05 dBi is realized by introducing two slots in the magneto-electric dipole and using a rectangular box-shaped reflector, instead of a planar reflector. The antenna can achieve an impedance bandwidth of 114% for SWR ≤ 2 from 2.95 to 10.73 GHz. Radiation patterns with low cross polarization, low back radiation, fixing broadside direction mainbeam and symmetrical E- and H-plane patterns are obtained over the operating frequency range. Moreover, the correlation factor between the transmitting antenna input signal and the receiving antenna output signal is calculated for evaluating the time-domain characteristic. The proposed antenna, which is small in size, can be constructed easily by using PCB fabrication technique.", "title": "" }, { "docid": "9c507a2b1f57750d1b4ffeed6979a06f", "text": "Once considered provocative, the notion that the wisdom of the crowd is superior to any individual has become itself a piece of crowd wisdom, leading to speculation that online voting may soon put credentialed experts out of business. Recent applications include political and economic forecasting, evaluating nuclear safety, public policy, the quality of chemical probes, and possible responses to a restless volcano. Algorithms for extracting wisdom from the crowd are typically based on a democratic voting procedure. They are simple to apply and preserve the independence of personal judgment. However, democratic methods have serious limitations. They are biased for shallow, lowest common denominator information, at the expense of novel or specialized knowledge that is not widely shared. Adjustments based on measuring confidence do not solve this problem reliably. Here we propose the following alternative to a democratic vote: select the answer that is more popular than people predict. We show that this principle yields the best answer under reasonable assumptions about voter behaviour, while the standard ‘most popular’ or ‘most confident’ principles fail under exactly those same assumptions. Like traditional voting, the principle accepts unique problems, such as panel decisions about scientific or artistic merit, and legal or historical disputes. The potential application domain is thus broader than that covered by machine learning and psychometric methods, which require data across multiple questions.", "title": "" }, { "docid": "0cb3d77cfe1d355e948f55e18717ca22", "text": "This Wireless Mobile Battery Charger project is using the technique of inductive coupling. The basic concept of this technique was applied in transformer construction. With this technique, the power from AC or DC can be transferred through the medium of magnetic field or air space. In this project, the method is divided into two major activities, which are to propose the circuit construction and to fabricate the prototype. The result is to evaluate the distance of power that can be transferred using the technique of inductive coupling.", "title": "" }, { "docid": "41edbcaf0903bec80b3f05f306f76c6b", "text": "Breast lesion detection using ultrasound imaging is considered an important step of computer-aided diagnosis systems. Over the past decade, researchers have demonstrated the possibilities to automate the initial lesion detection. However, the lack of a common dataset impedes research when comparing the performance of such algorithms.
This paper proposes the use of deep learning approaches for breast ultrasound lesion detection and investigates three different methods: a Patch-based LeNet, a U-Net, and a transfer learning approach with a pretrained FCN-AlexNet. Their performance is compared against four state-of-the-art lesion detection algorithms (i.e., Radial Gradient Index, Multifractal Filtering, Rule-based Region Ranking, and Deformable Part Models). In addition, this paper compares and contrasts two conventional ultrasound image datasets acquired from two different ultrasound systems. Dataset A comprises 306 (60 malignant and 246 benign) images and Dataset B comprises 163 (53 malignant and 110 benign) images. To overcome the lack of public datasets in this domain, Dataset B will be made available for research purposes. The results demonstrate an overall improvement by the deep learning approaches when assessed on both datasets in terms of True Positive Fraction, False Positives per image, and F-measure.", "title": "" }, { "docid": "5eea2c2a57d85c100f4e821759610260", "text": "This paper presents an overview of a multistage signal processing framework to tackle the main challenges in continuous control protocols for motor imagery based synchronous and self-paced BCIs. The BCI can be setup rapidly and automatically even when conducting an extensive search for subject-specific parameters. A new BCI-based game training paradigm which enables assessment of continuous control performance is also introduced. A range of offline results and online analysis of the new game illustrate the potential for the proposed BCI and the advantages of using the game as a BCI training paradigm.", "title": "" }, { "docid": "595a31e82d857cedecd098bf4c910e99", "text": "Human actions in video sequences are three-dimensional (3D) spatio-temporal signals characterizing both the visual appearance and motion dynamics of the involved humans and objects. Inspired by the success of convolutional neural networks (CNN) for image classification, recent attempts have been made to learn 3D CNNs for recognizing human actions in videos. However, partly due to the high complexity of training 3D convolution kernels and the need for large quantities of training videos, only limited success has been reported. This has triggered us to investigate in this paper a new deep architecture which can handle 3D signals more effectively. Specifically, we propose factorized spatio-temporal convolutional networks (FstCN) that factorize the original 3D convolution kernel learning as a sequential process of learning 2D spatial kernels in the lower layers (called spatial convolutional layers), followed by learning 1D temporal kernels in the upper layers (called temporal convolutional layers). We introduce a novel transformation and permutation operator to make factorization in FstCN possible. Moreover, to address the issue of sequence alignment, we propose an effective training and inference strategy based on sampling multiple video clips from a given action video sequence. We have tested FstCN on two commonly used benchmark datasets (UCF-101 and HMDB-51). Without using auxiliary training videos to boost the performance, FstCN outperforms existing CNN based methods and achieves comparable performance with a recent method that benefits from using auxiliary training videos.", "title": "" } ]
scidocsrr
1201828e9489efc730dd6894a3437c29
Incentive Compatibility of Bitcoin Mining Pool Reward Functions
[ { "docid": "7ab8ccfbc6cff2804cf003c2e684c8f5", "text": "In this paper we describe the various scoring systems used to calculate rewards of participants in Bitcoin pooled mining, explain the problems each were designed to solve and analyze their respective advantages and disadvantages.", "title": "" } ]
[ { "docid": "9e6838b0fb9fc2d6b8ea541260a0e4cf", "text": "In order to achieve better collecting consumption, diagnostic and status of water, natural gas and electricity metering, an electronic device known as smart meter is introduced. These devices are increasingly installed around the globe and together with Automatic Meter Reading (AMR) technology form the basis of future intelligent metering. Devices known as concentrators collect consumption records from smart meters and send them for further processing and analysis. This paper describes the implementation and analysis of one universal electronic device that can be used as concentrator, gateway or both. Implemented device has been tested in real conditions with a smart gas meters. Meter-Bus (M-Bus) standards were discussed and how they define the structure of modern gas metering system. Special analysis is carried out about the range of communication and the impact of the place of installation of the concentrator and smart meters.", "title": "" }, { "docid": "36f960b37e7478d8ce9d41d61195f83a", "text": "An effective technique in locating a source based on intersections of hyperbolic curves defined by the time differences of arrival of a signal received at a number of sensors is proposed. The approach is noniterative and gives au explicit solution. It is an approximate realization of the maximum-likelihood estimator and is shown to attain the Cramer-Rao lower bound near the small error region. Comparisons of performance with existing techniques of beamformer, sphericat-interpolation, divide and conquer, and iterative Taylor-series methods are made. The proposed technique performs significantly better than sphericalinterpolation, and has a higher noise threshold than divide and conquer before performance breaks away from the Cramer-Rao lower bound. It provides an explicit solution form that is not available in the beamformmg and Taylor-series methods. Computational complexity is comparable to spherical-interpolation but substantially less than the Taylor-series method.", "title": "" }, { "docid": "a854ee8cf82c4bd107e93ed0e70ee543", "text": "Although the memorial benefits of testing are well established empirically, the mechanisms underlying this benefit are not well understood. The authors evaluated the mediator shift hypothesis, which states that test-restudy practice is beneficial for memory because retrieval failures during practice allow individuals to evaluate the effectiveness of mediators and to shift from less effective to more effective mediators. Across a series of experiments, participants used a keyword encoding strategy to learn word pairs with test-restudy practice or restudy only. Robust testing effects were obtained in all experiments, and results supported predictions of the mediator shift hypothesis. First, a greater proportion of keyword shifts occurred during test-restudy practice versus restudy practice. Second, a greater proportion of keyword shifts occurred after retrieval failure trials versus retrieval success trials during test-restudy practice. Third, a greater proportion of keywords were recalled on a final keyword recall test after test-restudy versus restudy practice.", "title": "" }, { "docid": "1c5ab22135bb293919022585bae160ef", "text": "Job satisfaction and employee performance has been a topic of research for decades. Whether job satisfaction influences employee satisfaction in organizations remains a crucial issue to managers and psychologists. That is where the problem lies. 
Therefore, the objective of this paper is to trace the relationship between job satisfaction and employee performance in organizations with particular reference to Nigeria. Related literature on the some theories of job satisfaction such as affective events, two-factor, equity and job characteristics was reviewed and findings from these theories indicate that a number of factors like achievement, recognition, responsibility, pay, work conditions and so on, have positive influence on employee performance in organizations. The paper adds to the theoretical debate on whether job satisfaction impacts positively on employee performance. It concludes that though the concept of job satisfaction is complex, using appropriate variables and mechanisms can go a long way in enhancing employee performance. It recommends that managers should use those factors that impact employee performance to make them happy, better their well being and the environment. It further specifies appropriate mechanisms using a theoretical approach to support empirical approaches which often lack clarity as to why the variables are related.", "title": "" }, { "docid": "97501db2db0fb83fef5cf4e30d1728d8", "text": "Autonomous automated vehicles are the next evolution in transportation and will improve safety, traffic efficiency and driving experience. Automated vehicles are equipped with multiple sensors (LiDAR, radar, camera, etc.) enabling local awareness of their surroundings. A fully automated vehicle will unconditionally rely on its sensors readings to make short-term (i.e. safety-related) and long-term (i.e. planning) driving decisions. In this context, sensors have to be robust against intentional or unintentional attacks that aim at lowering sensor data quality to disrupt the automation system. This paper presents remote attacks on camera-based system and LiDAR using commodity hardware. Results from laboratory experiments show effective blinding, jamming, replay, relay, and spoofing attacks. We propose software and hardware countermeasures that improve sensors resilience against these attacks.", "title": "" }, { "docid": "7f6e966f3f924e18cb3be0ae618309e6", "text": "designed shapes incorporating typedesign tradition, the rules related to visual appearance, and the design ideas of a skilled character designer. The typographic design process is structured and systematic: letterforms are visually related in weight, contrast, space, alignment, and style. To create a new typeface family, type designers generally start by designing a few key characters—such as o, h, p, and v— incorporating the most important structure elements such as vertical stems, round parts, diagonal bars, arches, and serifs (see Figure 1). They can then use the design features embedded into these structure elements (stem width, behavior of curved parts, contrast between thick and thin shape parts, and so on) to design the font’s remaining characters. Today’s industrial font description standards such as Adobe Type 1 or TrueType represent typographic characters by their shape outlines, because of the simplicity of digitizing the contours of well-designed, large-size master characters. However, outline characters only implicitly incorporate the designer’s intentions. Because their structure elements aren’t explicit, creating aesthetically appealing derived designs requiring coherent changes in character width, weight (boldness), and contrast is difficult. 
Outline characters aren’t suitable for optical scaling, which requires relatively fatter letter shapes at small sizes. Existing approaches for creating derived designs from outline fonts require either specifying constraints to maintain the coherence of structure elements across different characters or creating multiple master designs for the interpolation of derived designs. We present a new approach for describing and synthesizing typographic character shapes. Instead of describing characters by their outlines, we conceive each character as an assembly of structure elements (stems, bars, serifs, round parts, and arches) implemented by one or several shape components. We define the shape components by typeface-category-dependent global parameters such as the serif and junction types, by global font-dependent metrics such as the location of reference lines and the width of stems and curved parts, and by group and local parameters. (See the sidebar “Previous Work” for background information on the field of parameterizable fonts.)", "title": "" }, { "docid": "94535b71855026738a0dad677f14e5b8", "text": "Rule extraction (RE) from recurrent neural networks (RNNs) refers to finding models of the underlying RNN, typically in the form of finite state machines, that mimic the network to a satisfactory degree while having the advantage of being more transparent. RE from RNNs can be argued to allow a deeper and more profound form of analysis of RNNs than other, more or less ad hoc methods. RE may give us understanding of RNNs in the intermediate levels between quite abstract theoretical knowledge of RNNs as a class of computing devices and quantitative performance evaluations of RNN instantiations. The development of techniques for extraction of rules from RNNs has been an active field since the early 1990s. This article reviews the progress of this development and analyzes it in detail. In order to structure the survey and evaluate the techniques, a taxonomy specifically designed for this purpose has been developed. Moreover, important open research issues are identified that, if addressed properly, possibly can give the field a significant push forward.", "title": "" }, { "docid": "a8dc95d53c04f49231c8b4dea83c55f8", "text": "One of the main drawbacks of nonoverlapped coils in fractional slot concentrated winding permanent magnet (PM) machines are the high eddy current losses in both rotor core and permanent magnets induced by the asynchronous harmonics of the armature reaction field. It has been shown in the literature that the reduction of low space harmonics can effectively reduce the rotor eddy current losses. This paper shows that employing a combined star-delta winding to a three-phase PM machine with fractional slot windings and with a number of slots equal to 12, or its multiples, yields a complete cancellation to the fundamental magneto-motive force (MMF) component, which significantly reduces the induced rotor eddy current. Besides, it offers a slight increase in machine torque density. A case study on the well-known 12-slot/10-pole PM machine is conducted to explore the proposed approach. With the same concept, the general n-phase PM machine occupying 4n slots and with a dual n-phase winding is then proposed. This configuration offers a complete cancelation of all harmonics below the torque producing MMF component. Hence, the induced eddy currents in both rotor core and magnets are significantly reduced. 
The winding connection and the required number of turns for both winding groups are also given. The concept is applied to a 20-slot/18-pole stator with a dual five-phase winding, where the stator winding is connected as a combined star/pentagon connection. The proposed concept is assessed through a simulation study based on 2-D finite element analysis.", "title": "" }, { "docid": "77bb711327befd3f4169b4548cc5a85d", "text": "We present a new technique for learning visual-semantic embeddings for cross-modal retrieval. Inspired by hard negative mining, the use of hard negatives in structured prediction, and ranking loss functions, we introduce a simple change to common loss functions used for multi-modal embeddings. That, combined with fine-tuning and use of augmented data, yields significant gains in retrieval performance. We showcase our approach, VSE++, on MS-COCO and Flickr30K datasets, using ablation studies and comparisons with existing methods. On MS-COCO our approach outperforms state-ofthe-art methods by 8.8% in caption retrieval and 11.3% in image retrieval (at R@1).", "title": "" }, { "docid": "99fa507d3b36e1a42f0dbda5420e329a", "text": "Reference Points and Effort Provision A key open question for theories of reference-dependent preferences is what determines the reference point. One candidate is expectations: what people expect could affect how they feel about what actually occurs. In a real-effort experiment, we manipulate the rational expectations of subjects and check whether this manipulation influences their effort provision. We find that effort provision is significantly different between treatments in the way predicted by models of expectation-based reference-dependent preferences: if expectations are high, subjects work longer and earn more money than if expectations are low. JEL Classification: C91, D01, D84, J22", "title": "" }, { "docid": "d026b12bedce1782a17654f19c7dcdf7", "text": "The millions of movies produced in the human history are valuable resources for computer vision research. However, learning a vision model from movie data would meet with serious difficulties. A major obstacle is the computational cost – the length of a movie is often over one hour, which is substantially longer than the short video clips that previous study mostly focuses on. In this paper, we explore an alternative approach to learning vision models from movies. Specifically, we consider a framework comprised of a visual module and a temporal analysis module. Unlike conventional learning methods, the proposed approach learns these modules from different sets of data – the former from trailers while the latter from movies. This allows distinctive visual features to be learned within a reasonable budget while still preserving long-term temporal structures across an entire movie. We construct a large-scale dataset for this study and define a series of tasks on top. Experiments on this dataset showed that the proposed method can substantially reduce the training time while obtaining highly effective features and coherent temporal structures.", "title": "" }, { "docid": "05f36ee9c051f8f9ea6e48d4fdd28dae", "text": "While most theoretical work in machine learning has focused on the complexity of learning, recently there has been increasing interest in formally studying the complexity of teaching . In this paper we study the complexity of teaching by considering a variant of the on-line learning model in which a helpful teacher selects the instances. 
We measure the complexity of teaching a concept from a given concept class by a combinatorial measure we call the teaching dimension. Informally, the teaching dimension of a concept class is the minimum number of instances a teacher must reveal to uniquely identify any target concept chosen from the class. A preliminary version of this paper appeared in the Proceedings of the Fourth Annual Workshop on Computational Learning Theory, pages 303–314, August 1991. Most of this research was carried out while both authors were at MIT Laboratory for Computer Science with support provided by ARO Grant DAAL03-86-K-0171, DARPA Contract N00014-89-J-1988, NSF Grant CCR-88914428, and a grant from the Siemens Corporation. S. Goldman is currently supported in part by a G.E. Foundation Junior Faculty Grant and NSF Grant CCR-9110108.", "title": "" }, { "docid": "519b0dbeb1193a14a06ba212790f49d4", "text": "In recent years, sign language recognition has attracted much attention in computer vision. A sign language is a means of conveying the message by using hand, arm, body, and face to convey thoughts and meanings. Like spoken languages, sign languages emerge and evolve naturally within hearing-impaired communities. However, sign languages are not universal. There is no internationally recognized and standardized sign language for all deaf people. As is the case in spoken language, every country has got its own sign language with high degree of grammatical variations. The sign language used in India is commonly known as Indian Sign Language (henceforth called ISL).", "title": "" }, { "docid": "131517391d81c321f922e2c1507bb247", "text": "This study was undertaken to apply recurrent neural networks to the recognition of stock price patterns, and to develop a new method for evaluating the networks. In stock tradings, triangle patterns indicate an important clue to the trend of future change in stock prices, but the patterns are not clearly defined by rule-based approaches. From stock price data for all names of corporations listed in The First Section of Tokyo Stock Exchange, an expert called chart reader extracted sixteen triangles. These patterns were divided into two groups, 15 training patterns and one test pattern. Using stock data during past 3 years for 16 names, 16 experiments for the recognition were carried out, where the groups were cyclically used. The experiments revealed that the given test triangle was accurately recognized in 15 out of 16 experiments, and that the number of the mismatching patterns was 1.06 per name on the average. A new method was developed for evaluating recurrent networks with context transition performances, in particular, temporal transition performances. The method for the triangle sequences is applicable to decrease in mismatching patterns. By applying a cluster analysis to context vectors generated in the networks at recognition stage, a transition chart for context vector categorization was obtained for each stock price sequence. The finishing categories for the context vectors in the charts indicated that this method was effective in decreasing mismatching patterns.", "title": "" }, { "docid": "d5f2cb3839a8e129253e3433b9e9a5bc", "text": "Product classification in Commerce search (e.g., Google Product Search, Bing Shopping) involves associating categories to offers of products from a large number of merchants. The categorized offers are used in many tasks including product taxonomy browsing and matching merchant offers to products in the catalog.
Hence, learning a product classifier with high precision and recall is of fundamental importance in order to provide high quality shopping experience. A product offer typically consists of a short textual description and an image depicting the product. Traditional approaches to this classification task is to learn a classifier using only the textual descriptions of the products. In this paper, we show that the use of images, a weaker signal in our setting, in conjunction with the textual descriptions, a more discriminative signal, can considerably improve the precision of the classification task, irrespective of the type of classifier being used. We present a novel classification approach, \\Cross Adapt{} (\\CrossAdaptAcro{}), that is cognizant of the disparity in the discriminative power of different types of signals and hence makes use of the confusion matrix of dominant signal (text in our setting) to prudently leverage the weaker signal (image), for an improved performance. Our evaluation performed on data from a major Commerce search engine's catalog shows a 12\\% (absolute) improvement in precision at 100\\% coverage, and a 16\\% (absolute) improvement in recall at 90\\% precision compared to classifiers that only use textual description of products. In addition, \\CrossAdaptAcro{} also provides a more accurate classifier based only on the dominant signal (text) that can be used in situations in which only the dominant signal is available during application time.", "title": "" }, { "docid": "1a10e38cfc5cad20c64709c59053ffad", "text": "Corporate and product brands are increasingly accepted as valuable intangible assets of organisations, evidence of which is apparent in the reported fi nancial value that strong brands fetch when traded in the mergers and acquisitions markets. However, while much attention is paid to conceptualising brand equity, less is paid to how brands should be managed and delivered in order to create and safeguard brand equity. In this article we develop a conceptual model of corporate brand management for creating and safeguarding brand equity. We argue that while legal protection of the brand is important, by itself it is insuffi cient to protect brand equity in the long term. We suggest that brand management ought to play an important role in safeguarding brand equity and propose a three-stage conceptual model for building and sustaining brand equity comprising: (1) adopting a brandorientation mindset, (2) developing internal branding capabilities, and (3) consistent delivery of the brand. We put forward propositions, which, taken together, form a theory of brand management for building and safeguarding brand equity. We illustrate the theory using 14 cases of award-winning service companies. Their use serves as a demonstration of how our model applies to brand management", "title": "" }, { "docid": "1ccc1b904fa58b1e31f4f3f4e2d76707", "text": "When children and adolescents are the target population in dietary surveys many different respondent and observer considerations surface. The cognitive abilities required to self-report food intake include an adequately developed concept of time, a good memory and attention span, and a knowledge of the names of foods. From the age of 8 years there is a rapid increase in the ability of children to self-report food intake. However, while cognitive abilities should be fully developed by adolescence, issues of motivation and body image may hinder willingness to report. 
Ten validation studies of energy intake data have demonstrated that mis-reporting, usually in the direction of under-reporting, is likely. Patterns of under-reporting vary with age, and are influenced by weight status and the dietary survey method used. Furthermore, evidence for the existence of subject-specific responding in dietary assessment challenges the assumption that repeated measurements of dietary intake will eventually obtain valid data. Unfortunately, the ability to detect mis-reporters, by comparison with presumed energy requirements, is limited unless detailed activity information is available to allow the energy intake of each subject to be evaluated individually. In addition, high variability in nutrient intakes implies that, if intakes are valid, prolonged dietary recording will be required to rank children correctly for distribution analysis. Future research should focus on refining dietary survey methods to make them more sensitive to different ages and cognitive abilities. The development of improved techniques for identification of mis-reporters and investigation of the issue of differential reporting of foods should also be given priority.", "title": "" }, { "docid": "3ad47c45135498f6ed94004e28028f6e", "text": "This paper describes the theory and implementation of Bayesian networks in the context of automatic speech recognition. Bayesian networks provide a succinct and expressive graphical language for factoring joint probability distributions, and we begin by presenting the structures that are appropriate for doing speech recognition training and decoding. This approach is notable because it expresses all the details of a speech recognition system in a uniform way using only the concepts of random variables and conditional probabilities. A powerful set of computational routines complements the representational utility of Bayesian networks, and the second part of this paper describes these algorithms in detail. We present a novel view of inference in general networks – where inference is done via a change-of-variables that renders the network tree-structured and amenable to a very simple form of inference. We present the technique in terms of straightforward dynamic programming recursions analogous to HMM a–b computation, and then extend it to handle deterministic constraints amongst variables in an extremely efficient manner. The paper concludes with a sequence of experimental results that show the range of effects that can be modeled, and that significant reductions in error-rate can be expected from intelligently factored state representations. 2003 Elsevier Science Ltd. All rights reserved.", "title": "" }, { "docid": "5bd2a871d376cf2702e38ee7777b0060", "text": "Interconnected smart vehicles offer a range of sophisticated services that benefit the vehicle owners, transport authorities, car manufacturers, and other service providers. This potentially exposes smart vehicles to a range of security and privacy threats such as location tracking or remote hijacking of the vehicle. In this article, we argue that blockchain (BC), a disruptive technology that has found many applications from cryptocurrencies to smart contracts, is a potential solution to these challenges. We propose a BC-based architecture to protect the privacy of users and to increase the security of the vehicular ecosystem. Wireless remote software updates and other emerging services such as dynamic vehicle insurance fees are used to illustrate the efficacy of the proposed security architecture. 
We also qualitatively argue the resilience of the architecture against common security attacks.", "title": "" }, { "docid": "0f10bb2afc1797fad603d8c571058ecb", "text": "This paper presents findings from the All Wales Hate Crime Project. Most hate crime research has focused on discrete victim types in isolation. For the first time, internationally, this paper examines the psychological and physical impacts of hate crime across seven victim types drawing on quantitative and qualitative data. It contributes to the hate crime debate in two significant ways: (1) it provides the first look at the problem in Wales and (2) it provides the first multi-victim-type analysis of hate crime, showing that impacts are not homogenous across victim groups. The paper provides empirical credibility to the impacts felt by hate crime victims on the margins who have routinely struggled to gain support.", "title": "" } ]
scidocsrr
072886dca67cf7b844206b28e21f408c
Diesel engine performance and exhaust emission analysis using waste cooking biodiesel fuel with an artificial neural network
[ { "docid": "e44e5c574fda3f03f8ec21f04eb1c417", "text": "Biodiesel (fatty acid methyl esters), which is derived from triglycerides by transesterification with methanol, has attracted considerable attention during the past decade as a renewable, biodegradable, and nontoxic fuel. Several processes for biodiesel fuel production have been developed, among which transesterification using alkali-catalysis gives high levels of conversion of triglycerides to their corresponding methyl esters in short reaction times. This process has therefore been widely utilized for biodiesel fuel production in a number of countries. Recently, enzymatic transesterification using lipase has become more attractive for biodiesel fuel production, since the glycerol produced as a by-product can easily be recovered and the purification of fatty methyl esters is simple to accomplish. The main hurdle to the commercialization of this system is the cost of lipase production. As a means of reducing the cost, the use of whole cell biocatalysts immobilized within biomass support particles is significantly advantageous since immobilization can be achieved spontaneously during batch cultivation, and in addition, no purification is necessary. The lipase production cost can be further lowered using genetic engineering technology, such as by developing lipases with high levels of expression and/or stability towards methanol. Hence, whole cell biocatalysts appear to have great potential for industrial application.", "title": "" } ]
[ { "docid": "fd2d04af3b259a433eb565a41b11ffbd", "text": "OVERVIEW • We develop novel orthogonality regularizations on training deep CNNs, by borrowing ideas and tools from sparse optimization. • These plug-and-play regularizations can be conveniently incorporated into training almost any CNN without extra hassle. • The proposed regularizations can consistently improve the performances of baseline deep networks on CIFAR-10/100, ImageNet and SVHN datasets, based on intensive empirical experiments, as well as accelerate/stabilize the training curves. • The proposed orthogonal regularizations outperform existing competitors.", "title": "" }, { "docid": "2937c5cd1848daa74bb35aaba80890b7", "text": "Neurofeedback (NF) is a training to enhance self-regulatory capacity over brain activity patterns and consequently over brain mental states. Recent findings suggest that NF is a promising alternative for the treatment of attention-deficit/hyperactivity disorder (ADHD). We comprehensively reviewed literature searching for studies on the effectiveness and specificity of NF for the treatment of ADHD. In addition, clinically informative evidence-based data are discussed. We found 3 systematic review on the use of NF for ADHD and 6 randomized controlled trials that have not been included in these reviews. Most nonrandomized controlled trials found positive results with medium-to-large effect sizes, but the evidence for effectiveness are less robust when only randomized controlled studies are considered. The direct comparison of NF and sham-NF in 3 published studies have found no group differences, nevertheless methodological caveats, such as the quality of the training protocol used, sample size, and sample selection may have contributed to the negative results. Further data on specificity comes from electrophysiological studies reporting that NF effectively changes brain activity patterns. No safety issues have emerged from clinical trials and NF seems to be well tolerated and accepted. Follow-up studies support long-term effects of NF. Currently there is no available data to guide clinicians on the predictors of response to NF and on optimal treatment protocol. In conclusion, NF is a valid option for the treatment for ADHD, but further evidence is required to guide its use.", "title": "" }, { "docid": "dbafe7db0387b56464ac630404875465", "text": "Recognition of body posture and motion is an important physiological function that can keep the body in balance. Man-made motion sensors have also been widely applied for a broad array of biomedical applications including diagnosis of balance disorders and evaluation of energy expenditure. This paper reviews the state-of-the-art sensing components utilized for body motion measurement. The anatomy and working principles of a natural body motion sensor, the human vestibular system, are first described. Various man-made inertial sensors are then elaborated based on their distinctive sensing mechanisms. In particular, both the conventional solid-state motion sensors and the emerging non solid-state motion sensors are depicted. 
With their lower cost and increased intelligence, man-made motion sensors are expected to play an increasingly important role in biomedical systems for basic research as well as clinical diagnostics.", "title": "" }, { "docid": "c6a25dc466e4a22351359f17bd29916c", "text": "We consider practical methods for adding constraints to the K-Means clustering algorithm in order to avoid local solutions with empty clusters or clusters having very few points. We often observe this phenomena when applying K-Means to datasets where the number of dimensions is n 10 and the number of desired clusters is k 20. We propose explicitly adding k constraints to the underlying clustering optimization problem requiring that each cluster have at least a minimum number of points in it. We then investigate the resulting cluster assignment step. Preliminary numerical tests on real datasets indicate the constrained approach is less prone to poor local solutions, producing a better summary of the underlying data. Contrained K-Means Clustering 1", "title": "" }, { "docid": "54899cac2cd13865e117d800bb21fb8b", "text": "The purpose of this study is to give a detailed performance comparison about the feature detector and descriptor methods, particularly when their various combinations are used for image matching. As the case study, the localization experiments of a mobile robot in an indoor environment are given. In these experiments, 3090 query images and 127 dataset images are used. This study includes five methods for feature detectors such as features from accelerated segment test (FAST), oriented FAST and rotated binary robust independent elementary features (BRIEF) (ORB), speeded-up robust features (SURF), scale invariant feature transform (SIFT), binary robust invariant scalable keypoints (BRISK), and five other methods for feature descriptors which are BRIEF, BRISK, SIFT, SURF, and ORB. These methods are used in 23 different combinations and it was possible to obtain meaningful and consistent comparison results using some performance criteria defined in this study. All of these methods are used independently and separately from each other as being feature detector or descriptor. The performance analysis shows the discriminative power of various combinations of detector and descriptor methods. The analysis is completed using five parameters such as (i) accuracy, (ii) time, (iii) angle difference between keypoints, (iv) number of correct matches, and (v) distance between correctly matched keypoints. In a range of 60°, covering five rotational pose points for our system, “FAST-SURF” combination gave the best results with the lowest distance and angle difference values and highest number of matched keypoints. The combination “SIFT-SURF” is obtained as the most accurate combination with 98.41% of correct classification rate. The fastest algorithm is achieved with “ORB-BRIEF” combination with a total running time 21303.30 seconds in order to match 560 images captured during the motion with 127 dataset images.", "title": "" }, { "docid": "fd8f4206ae749136806a35c0fe1597c7", "text": "In this paper, an inductor-inductor-capacitor (LLC) resonant dc-dc converter design procedure for an onboard lithium-ion battery charger of a plug-in hybrid electric vehicle (PHEV) is presented. Unlike traditional resistive load applications, the characteristic of a battery load is nonlinear and highly related to the charging profiles. 
Based on the features of an LLC converter and the characteristics of the charging profiles, the design considerations are studied thoroughly. The worst-case conditions for primary-side zero-voltage switching (ZVS) operation are analytically identified based on fundamental harmonic approximation when a constant maximum power (CMP) charging profile is implemented. Then, the worst-case operating point is used as the design targeted point to ensure soft-switching operation globally. To avoid the inaccuracy of fundamental harmonic approximation approach in the below-resonance region, the design constraints are derived based on a specific operation mode analysis. Finally, a step-by-step design methodology is proposed and validated through experiments on a prototype converting 400 V from the input to an output voltage range of 250-450 V at 3.3 kW with a peak efficiency of 98.2%.", "title": "" }, { "docid": "66b2ca04ed0b1435d525f04cd81969ac", "text": "Over the past couple of decades, trends in both microarchitecture and underlying semiconductor technology have significantly reduced microprocessor clock periods. These trends have significantly increased relative main-memory latencies as measured in processor clock cycles. To avoid large performance losses caused by long memory access delays, microprocessors rely heavily on a hierarchy of cache memories. But cache memories are not always effective, either because they are not large enough to hold a program's working set, or because memory access patterns don't exhibit behavior that matches a cache memory's demand-driven, line-structured organization. To partially overcome cache memories' limitations, we organize data cache prefetch information in a new way, a GHB (global history buffer) supports existing prefetch algorithms more effectively than conventional prefetch tables. It reduces stale table data, improving accuracy and reducing memory traffic. It contains a more complete picture of cache miss history and is smaller than conventional tables.", "title": "" }, { "docid": "8e53fff50063f2956e8f65e14bec77a4", "text": "Mobile Edge Computing (MEC) provides mobile and cloud computing capabilities within the access network, and aims to unite the telco and IT at the mobile network edge. This paper presents an investigation on the progress of MEC, and proposes a platform, named WiCloud, to provide edge networking, proximate computing and data acquisition for innovative services. Furthermore, the open challenges that must be addressed before the commercial deployment of MEC are discussed.", "title": "" }, { "docid": "79f5415cfc7f89685227abb130cd75e5", "text": "Software engineering is knowledge-intensive work, and how to manage software engineering knowledge has received much attention. This systematic review identifies empirical studies of knowledge management initiatives in software engineering, and discusses the concepts studied, the major findings, and the research methods used. Seven hundred and sixty-two articles were identified, of which 68 were studies in an industry context. Of these, 29 were empirical studies and 39 reports of lessons learned. More than half of the empirical studies were case studies. The majority of empirical studies relate to technocratic and behavioural aspects of knowledge management, while there are few studies relating to economic, spatial and cartographic approaches. A finding reported across multiple papers was the need to not focus exclusively on explicit knowledge, but also consider tacit knowledge. 
We also describe implications for research and for practice.", "title": "" }, { "docid": "88f60c6835fed23e12c56fba618ff931", "text": "Design of fault tolerant systems is a popular subject in flight control system design. In particular, adaptive control approach has been successful in recovering aircraft in a wide variety of different actuator/sensor failure scenarios. However, if the aircraft goes under a severe actuator failure, control system might not be able to adapt fast enough to changes in the dynamics, which would result in performance degradation or even loss of the aircraft. Inspired by the recent success of deep learning applications, this work builds a hybrid recurren-t/convolutional neural network model to estimate adaptation parameters for aircraft dynamics under actuator/engine faults. The model is trained offline from a database of different failure scenarios. In case of an actuator/engine failure, the model identifies adaptation parameters and feeds this information to the adaptive control system, which results in significantly faster convergence of the controller coefficients. Developed control system is implemented on a nonlinear 6-DOF F-16 aircraft, and the results show that the proposed architecture is especially beneficial in severe failure scenarios.", "title": "" }, { "docid": "8a2f40f2a0082fae378c7907a60159ac", "text": "We present a novel graph-based neural network model for relation extraction. Our model treats multiple pairs in a sentence simultaneously and considers interactions among them. All the entities in a sentence are placed as nodes in a fully-connected graph structure. The edges are represented with position-aware contexts around the entity pairs. In order to consider different relation paths between two entities, we construct up to l-length walks between each pair. The resulting walks are merged and iteratively used to update the edge representations into longer walks representations. We show that the model achieves performance comparable to the state-ofthe-art systems on the ACE 2005 dataset without using any external tools.", "title": "" }, { "docid": "86846cd0bc21747e651191a170ad6af7", "text": "Recent advances in deep learning have enabled researchers across many disciplines to uncover new insights about large datasets. Deep neural networks have shown applicability to image, time-series, textual, and other data, all of which are available in a plethora of research fields. However, their computational complexity and large memory overhead requires advanced software and hardware technologies to train neural networks in a reasonable amount of time. To make this possible, there has been an influx in development of deep learning software that aim to leverage advanced hardware resources. In order to better understand the performance implications of deep learning frameworks over these different resources, we analyze the performance of three different frameworks, Caffe, TensorFlow, and Apache SINGA, over several hardware environments. This includes scaling up and out with single-and multi-node setups using different CPU and GPU technologies. Notably, we investigate the performance characteristics of NVIDIA's state-of-the-art hardware technology, NVLink, and also Intel's Knights Landing, the most advanced Intel product for deep learning, with respect to training time and utilization. To our best knowledge, this is the first work concerning deep learning bench-marking with NVLink and Knights Landing. 
Through these experiments, we provide analysis of the frameworks' performance over different hardware environments in terms of speed and scaling. As a result of this work, better insight is given towards both using and developing deep learning tools that cater to current and upcoming hardware technologies.", "title": "" }, { "docid": "0dd558f3094d82f55806d1170218efce", "text": "As the key supporting system of telecommunication enterprises, OSS/BSS needs to support the service steadily in the long-term running and maintenance process. The system architecture must remain steady and consistent in order to accomplish its goal, which is quite difficult when both the technique and business requirements are changing so rapidly. The framework method raised in this article can guarantee the system architecture’s steadiness and business processing’s consistence by means of describing business requirements, application and information abstractly, becoming more specific and formalized in the planning, developing and maintaining process, and getting the results needed. This article introduces firstly the concepts of framework method, then recommends its applications and superiority in OSS/BSS systems, and lastly gives the prospect of its application.", "title": "" }, { "docid": "4e70489d8c2108a60431b42b155f516a", "text": "The notion of ‘wireheading’, or direct reward centre stimulation of the brain, is a wellknown concept in neuroscience. In this paper, we examine the corresponding issue of reward (utility) function integrity in artificially intelligent machines. We survey the relevant literature and propose a number of potential solutions to ensure the integrity of our artificial assistants. Overall, we conclude that wireheading in rational selfimproving optimisers above a certain capacity remains an unsolved problem despite opinion of many that such machines will choose not to wirehead. A relevant issue of literalness in goal setting also remains largely unsolved and we suggest that the development of a non-ambiguous knowledge transfer language might be a step in the right direction.", "title": "" }, { "docid": "a8661d8747a8201afff10112889db151", "text": "Empathy is a multidimensional construct consisting of cognitive (inferring mental states) and emotional (empathic concern) components. Despite a paucity of research, individuals on the autism spectrum are generally believed to lack empathy. In the current study we used a new, photo-based measure, the Multifaceted Empathy Test (MET), to assess empathy multidimensionally in a group of 17 individuals with Asperger syndrome (AS) and 18 well-matched controls. Results suggested that while individuals with AS are impaired in cognitive empathy, they do not differ from controls in emotional empathy. Level of general emotional arousability and socially desirable answer tendencies did not differ between groups. Internal consistency of the MET's scales ranged from .71 to .92, and convergent and divergent validity were highly satisfactory.", "title": "" }, { "docid": "e4c27a97a355543cf113a16bcd28ca50", "text": "A metamaterial-based broadband low-profile grid-slotted patch antenna is presented. By slotting the radiating patch, a periodic array of series capacitor loaded metamaterial patch cells is formed, and excited through the coupling aperture in a ground plane right underneath and parallel to the slot at the center of the patch. By exciting two adjacent resonant modes simultaneously, broadband impedance matching and consistent radiation are achieved. 
The dispersion relation of the capacitor-loaded patch cell is applied in the mode analysis. The proposed grid-slotted patch antenna with a low profile of 0.06 λ0 (λ0 is the center operating wavelength in free space) achieves a measured bandwidth of 28% for the |S11| less than -10 dB and maximum gain of 9.8 dBi.", "title": "" }, { "docid": "4d276851b607fff6267ec03d6f28a471", "text": "The polysaccharide-rich wall, which envelopes the fungal cell, is pivotal to the maintenance of cellular integrity and for the protection of the cell from external aggressors - such as environmental fluxes and during host infection. This review considers the commonalities in the composition of the wall across the fungal kingdom, addresses how little is known about the assembly of the polysaccharide matrix, and considers changes in the wall of plant-pathogenic fungi during on and in planta growth, following the elucidation of infection structures requiring cell wall alterations. It highlights what is known about the phytopathogenic fungal wall and what needs to be discovered.", "title": "" }, { "docid": "839f8f079c4134641f6bf4051200dd8d", "text": "Although Industrie 4.0 is currently a top priority for many companies, research centers, and universities, a generally accepted definition of the term does not exist. As a result, discussing the topic on an academic level is difficult, and so is implementing Industrie 4.0 scenarios. Based on a literature review, the paper provides a definition of Industrie 4.0 and identifies six design principles for its implementation: interoperability, virtualization, decentralization, real-time capability, service orientation, and modularity. Taking into account these principles, academics may be enabled to further investigate on the topic, while practitioners may find assistance in implementing appropriate scenarios.", "title": "" }, { "docid": "50f7fd72dcd833c92efb56fb71918263", "text": "The input vocabulary for touch-screen interaction on handhelds is dramatically limited, especially when the thumb must be used. To enrich that vocabulary we propose to discriminate, among thumb gestures, those we call MicroRolls, characterized by zero tangential velocity of the skin relative to the screen surface. Combining four categories of thumb gestures, Drags, Swipes, Rubbings and MicroRolls, with other classification dimensions, we show that at least 16 elemental gestures can be automatically recognized. We also report the results of two experiments showing that the roll vs. slide distinction facilitates thumb input in a realistic copy and paste task, relative to existing interaction techniques.", "title": "" }, { "docid": "97af9704b898bebe4dae43c1984bc478", "text": "In earlier work we have shown that adults, young children, and infants are capable of computing transitional probabilities among adjacent syllables in rapidly presented streams of speech, and of using these statistics to group adjacent syllables into word-like units. In the present experiments we ask whether adult learners are also capable of such computations when the only available patterns occur in non-adjacent elements. In the first experiment, we present streams of speech in which precisely the same kinds of syllable regularities occur as in our previous studies, except that the patterned relations among syllables occur between non-adjacent syllables (with an intervening syllable that is unrelated). 
Under these circumstances we do not obtain our previous results: learners are quite poor at acquiring regular relations among non-adjacent syllables, even when the patterns are objectively quite simple. In subsequent experiments we show that learners are, in contrast, quite capable of acquiring patterned relations among non-adjacent segments-both non-adjacent consonants (with an intervening vocalic segment that is unrelated) and non-adjacent vowels (with an intervening consonantal segment that is unrelated). Finally, we discuss why human learners display these strong differences in learning differing types of non-adjacent regularities, and we conclude by suggesting that these contrasts in learnability may account for why human languages display non-adjacent regularities of one type much more widely than non-adjacent regularities of the other type.", "title": "" } ]
scidocsrr
9e332c1b5628fba6c160db8d95adab35
Genetic disorders associated with macrocephaly.
[ { "docid": "ff5097d34b7c88d6772d18b5a87a71e9", "text": "While abnormalities in head circumference in autism have been observed for decades, it is only recently that scientists have begun to focus in on the developmental origins of such a phenomenon. In this article we review past and present literature on abnormalities in head circumference, as well as recent developmental MRI studies of brain growth in this disorder. We hypothesize that brain growth abnormalities are greatest in frontal lobes, particularly affecting large neurons such as pyramidal cells, and speculate how this abnormality might affect neurofunctional circuitry in autism. The relationship to clinical characteristics and other disorders of macrencephaly are discussed.", "title": "" } ]
[ { "docid": "a32411be8c0fabc872808fd37c6ae41b", "text": "Sentence classification, serving as the foundation of the subsequent text-based processing, continues attracting researchers attentions. Recently, with the great success of deep learning, convolutional neural network (CNN), a kind of common architecture of deep learning, has been widely used to this filed and achieved excellent performance. However, most CNN-based studies focus on using complex architectures to extract more effective category information, requiring more time in training models. With the aim to get better performance with less time cost on classification, this paper proposes two simple and effective methods by fully combining information both extracted from statistics and CNN. The first method is S-SFCNN, which combines statistical features and CNN-based probabilistic features of classification to build feature vectors, and then the vectors are used to train the logistic regression classifiers. And the second method is C-SFCNN, which combines CNN-based features and statistics-based probabilistic features of classification to build feature vectors. In the two methods, the Naive Bayes log-count ratios are selected as the text statistical features and the single-layer and single channel CNN is used as our CNN architecture. The testing results executed on 7 tasks show that our methods can achieve better performance than many other complex CNN models with less time cost. In addition, we summarized the main factors influencing the performance of our methods though experiment.", "title": "" }, { "docid": "2debaecdacfa8e62bb78ff8f0cba2ce4", "text": "Analysis techniques, such as control flow, data flow, and control dependence, are used for a variety of software-engineering tasks, including structural and regression testing, dynamic execution profiling, static and dynamic slicing, and program understanding. To be applicable to programs in languages, such as Java and C++, these analysis techniques must account for the effects of exception occurrences and exception-handling constructs; failure to do so can cause the analysis techniques to compute incorrect results and thus, limit the usefulness of the applications that use them. This paper discusses the effects of exceptionhandling constructs on several analysis techniques. The paper presents techniques to construct representations for programs with explicit exception occurrences—exceptions that are raised explicitly through throw statements—and exception-handling constructs. The paper presents algorithms that use these representations to perform the desired analyses. The paper also discusses several softwareengineering applications that use these analyses. Finally, the paper describes empirical results pertaining to the occurrence of exception-handling constructs in Java programs, and their effects on some analysis tasks. Keywords— Exception handling, control-flow analysis, control-dependence analysis, data-flow analysis, program slicing, structural testing.", "title": "" }, { "docid": "104148028f4d0e2775274ef7d2e8b2ed", "text": "Funneling and saltation are two major illusory feedback techniques for vibration-based tactile feedback. They are often put into practice e.g. to reduce the number of vibrators to be worn on the body and thereby build a less cumbersome feedback device. Recently, these techniques have been found to be applicable to eliciting \"out of the body\" experiences as well (e.g. through user-held external objects). 
This paper examines the possibility of applying this phenomenon to interacting with virtual objects. Two usability experiments were run to test the effects of funneling and saltation respectively for perceiving tactile sensation from a virtual object in an augmented reality setting. Experimental results have shown solid evidences for phantom sensations from virtual objects with funneling, but mixed results for saltation.", "title": "" }, { "docid": "3fbbf3368b7ae10f795ce3a746914448", "text": "In this letter, we introduce some mathematical and numerical tools to analyze and interpret inhomogeneous quadratic forms. The resulting characterization is in some aspects similar to that given by experimental studies of cortical cells, making it particularly suitable for application to second-order approximations and theoretical models of physiological receptive fields. We first discuss two ways of analyzing a quadratic form by visualizing the coefficients of its quadratic and linear term directly and by considering the eigenvectors of its quadratic term. We then present an algorithm to compute the optimal excitatory and inhibitory stimulithose that maximize and minimize the considered quadratic form, respectively, given a fixed energy constraint. The analysis of the optimal stimuli is completed by considering their invariances, which are the transformations to which the quadratic form is most insensitive, and by introducing a test to determine which of these are statistically significant. Next we propose a way to measure the relative contribution of the quadratic and linear term to the total output of the quadratic form. Furthermore, we derive simpler versions of the above techniques in the special case of a quadratic form without linear term. In the final part of the letter, we show that for each quadratic form, it is possible to build an equivalent two-layer neural network, which is compatible with (but more general than) related networks used in some recent articles and with the energy model of complex cells. We show that the neural network is unique only up to an arbitrary orthogonal transformation of the excitatory and inhibitory subunits in the first layer.", "title": "" }, { "docid": "d9ddbac5032e7ff445ea57ac3fdfe8a9", "text": "Blood-brain barrier disruption, microglial activation and neurodegeneration are hallmarks of multiple sclerosis. However, the initial triggers that activate innate immune responses and their role in axonal damage remain unknown. Here we show that the blood protein fibrinogen induces rapid microglial responses toward the vasculature and is required for axonal damage in neuroinflammation. Using in vivo two-photon microscopy, we demonstrate that microglia form perivascular clusters before myelin loss or paralysis onset and that, of the plasma proteins, fibrinogen specifically induces rapid and sustained microglial responses in vivo. Fibrinogen leakage correlates with areas of axonal damage and induces reactive oxygen species release in microglia. Blocking fibrin formation with anticoagulant treatment or genetically eliminating the fibrinogen binding motif recognized by the microglial integrin receptor CD11b/CD18 inhibits perivascular microglial clustering and axonal damage. 
Thus, early and progressive perivascular microglial clustering triggered by fibrinogen leakage upon blood-brain barrier disruption contributes to axonal damage in neuroinflammatory disease.", "title": "" }, { "docid": "33a965548b67adb7824d1c452ace24ee", "text": "As process nodes continue to shrink, the semiconductor industry faces severe manufacturing challenges. Two most expected technologies may push the limits of next-generation lithography: extreme ultraviolet lithography (EUVL) and electron beam lithography (EBL). EUVL works by emitting intense beams of ultraviolet light that are reflected from a reflective mask into a resist for nanofabrication, while EBL scans focused beams of electrons to directly draw high-resolution feature patterns on a resist without employing any mask. Each of the two technologies encounters unique design challenges and requires solutions for a breakthrough. In this paper, we focus on the design-for-manufacturability issues for EUVL and EBL. We investigate the most critical design challenges of the two technologies, flare and shadowing effects for EUVL, and heating, stitching, fogging, and proximity effects for EBL. Preliminary solutions for these effects are explored, which can contribute to the continuing scaling of the CMOS technology. Finally, we provide future research directions for these key effects.", "title": "" }, { "docid": "1f0fd314cdc4afe7b7716ca4bd681c16", "text": "Automatic speech recognition can potentially benefit from the lip motion patterns, complementing acoustic speech to improve the overall recognition performance, particularly in noise. In this paper we propose an audio-visual fusion strategy that goes beyond simple feature concatenation and learns to automatically align the two modalities, leading to enhanced representations which increase the recognition accuracy in both clean and noisy conditions. We test our strategy on the TCD-TIMIT and LRS2 datasets, designed for large vocabulary continuous speech recognition, applying three types of noise at different power ratios. We also exploit state of the art Sequence-to-Sequence architectures, showing that our method can be easily integrated. Results show relative improvements from 7% up to 30% on TCD-TIMIT over the acoustic modality alone, depending on the acoustic noise level. We anticipate that the fusion strategy can easily generalise to many other multimodal tasks which involve correlated modalities.", "title": "" }, { "docid": "4301aa3bb6a7d1ca9c0c17b8a12ebb37", "text": "A CAPTCHA is a test that can, automatically, tell human and computer programs apart. It is a mechanism widely used nowadays for protecting web applications, interfaces, and services from malicious users and automated spammers. Usability and robustness are two fundamental aspects with CAPTCHA, where the usability aspect is the ease with which humans pass its challenges, while the robustness is the strength of its segmentation-resistance mechanism. The collapsing mechanism, which is removing the space between characters to prevent segmentation, has been shown to be reasonably resistant to known attacks. On the other hand, this mechanism drops considerably the human-solvability of text-based CAPTCHAs. Accordingly, an optimizer has previously been proposed that automatically enhances the usability of a CAPTCHA generation without sacrificing its robustness level. However, this optimizer has not yet been evaluated in terms of improving the usability. 
This paper, therefore, evaluates the usability of this optimizer by conducting an experimental study. The results of this evaluation showed that a statistically significant enhancement is found in the usability of text-based CAPTCHA generation. Keywords—text-based CAPTCHA; usability; security; optimization; experimentation; evaluation", "title": "" }, { "docid": "be20cb4f75ff0d4d1637095d5928b005", "text": "Ensemble learning has been proved to improve the generalization ability effectively in both theory and practice. In this paper, we briefly outline the current status of research on it first. Then, a new deep neural network-based ensemble method that integrates filtering views, local views, distorted views, explicit training, implicit training, subview prediction, and Simple Average is proposed for biomedical time series classification. Finally, we validate its effectiveness on the Chinese Cardiovascular Disease Database containing a large number of electrocardiogram recordings. The experimental results show that the proposed method has certain advantages compared to some well-known ensemble methods, such as Bagging and AdaBoost.", "title": "" }, { "docid": "d2f36cc750703f5bbec2ea3ef4542902", "text": "ixed reality (MR) is a kind of virtual reality (VR) but a broader concept than augmented reality (AR), which augments the real world with synthetic electronic data. On the opposite side, there is a term, augmented virtuality (AV), which enhances or augments the virtual environment (VE) with data from the real world. Mixed reality covers a continuum from AR to AV. This concept embraces the definition of MR stated by Paul Milgram. 1 We participated in the Key Technology Research Project on Mixed Reality Systems (MR Project) in Japan. The Japanese government and Canon funded the Mixed Reality Systems Laboratory (MR Lab) and launched it in January 1997. We completed this national project in March 2001. At the end of the MR Project, an event called MiRai-01 (mirai means future in Japanese) was held at Yokohama, Japan, to demonstrate this emerging technology all over the world. This event was held in conjunction with two international conferences, IEEE Virtual Reality 2001 and the Second International Symposium on Mixed Reality (ISMR) and aggregated about 3,000 visitors for two days. This project aimed to produce an innovative information technology that could be used in the first decade of the 21st century while expanding the limitations of traditional VR technology. The basic policy we maintained throughout this project was to emphasize a pragmatic system development rather than a theory and to make such a system always available to people. Since MR is an advanced form of VR, the MR system inherits a VR char-acteristic—users can experience the world of MR interactively. According to this policy, we tried to make the system work in real time. Then, we enhanced each of our systems in their response speed and image quality in real time to increase user satisfaction. We describe the aim and research themes of the MR Project in Tamura et al. 2 To develop MR systems along this policy, we studied the fundamental problems of AR and AV and developed several methods to solve them in addition to system development issues. For example, we created a new image-based rendering method for AV systems, hybrid registration methods, and new types of see-through head-mounted displays (ST-HMDs) for AR systems. 
Three universities in Japan—University of Tokyo (Michi-taka Hirose), University of Tsukuba (Yuichic Ohta), and Hokkaido University (Tohru Ifukube)—collaborated with us to study the broad research area of MR. The side-bar, \" Four Types of MR Visual Simulation, …", "title": "" }, { "docid": "f45d6d572325e20bad1eaffe5330f077", "text": "Ongoing brain activity can be recorded as electroen-cephalograph (EEG) to discover the links between emotional states and brain activity. This study applied machine-learning algorithms to categorize EEG dynamics according to subject self-reported emotional states during music listening. A framework was proposed to optimize EEG-based emotion recognition by systematically 1) seeking emotion-specific EEG features and 2) exploring the efficacy of the classifiers. Support vector machine was employed to classify four emotional states (joy, anger, sadness, and pleasure) and obtained an averaged classification accuracy of 82.29% ± 3.06% across 26 subjects. Further, this study identified 30 subject-independent features that were most relevant to emotional processing across subjects and explored the feasibility of using fewer electrodes to characterize the EEG dynamics during music listening. The identified features were primarily derived from electrodes placed near the frontal and the parietal lobes, consistent with many of the findings in the literature. This study might lead to a practical system for noninvasive assessment of the emotional states in practical or clinical applications.", "title": "" }, { "docid": "48f06ed96714c2970550fef88d21d517", "text": "Support vector machines (SVMs) are becoming popular in a wide variety of biological applications. But, what exactly are SVMs and how do they work? And what are their most promising applications in the life sciences?", "title": "" }, { "docid": "0dac38edf20c2a89a9eb46cd1300162c", "text": "Common software weaknesses, such as improper input validation, integer overflow, can harm system security directly or indirectly, causing adverse effects such as denial-of-service, execution of unauthorized code. Common Weakness Enumeration (CWE) maintains a standard list and classification of common software weakness. Although CWE contains rich information about software weaknesses, including textual descriptions, common sequences and relations between software weaknesses, the current data representation, i.e., hyperlined documents, does not support advanced reasoning tasks on software weaknesses, such as prediction of missing relations and common consequences of CWEs. Such reasoning tasks become critical to managing and analyzing large numbers of common software weaknesses and their relations. In this paper, we propose to represent common software weaknesses and their relations as a knowledge graph, and develop a translation-based, description-embodied knowledge representation learning method to embed both software weaknesses and their relations in the knowledge graph into a semantic vector space. The vector representations (i.e., embeddings) of software weaknesses and their relations can be exploited for knowledge acquisition and inference. We conduct extensive experiments to evaluate the performance of software weakness and relation embeddings in three reasoning tasks, including CWE link prediction, CWE triple classification, and common consequence prediction. 
Our knowledge graph embedding approach outperforms other description- and/or structure-based representation learning methods.", "title": "" }, { "docid": "74d6c2fff4b67d05871ca0debbc4ec15", "text": "There is great interest in developing rechargeable lithium batteries with higher energy capacity and longer cycle life for applications in portable electronic devices, electric vehicles and implantable medical devices. Silicon is an attractive anode material for lithium batteries because it has a low discharge potential and the highest known theoretical charge capacity (4,200 mAh g(-1); ref. 2). Although this is more than ten times higher than existing graphite anodes and much larger than various nitride and oxide materials, silicon anodes have limited applications because silicon's volume changes by 400% upon insertion and extraction of lithium which results in pulverization and capacity fading. Here, we show that silicon nanowire battery electrodes circumvent these issues as they can accommodate large strain without pulverization, provide good electronic contact and conduction, and display short lithium insertion distances. We achieved the theoretical charge capacity for silicon anodes and maintained a discharge capacity close to 75% of this maximum, with little fading during cycling.", "title": "" }, { "docid": "d9de6a277eec1156e680ee6f656cea10", "text": "Research in the areas of organizational climate and work performance was used to develop a framework for measuring perceptions of safety at work. The framework distinguished perceptions of the work environment from perceptions of performance related to safety. Two studies supported application of the framework to employee perceptions of safety in the workplace. Safety compliance and safety participation were distinguished as separate components of safety-related performance. Perceptions of knowledge about safety and motivation to perform safely influenced individual reports of safety performance and also mediated the link between safety climate and safety performance. Specific dimensions of safety climate were identified and constituted a higher order safety climate factor. The results support conceptualizing safety climate as an antecedent to safety performance in organizations.", "title": "" }, { "docid": "b57b392e89b92aecb03235eeaaf248c8", "text": "Recent advances in semiconductor performance made possible by organic π-electron molecules, carbon-based nanomaterials, and metal oxides have been a central scientific and technological research focus over the past decade in the quest for flexible and transparent electronic products. However, advances in semiconductor materials require corresponding advances in compatible gate dielectric materials, which must exhibit excellent electrical properties such as large capacitance, high breakdown strength, low leakage current density, and mechanical flexibility on arbitrary substrates. Historically, conventional silicon dioxide (SiO2) has dominated electronics as the preferred gate dielectric material in complementary metal oxide semiconductor (CMOS) integrated transistor circuitry. However, it does not satisfy many of the performance requirements for the aforementioned semiconductors due to its relatively low dielectric constant and intransigent processability. 
High-k inorganics such as hafnium dioxide (HfO2) or zirconium dioxide (ZrO2) offer some increases in performance, but scientists have great difficulty depositing these materials as smooth films at temperatures compatible with flexible plastic substrates. While various organic polymers are accessible via chemical synthesis and readily form films from solution, they typically exhibit low capacitances, and the corresponding transistors operate at unacceptably high voltages. More recently, researchers have combined the favorable properties of high-k metal oxides and π-electron organics to form processable, structurally well-defined, and robust self-assembled multilayer nanodielectrics, which enable high-performance transistors with a wide variety of unconventional semiconductors. In this Account, we review recent advances in organic-inorganic hybrid gate dielectrics, fabricated by multilayer self-assembly, and their remarkable synergy with unconventional semiconductors. We first discuss the principals and functional importance of gate dielectric materials in thin-film transistor (TFT) operation. Next, we describe the design, fabrication, properties, and applications of solution-deposited multilayer organic-inorganic hybrid gate dielectrics, using self-assembly techniques, which provide bonding between the organic and inorganic layers. Finally, we discuss approaches for preparing analogous hybrid multilayers by vapor-phase growth and discuss the properties of these materials.", "title": "" }, { "docid": "1e66868e4be8605f06af9e11615d41c4", "text": "In this paper, we propose a fully automatic and efficient algorithm for realistic 3D face reconstruction by fusing multiple 2D face images. Firstly, an efficient multi-view 2D face alignment algorithm is utilized to localize the facial points of the face images; and then the intrinsic shape and texture models are inferred by the proposed Syncretized Shape Model (SSM) and Syncretized Texture Model (STM), respectively. Compared with other related works, our proposed algorithm has the following characteristics: 1) the inferred shape and texture are more realistic owing to the constraints and co-enhancement among the multiple images; 2) it is fully automatic, without any user interaction; and 3) the shape and pose parameter estimation is efficient via EM approach and unit quaternion based pose representation, and is also robust as a result of the dynamic correspondence approach. The experimental results show the effectiveness of our proposed algorithm for 3D face reconstruction.", "title": "" }, { "docid": "789fe916396c5a57a0327618d5efc74d", "text": "In object detection, an intersection over union (IoU) threshold is required to define positives and negatives. An object detector, trained with low IoU threshold, e.g. 0.5, usually produces noisy detections. However, detection performance tends to degrade with increasing the IoU thresholds. Two main factors are responsible for this: 1) overfitting during training, due to exponentially vanishing positive samples, and 2) inference-time mismatch between the IoUs for which the detector is optimal and those of the input hypotheses. A multi-stage object detection architecture, the Cascade R-CNN, is proposed to address these problems. It consists of a sequence of detectors trained with increasing IoU thresholds, to be sequentially more selective against close false positives. 
The detectors are trained stage by stage, leveraging the observation that the output of a detector is a good distribution for training the next higher quality detector. The resampling of progressively improved hypotheses guarantees that all detectors have a positive set of examples of equivalent size, reducing the overfitting problem. The same cascade procedure is applied at inference, enabling a closer match between the hypotheses and the detector quality of each stage. A simple implementation of the Cascade R-CNN is shown to surpass all single-model object detectors on the challenging COCO dataset. Experiments also show that the Cascade R-CNN is widely applicable across detector architectures, achieving consistent gains independently of the baseline detector strength. The code is available at https://github.com/zhaoweicai/cascade-rcnn.", "title": "" }, { "docid": "b480111b47176fe52cd6f9ca296dc666", "text": "We develop a fully automatic image colorization system. Our approach leverages recent advances in deep networks, exploiting both low-level and semantic representations. As many scene elements naturally appear according to multimodal color distributions, we train our model to predict per-pixel color histograms. This intermediate output can be used to automatically generate a color image, or further manipulated prior to image formation. On both fully and partially automatic colorization tasks, we outperform existing methods. We also explore colorization as a vehicle for self-supervised visual representation learning. Fig. 1: Our automatic colorization of grayscale input; more examples in Figs. 3 and 4.", "title": "" } ]
scidocsrr
0d4c98088199e9bfcbf31be36116a11e
The Bitcoin Backbone Protocol with Chains of Variable Difficulty
[ { "docid": "f2a66fb35153e7e10d93fac5c8d29374", "text": "A widespread security claim of the Bitcoin system, presented in the original Bitcoin white-paper, states that the security of the system is guaranteed as long as there is no attacker in possession of half or more of the total computational power used to maintain the system. This claim, however, is proved based on theoretically awed assumptions. In the paper we analyze two kinds of attacks based on two theoretical aws: the Block Discarding Attack and the Di culty Raising Attack. We argue that the current theoretical limit of attacker's fraction of total computational power essential for the security of the system is in a sense not 1 2 but a bit less than 1 4 , and outline proposals for protocol change that can raise this limit to be as close to 1 2 as we want. The basic idea of the Block Discarding Attack has been noted as early as 2010, and lately was independently though-of and analyzed by both author of this paper and authors of a most recently pre-print published paper. We thus focus on the major di erences of our analysis, and try to explain the unfortunate surprising coincidence. To the best of our knowledge, the second attack is presented here for the rst time.", "title": "" } ]
[ { "docid": "09cafefb90615ef56c080a22e90ab5b7", "text": "This article presents a Takagi–Sugeno–Kang Fuzzy Neural Network (TSKFNN) approach to predict freeway corridor travel time with an online computing algorithm. TSKFNN, a combination of a Takagi–Sugeno– Kang (TSK) type fuzzy logic system and a neural network, produces strong prediction performance because of its high accuracy and quick convergence. Real world data collected from US-290 in Houston, Texas are used to train and validate the network. The prediction performance of the TSKFNN is investigated with different combinations of traffic count, occupancy, and speed as input options. The comparison between online TSKFNN, offline TSKFNN, the back propagation neural network (BPNN) and the time series model (ARIMA) is made to evaluate the performance of TSKFNN. The results show that using count, speed, and occupancy together as input produces the best TSKFNN predictions. The online TSKFNN outperforms other commonly used models and is a promising tool for reliable travel time prediction on", "title": "" }, { "docid": "9511bcd369d7b18ba67872e1940dfa89", "text": "Addictive substances are known to increase dopaminergic signaling in the mesocorticolimbic system. The origin of this dopamine (DA) signaling originates in the ventral tegmental area (VTA), which sends afferents to various targets, including the nucleus accumbens, the medial prefrontal cortex, and the basolateral amygdala. VTA DA neurons mediate stimuli saliency and goal-directed behaviors. These neurons undergo robust drug-induced intrinsic and extrinsic synaptic mechanisms following acute and chronic drug exposure, which are part of brain-wide adaptations that ultimately lead to the transition into a drug-dependent state. Interestingly, recent investigations of the differential subpopulations of VTA DA neurons have revealed projection-specific functional roles in mediating reward, aversion, and stress. It is now critical to view drug-induced neuroadaptations from a circuit-level perspective to gain insight into how differential dopaminergic adaptations and signaling to targets of the mesocorticolimbic system mediates drug reward. This review hopes to describe the projection-specific intrinsic characteristics of these subpopulations, the differential afferent inputs onto these VTA DA neuron subpopulations, and consolidate findings of drug-induced plasticity of VTA DA neurons and highlight the importance of future projection-based studies of this system.", "title": "" }, { "docid": "07c185c21c9ce3be5754294a73ab5e3c", "text": "In order to support efficient workflow design, recent commercial workflow systems are providing templates of common business processes. These templates, called cases, can be modified individually or collectively into a new workflow to meet the business specification. However, little research has been done on how to manage workflow models, including issues such as model storage, model retrieval, model reuse and assembly. In this paper, we propose a novel framework to support workflow modeling and design by adapting workflow cases from a repository of process models. Our approach to workflow model management is based on a structured workflow lifecycle and leverages recent advances in model management and case-based reasoning techniques. Our contributions include a conceptual model of workflow cases, a similarity flooding algorithm for workflow case retrieval, and a domain-independent AI planning approach to workflow case composition. 
We illustrate the workflow model management framework with a prototype system called Case-Oriented Design Assistant for Workflow Modeling (CODAW). 2004 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "c13ef40a8283f4c0aa6d61c32c6b1a79", "text": "Fingerprint individuality is the study of the extent of uniqueness of fingerprints and is the central premise of expert testimony in court. A forensic expert testifies whether a pair of fingerprints is either a match or non-match by comparing salient features of the fingerprint pair. However, the experts are rarely questioned on the uncertainty associated with the match: How likely is the observed match between the fingerprint pair due to just random chance? The main concern with the admissibility of fingerprint evidence is that the matching error rates (i.e., the fundamental error rates of matching by the human expert) are unknown. The problem of unknown error rates is also prevalent in other modes of identification such as handwriting, lie detection, etc. Realizing this, the U.S. Supreme Court, in the 1993 case of Daubert vs. Merrell Dow Pharmaceuticals, ruled that forensic evidence presented in a court is subject to five principles of scientific validation, namely whether (i) the particular technique or methodology has been subject to statistical hypothesis testing, (ii) its error rates has been established, (iii) standards controlling the technique’s operation exist and have been maintained, (iv) it has been peer reviewed, and (v) it has a general widespread acceptance. Following Daubert, forensic evidence based on fingerprints was first challenged in the 1999 case of USA vs. Byron Mitchell based on the “known error rate” condition 2 mentioned above, and subsequently, in 20 other cases involving fingerprint evidence. The establishment of matching error rates is directly related to the extent of fingerprint individualization. This article gives an overview of the problem of fingerprint individuality, the challenges faced and the models and methods that have been developed to study this problem. Related entries: Fingerprint individuality, fingerprint matching automatic, fingerprint matching manual, forensic evidence of fingerprint, individuality. Definitional entries: 1.Genuine match: This is the match between two fingerprint images of the same person. 2. Impostor match: This is the match between a pair of fingerprints from two different persons. 3. Fingerprint individuality: It is the study of the extent of which different fingerprints tend to match with each other. It is the most important measure to be judged when fingerprint evidence is presented in court as it reflects the uncertainty with the experts’ decision. 4. Variability: It refers to the differences in the observed features from one sample to another in a population. The differences can be random, that is, just by chance, or systematic due to some underlying factor that governs the variability.", "title": "" }, { "docid": "64c156ee4171b5b84fd4eedb1d922f55", "text": "We introduce a large computational subcategorization lexicon which includes subcategorization frame (SCF) and frequency information for 6,397 English verbs. This extensive lexicon was acquired automatically from five corpora and the Web using the current version of the comprehensive subcategorization acquisition system of Briscoe and Carroll (1997). 
The lexicon is provided freely for research use, along with a script which can be used to filter and build sub-lexicons suited for different natural language processing (NLP) purposes. Documentation is also provided which explains each sub-lexicon option and evaluates its accuracy.", "title": "" }, { "docid": "c45b962006b2bb13ab57fe5d643e2ca6", "text": "Physical activity has a positive impact on people's well-being, and it may also decrease the occurrence of chronic diseases. Activity recognition with wearable sensors can provide feedback to the user about his/her lifestyle regarding physical activity and sports, and thus, promote a more active lifestyle. So far, activity recognition has mostly been studied in supervised laboratory settings. The aim of this study was to examine how well the daily activities and sports performed by the subjects in unsupervised settings can be recognized compared to supervised settings. The activities were recognized by using a hybrid classifier combining a tree structure containing a priori knowledge and artificial neural networks, and also by using three reference classifiers. Activity data were collected for 68 h from 12 subjects, out of which the activity was supervised for 21 h and unsupervised for 47 h. Activities were recognized based on signal features from 3-D accelerometers on hip and wrist and GPS information. The activities included lying down, sitting and standing, walking, running, cycling with an exercise bike, rowing with a rowing machine, playing football, Nordic walking, and cycling with a regular bike. The total accuracy of the activity recognition using both supervised and unsupervised data was 89% that was only 1% unit lower than the accuracy of activity recognition using only supervised data. However, the accuracy decreased by 17% unit when only supervised data were used for training and only unsupervised data for validation, which emphasizes the need for out-of-laboratory data in the development of activity-recognition systems. The results support a vision of recognizing a wider spectrum, and more complex activities in real life settings.", "title": "" }, { "docid": "cdcdbb6dca02bdafdf9f5d636acb8b3d", "text": "BACKGROUND\nExpertise has been extensively studied in several sports over recent years. The specificities of how excellence is achieved in Association Football, a sport practiced worldwide, are being repeatedly investigated by many researchers through a variety of approaches and scientific disciplines.\n\n\nOBJECTIVE\nThe aim of this review was to identify and synthesise the most significant literature addressing talent identification and development in football. We identified the most frequently researched topics and characterised their methodologies.\n\n\nMETHODS\nA systematic review of Web of Science™ Core Collection and Scopus databases was performed according to PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-analyses) guidelines. The following keywords were used: \"football\" and \"soccer\". Each word was associated with the terms \"talent\", \"expert*\", \"elite\", \"elite athlete\", \"identification\", \"career transition\" or \"career progression\". The selection was for the original articles in English containing relevant data about talent development/identification on male footballers.\n\n\nRESULTS\nThe search returned 2944 records. After screening against set criteria, a total of 70 manuscripts were fully reviewed. The quality of the evidence reviewed was generally excellent. 
The most common topics of analysis were (1) task constraints: (a) specificity and volume of practice; (2) performers' constraints: (a) psychological factors; (b) technical and tactical skills; (c) anthropometric and physiological factors; (3) environmental constraints: (a) relative age effect; (b) socio-cultural influences; and (4) multidimensional analysis. Results indicate that the most successful players present technical, tactical, anthropometric, physiological and psychological advantages that change non-linearly with age, maturational status and playing positions. These findings should be carefully considered by those involved in the identification and development of football players.\n\n\nCONCLUSION\nThis review highlights the need for coaches and scouts to consider the players' technical and tactical skills combined with their anthropometric and physiological characteristics scaled to age. Moreover, research addressing the psychological and environmental aspects that influence talent identification and development in football is currently lacking. The limitations detected in the reviewed studies suggest that future research should include the best performers and adopt a longitudinal and multidimensional perspective.", "title": "" }, { "docid": "e8e1bf877e45de0d955d8736c342ec76", "text": "Parking guidance and information (PGI) systems are becoming important parts of intelligent transportation systems due to the fact that cars and infrastructure are becoming more and more connected. One major challenge in developing efficient PGI systems is the uncertain nature of parking availability in parking facilities (both on-street and off-street). A reliable PGI system should have the capability of predicting the availability of parking at the arrival time with reliable accuracy. In this paper, we study the nature of the parking availability data in a big city and propose a multivariate autoregressive model that takes into account both temporal and spatial correlations of parking availability. The model is used to predict parking availability with high accuracy. The prediction errors are used to recommend the parking location with the highest probability of having at least one parking spot available at the estimated arrival time. The results are demonstrated using real-time parking data in the areas of San Francisco and Los Angeles.", "title": "" }, { "docid": "ae27bb288a6d3e23752b8d066fb021cb", "text": "A conversational agent (chatbot) is a piece of software that is able to communicate with humans using natural language. Modeling conversation is an important task in natural language processing and artificial intelligence (AI). Indeed, ever since the birth of AI, creating a good chatbot remains one of the field’s hardest challenges. While chatbots can be used for various tasks, in general they have to understand users’ utterances and provide responses that are relevant to the problem at hand. In the past, methods for constructing chatbot architectures have relied on hand-written rules and templates or simple statistical methods. With the rise of deep learning these models were quickly replaced by end-to-end trainable neural networks around 2015. More specifically, the recurrent encoder-decoder model [Cho et al., 2014] dominates the task of conversational modeling. This architecture was adapted from the neural machine translation domain, where it performs extremely well. 
Since then a multitude of variations [Serban et al., 2016] and features were presented that augment the quality of the conversation that chatbots are capable of. In my work, I conduct an in-depth survey of recent literature, examining over 70 publications related to chatbots published in the last 3 years. Then I proceed to make the argument that the very nature of the general conversation domain demands approaches that are different from current state-of-the-art architectures. Based on several examples from the literature I show why current chatbot models fail to take into account enough priors when generating responses and how this affects the quality of the conversation. In the case of chatbots these priors can be outside sources of information that the conversation is conditioned on like the persona [Li et al., 2016a] or mood of the conversers. In addition to presenting the reasons behind this problem, I propose several ideas on how it could be remedied. The next section of my paper focuses on adapting the very recent Tranformer [Vaswani et al., 2017] model to the chatbot domain, which is currently the state-of-the-art in neural machine translation. I first present my experiments with the vanilla model, using conversations extracted from the Cornell Movie-Dialog Corpus [Danescu-Niculescu-Mizil and Lee, 2011]. Secondly, I augment the model with some of my ideas regarding the issues of encoder-decoder architectures. More specifically, I feed additional features into the model like mood or persona together with the raw conversation data. Finally, I conduct a detailed analysis of how the vanilla model performs on conversational data by comparing it to previous chatbot models and how the additional features, affect the quality of the generated responses.", "title": "" }, { "docid": "97f89b905d51d2965c60bb4bbed08b4c", "text": "This communication deals with simultaneous generation of a contoured and a pencil beam from a single shaped reflector with two feeds. A novel concept of generating a high gain pencil beam from a shaped reflector is presented using focal plane conjugate field matching method. The contoured beam is generated from the shaped reflector by introducing deformations in a parabolic reflector surface. This communication proposes a simple method to counteract the effects of shaping and generate an additional high gain pencil beam from the shaped reflector. This is achieved by using a single feed which is axially and laterally displaced from the focal point. The proposed method is successfully applied to generate an Indian main land coverage contoured beam and a high gain pencil beam over Andaman Islands. The contoured beam with peak gain of 33.05 dBi and the pencil beam with 43.8 dBi peak gain is generated using the single shaped reflector and two feeds. This technique saves mass and volume otherwise would have required for feed cluster to compensate for the surface distortion.", "title": "" }, { "docid": "8ee24b38d7cf4f63402cd4f2c0beaf79", "text": "At the current stratospheric value of Bitcoin, miners with access to significant computational horsepower are literally printing money. For example, the first operator of a USD $1,500 custom ASIC mining platform claims to have recouped his investment in less than three weeks in early February 2013, and the value of a bitcoin has more than tripled since then. 
Not surprisingly, cybercriminals have also been drawn to this potentially lucrative endeavor, but instead are leveraging the resources available to them: stolen CPU hours in the form of botnets. We conduct the first comprehensive study of Bitcoin mining malware, and describe the infrastructure and mechanism deployed by several major players. By carefully reconstructing the Bitcoin transaction records, we are able to deduce the amount of money a number of mining botnets have made.", "title": "" }, { "docid": "a3034cc659f433317109d9157ea53302", "text": "Cyberbullying is an emerging form of bullying that takes place through contemporary information and communication technologies. Building on past research on the psychosocial risk factors for cyberbullying in this age group, the present study assessed a theory-driven, school-based preventive intervention that targeted moral disengagement, empathy and social cognitive predictors of cyberbullying. Adolescents (N = 355) aged between 16 and 18 years were randomly assigned into the intervention and the control group. Both groups completed anonymous structured questionnaires about demographics, empathy, moral disengagement and cyberbullying-related social cognitive variables (attitudes, actor prototypes, social norms, and behavioral expectations) before the intervention, post-intervention and 6 months after the intervention. The intervention included awareness-raising and interactive discussions about cyberbullying with intervention group students. Analysis of covariance (ANCOVA) showed that, after controlling for baseline measurements, there were significant differences at post-intervention measures in moral disengagement scores, and in favorability of actor prototypes. Further analysis on the specific mechanisms of moral disengagement showed that significant differences were observed in distortion of consequences and attribution of blame. The implications of the intervention are discussed, and guidelines for future school-based interventions against cyberbullying are provided.", "title": "" }, { "docid": "6c2d0a9d2e542a2778a7d798ce33dded", "text": "Grounded theory has frequently been referred to, but infrequently applied in business research. This article addresses such a deficiency by advancing two focal aims. Firstly, it seeks to de-mystify the methodology known as grounded theory by applying this established research practice within the comparatively new context of business research. Secondly, in so doing, it integrates naturalistic examples drawn from the author’s business research, hence explicating the efficacy of grounded theory methodology in gaining deeper understanding of business bounded phenomena. It is from such a socially focused methodology that key questions of what is happening and why leads to the generation of substantive theories and underpinning", "title": "" }, { "docid": "86ee8258559aebfdfa90964fe78429c2", "text": "Voice search is the technology underlying many spoken dialog systems (SDSs) that provide users with the information they request with a spoken query. The information normally exists in a large database, and the query has to be compared with a field in the database to obtain the relevant information. The contents of the field, such as business or product names, are often unstructured text. This article categorized spoken dialog technology into form filling, call routing, and voice search, and reviewed the voice search technology. The categorization was made from the technological perspective. 
It is important to note that a single SDS may apply the technology from multiple categories. Robustness is the central issue in voice search. The technology in acoustic modeling aims at improved robustness to environment noise, different channel conditions, and speaker variance; the pronunciation research addresses the problem of unseen word pronunciation and pronunciation variance; the language model research focuses on linguistic variance; the studies in search give rise to improved robustness to linguistic variance and ASR errors; the dialog management research enables graceful recovery from confusions and understanding errors; and the learning in the feedback loop speeds up system tuning for more robust performance. While tremendous achievements have been accomplished in the past decade on voice search, large challenges remain. Many voice search dialog systems have automation rates around or below 50% in field trials.", "title": "" }, { "docid": "361b2d1060aada23f790a64e6698909e", "text": "Decimation filters have wide application in both analog and digital systems for data rate conversion as well as filtering. In this paper, we discuss an efficient structure for a decimation filter. There are three classes of filters: FIR, IIR and CIC filters. IIR filters are simpler in structure but do not satisfy the linear phase requirement needed in time-sensitive applications such as video or speech. FIR filters have a well defined frequency response but they require a lot of hardware to store the filter coefficients. CIC filters don't have this drawback: they are coefficient-less, so the hardware requirement is much reduced, but they don't have a well defined frequency response. So another structure is proposed which takes advantage of the good features of both structures by cascading CIC and FIR filters. The cascade exhibits the advantages of both FIR and CIC filters and is hence more efficient overall in terms of hardware and frequency response requirements.", "title": "" }, { "docid": "132bb5b7024de19f4160664edca4b4f5", "text": "Generic Competitive Strategy: Basically, strategy is about two things: deciding where you want your business to go, and deciding how to get there. A more complete definition is based on competitive advantage, the object of most corporate strategy: “Competitive advantage grows out of value a firm is able to create for its buyers that exceeds the firm's cost of creating it. Value is what buyers are willing to pay, and superior value stems from offering lower prices than competitors for equivalent benefits or providing unique benefits that more than offset a higher price. There are two basic types of competitive advantage: cost leadership and differentiation.” Michael Porter Competitive strategies involve taking offensive or defensive actions to create a defendable position in the industry. Generic strategies can help the organization to cope with the five competitive forces in the industry and do better than other organizations in the industry. Generic strategies include ‘overall cost leadership’, ‘differentiation’, and ‘focus’. Generally firms pursue only one of the above generic strategies. However, some firms make an effort to pursue more than one strategy at a time by bringing out a differentiated product at low cost. Though approaches like these are successful in the short term, they are hardly sustainable in the long term.
If firms try to maintain cost leadership as well as differentiation at the same time, they may fail to achieve either.", "title": "" }, { "docid": "71b48c67ba508bdd707340b5d1632018", "text": "Two-photon laser scanning microscopy of calcium dynamics using fluorescent indicators is a widely used imaging method for large-scale recording of neural activity in vivo. Here, we introduce volumetric two-photon imaging of neurons using stereoscopy (vTwINS), a volumetric calcium imaging method that uses an elongated, V-shaped point spread function to image a 3D brain volume. Single neurons project to spatially displaced 'image pairs' in the resulting 2D image, and the separation distance between projections is proportional to depth in the volume. To demix the fluorescence time series of individual neurons, we introduce a modified orthogonal matching pursuit algorithm that also infers source locations within the 3D volume. We illustrated vTwINS by imaging neural population activity in the mouse primary visual cortex and hippocampus. Our results demonstrated that vTwINS provides an effective method for volumetric two-photon calcium imaging that increases the number of neurons recorded while maintaining a high frame rate.", "title": "" }, { "docid": "961bf33dddefb94e75b84d5a1c8803cd", "text": "Smart grid is an intelligent power generation, distribution, and control system. ZigBee, as a wireless mesh networking scheme low in cost, power, data rate, and complexity, is ideal for smart grid applications, e.g., real-time system monitoring, load control, and building automation. Unfortunately, almost all ZigBee channels overlap with wireless local area network (WLAN) channels, resulting in severe performance degradation due to interference. In this paper, we aim to develop practical ZigBee deployment guideline under the interference of WLAN. We identify the “Safe Distance” and “Safe Offset Frequency” using a comprehensive approach including theoretical analysis, software simulation, and empirical measurement. In addition, we propose a frequency agility-based interference avoidance algorithm. The proposed algorithm can detect interference and adaptively switch nodes to “safe” channel to dynamically avoid WLAN interference with small latency and small energy consumption. Our proposed scheme is implemented with a Meshnetics ZigBit Development Kit and its performance is empirically evaluated in terms of the packet error rate (PER) using a ZigBee and Wi-Fi coexistence test bed. It is shown that the empirical results agree with our analytical results. The measurements demonstrate that our design guideline can efficiently mitigate the effect of WiFi interference and enhance the performance of ZigBee networks.", "title": "" }, { "docid": "c5f749c36b3d8af93c96bee59f78efe5", "text": "INTRODUCTION\nMolecular diagnostics is a key component of laboratory medicine. Here, the authors review key triggers of ever-increasing automation in nucleic acid amplification testing (NAAT) with a focus on specific automated Polymerase Chain Reaction (PCR) testing and platforms such as the recently launched cobas® 6800 and cobas® 8800 Systems. The benefits of such automation for different stakeholders including patients, clinicians, laboratory personnel, hospital administrators, payers, and manufacturers are described. Areas Covered: The authors describe how molecular diagnostics has achieved total laboratory automation over time, rivaling clinical chemistry to significantly improve testing efficiency. 
Finally, the authors discuss how advances in automation decrease the development time for new tests enabling clinicians to more readily provide test results. Expert Commentary: The advancements described enable complete diagnostic solutions whereby specific test results can be combined with relevant patient data sets to allow healthcare providers to deliver comprehensive clinical recommendations in multiple fields ranging from infectious disease to outbreak management and blood safety solutions.", "title": "" }, { "docid": "0959dba02fee08f7e359bcc816f5d22d", "text": "We prove a closed-form solution to tensor voting (CFTV): Given a point set in any dimensions, our closed-form solution provides an exact, continuous, and efficient algorithm for computing a structure-aware tensor that simultaneously achieves salient structure detection and outlier attenuation. Using CFTV, we prove the convergence of tensor voting on a Markov random field (MRF), thus termed as MRFTV, where the structure-aware tensor at each input site reaches a stationary state upon convergence in structure propagation. We then embed structure-aware tensor into expectation maximization (EM) for optimizing a single linear structure to achieve efficient and robust parameter estimation. Specifically, our EMTV algorithm optimizes both the tensor and fitting parameters and does not require random sampling consensus typically used in existing robust statistical techniques. We performed quantitative evaluation on its accuracy and robustness, showing that EMTV performs better than the original TV and other state-of-the-art techniques in fundamental matrix estimation for multiview stereo matching. The extensions of CFTV and EMTV for extracting multiple and nonlinear structures are underway.", "title": "" } ]
scidocsrr
dc89eb493c40f55710b05c4bb88a69c8
To Copy or Not to Copy: Making In-Memory Databases Fast on Modern NICs
[ { "docid": "221b5ba25bff2522ab3ca65ffc94723f", "text": "This paper describes the design and implementation of HERD, a key-value system designed to make the best use of an RDMA network. Unlike prior RDMA-based key-value systems, HERD focuses its design on reducing network round trips while using efficient RDMA primitives; the result is substantially lower latency, and throughput that saturates modern, commodity RDMA hardware.\n HERD has two unconventional decisions: First, it does not use RDMA reads, despite the allure of operations that bypass the remote CPU entirely. Second, it uses a mix of RDMA and messaging verbs, despite the conventional wisdom that the messaging primitives are slow. A HERD client writes its request into the server's memory; the server computes the reply. This design uses a single round trip for all requests and supports up to 26 million key-value operations per second with 5μs average latency. Notably, for small key-value items, our full system throughput is similar to native RDMA read throughput and is over 2X higher than recent RDMA-based key-value systems. We believe that HERD further serves as an effective template for the construction of RDMA-based datacenter services.", "title": "" } ]
[ { "docid": "462d93a89154fb67772bbbba5343399c", "text": "In this paper, we propose a DBSCAN-based clustering algorithm called NNDD-DBSCAN with the main focus of handling multi-density datasets and reducing parameter sensitivity. NNDD-DBSCAN uses a new distance measuring method called nearest neighbor density distance (NNDD), which enables the new algorithm to cluster properly in multi-density datasets. By analyzing the relationship between the threshold of nearest neighbor density distance and the threshold of nearest neighbor collection, we give a heuristic method to find the appropriate nearest neighbor density distance threshold and to reduce parameter sensitivity. Experimental results show that the NNDD-DBSCAN has a good robust adaptation and can get the ideal clustering result in both single-density datasets and multi-density datasets.", "title": "" }, { "docid": "a9309fc2fdd67b70178cd88e948cf2ca", "text": "... I Co-Authorship Statement ... II Acknowledgments ... III Table of", "title": "" }, { "docid": "68093a9767aea52026a652813c3aa5fd", "text": "Conventional capacitively coupled neural recording amplifiers often present a large input load capacitance to the neural signal source and hence take up large circuit area. They suffer due to the unavoidable trade-off between the input capacitance and chip area versus the amplifier gain. In this work, this trade-off is relaxed by replacing the single feedback capacitor with a clamped T-capacitor network. With this simple modification, the proposed amplifier can achieve the same mid-band gain with less input capacitance, resulting in a higher input impedance and a smaller silicon area. Prototype neural recording amplifiers based on this proposal were fabricated in 0.35 μm CMOS, and their performance is reported. The amplifiers occupy smaller area and have lower input loading capacitance compared to conventional neural amplifiers. One of the proposed amplifiers occupies merely 0.056 mm2. It achieves 38.1-dB mid-band gain with 1.6 pF input capacitance, and hence has an effective feedback capacitance of 20 fF. Consuming 6 μW, it has an input referred noise of 13.3 μVrms over 8.5 kHz bandwidth and NEF of 7.87. In-vivo recordings from animal experiments are also demonstrated.", "title": "" }, { "docid": "a1a97d01518aed3573e934bb9d0428f3", "text": "The use of social networking websites has become a current international phenomenon. Popular websites include MySpace, Facebook, and Friendster. Their rapid widespread use warrants a better understanding. However, there has been little empirical research studying the factors that determine the use of this hedonic computer-mediated communication technology. This study contributes to our understanding of the antecedents that influence adoption and use of social networking websites by examining the effect of the perceptions of playfulness, critical mass, trust, and normative pressure on the use of social networking sites. Structural equation modeling was used to examine the patterns of inter-correlations among the constructs and to empirically test the hypotheses.
Each of the antecedents has a significant direct effect on intent to use social networking websites, with playfulness and critical mass the strongest indicators. Intent to use and playfulness had a significant direct effect on actual usage.", "title": "" }, { "docid": "6f734301a698a54177265815189a2bb9", "text": "Online image sharing in social media sites such as Facebook, Flickr, and Instagram can lead to unwanted disclosure and privacy violations, when privacy settings are used inappropriately. With the exponential increase in the number of images that are shared online every day, the development of effective and efficient prediction methods for image privacy settings are highly needed. The performance of models critically depends on the choice of the feature representation. In this paper, we present an approach to image privacy prediction that uses deep features and deep image tags as feature representations. Specifically, we explore deep features at various neural network layers and use the top layer (probability) as an auto-annotation mechanism. The results of our experiments show that models trained on the proposed deep features and deep image tags substantially outperform baselines such as those based on SIFT and GIST as well as those that use “bag of tags” as features.", "title": "" }, { "docid": "1212637c91d8c57299c922b6bde91ce8", "text": "BACKGROUND\nIn the late 1980's, occupational science was introduced as a basic discipline that would provide a foundation for occupational therapy. As occupational science grows and develops, some question its relationship to occupational therapy and criticize the direction and extent of its growth and development.\n\n\nPURPOSE\nThis study was designed to describe and critically analyze the growth and development of occupational science and characterize how this has shaped its current status and relationship to occupational therapy.\n\n\nMETHOD\nUsing a mixed methods design, 54 occupational science documents published in the years 1990 and 2000 were critically analyzed to describe changes in the discipline between two points in time. Data describing a range of variables related to authorship, publication source, stated goals for occupational science and type of research were collected.\n\n\nRESULTS\nDescriptive statistics, themes and future directions are presented and discussed.\n\n\nPRACTICE IMPLICATIONS\nThrough the support of a discipline that is dedicated to the pursuit of a full understanding of occupation, occupational therapy will help to create a new and complex body of knowledge concerning occupation. However, occupational therapy must continue to make decisions about how knowledge produced within occupational science and other disciplines can be best used in practice.", "title": "" }, { "docid": "3a06104103bbfbadbe67a89e84f425ab", "text": "According to the Technology Acceptance Model (TAM), behavioral intentions to use a new IT are primarily the product of a rational analysis of its desirable perceived outcomes, namely perceived usefulness (PU) and perceived ease of use (PEOU). But what happens with the continued use of an IT among experienced users? Does habit also kick in as a major factor or is continued use only the product of its desirable outcomes? This study examines this question in the context of experienced online shoppers. The data show that, as hypothesized, online shoppers’ intentions to continue using a website that they last bought at depend not only on PU and PEOU, but also on habit. 
In fact, habit alone can explain a large proportion of the variance of continued use of a website. Moreover, the explained variance indicates that habit may also be a major predictor of PU and PEOU among experienced shoppers. Implications are discussed.", "title": "" }, { "docid": "61f339c1eed1b56fdd088996e1086ecc", "text": "The flow pattern of ridges in a fingerprint is unique to the person in that no two people with the same fingerprints have yet been found. Fingerprints have been in use in forensic applications for many years and, more recently, in computer-automated identification and authentication. For automated fingerprint image matching, a machine representation of a fingerprint image is often a set of minutiae in the print; a minimal, but fundamental, representation is just a set of ridge endings and bifurcations. Oddly, however, after all the years of using minutiae, a precise definition of minutiae has never been formulated. We provide a formal definition of a minutia based on the gray scale image. This definition is constructive, in that, given a minutia image, the minutia location and orientation can be uniquely determined.", "title": "" }, { "docid": "aa907899bf41e35082641abdda1a3e85", "text": "This paper describes the measurement and analysis of the motion of a tennis swing. Over the past decade, people have taken a greater interest in their physical condition in an effort to avoid health problems due to aging. Exercise, especially sports, is an integral part of a healthy lifestyle. As a popular lifelong sport, tennis was selected as the subject of this study, with the focus on the correct form for playing tennis, which is difficult to learn. We used a 3D gyro sensor fixed at the waist to detect the angular velocity in the movement of the stroke and serve of expert and novice tennis players for comparison.", "title": "" }, { "docid": "0b44782174d1dae460b86810db8301ec", "text": "We present an overview of Markov chain Monte Carlo, a sampling method for model inference and uncertainty quantification. We focus on the Bayesian approach to MCMC, which allows us to estimate the posterior distribution of model parameters, without needing to know the normalising constant in Bayes’ theorem. Given an estimate of the posterior, we can then determine representative models (such as the expected model, and the maximum posterior probability model), the probability distributions for individual parameters, and the uncertainty about the predictions from these models. We also consider variable dimensional problems in which the number of model parameters is unknown and needs to be inferred. Such problems can be addressed with reversible jump (RJ) MCMC. This leads us to model choice, where we may want to discriminate between models or theories of differing complexity. For problems where the models are hierarchical (e.g. similar structure but with a different number of parameters), the Bayesian approach naturally selects the simpler models. More complex problems require an estimate of the normalising constant in Bayes’ theorem (also known as the evidence) and this is difficult to do reliably for high dimensional problems. We illustrate the applications of RJMCMC with 3 examples from our earlier working involving modelling distributions of geochronological age data, inference of sea-level and sediment supply histories from 2D stratigraphic cross-sections, and identification of spatially discontinuous thermal histories from a suite of apatite fission track samples distributed in 3D. 
2009 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "04c8009b3014b991e8c520556975c15a", "text": "Today’s deep learning systems are dominated by a dataflow execution model. Given a static dataflow graph and the shape of the input (e.g., mini-batch sizes and image dimensions), the system can fully determine its computation before execution. When the same static graph applies to every data sample, the system may search for an optimal computation schedule offline by trying out many schedules on a sample input, knowing the input values won’t affect computation throughput. However, for many neural networks, data samples have variable sizes and the computation graph topology depends on input or parameter values. In this case, a static graph fails to fully describe the computation and an optimal schedule needs to be dynamically derived to take runtime information into account. Thus we argue for the importance of dynamic scheduling, especially regarding distributed device placement. 1 Dynamic Computation in Neural Networks In a dataflow system, application programs first construct a dataflow graph that describes the computation, and then request the system to execute a subgraph or the whole graph. Although for many neural networks (e.g., AlexNet [7], Inception-v3 [13], and ResNet [3]), the computation can be described by a static acyclic directed graph (DAG) that applies to all data samples, there are many cases where the graph topology varies based on input or parameter values. Recurrent Neural Networks [2] model sequences of data (e.g., sentences). A recurrent neural network (RNN) repeatedly applies a cell function, such as long-short-term-memory (LSTM) [4], to each element of the sequence. Since sequences may have variable length, the cell function is executed for different number of times for different sequences. A typical approach for expressing RNNs as a static DAG is to statically unroll the sequence for a finite number of steps, padding shorter sequences with empty values and likely chopping longer ones. An alternative approach is to construct a distinct graph for each input sequence, paying the graph construction overhead for each data sample. Recursive Neural Networks [12] generalize recurrent neural network to model arbitrary topologies. For example, Tree-LSTM [14] models the syntactic tree of a sentence. Since the topology differs from sentence to sentence, Tree-LSTM constructs a distinct static DAG for each sentence. As shown by Xu et al. [16], per-sample graph construction constitutes a significant overhead (over 60% of runtime in some cases). Xu et al. [16] propose to resolve the graph construction overhead by reusing the graph structure that already exists in the dataset instead of programmatic construction, restricting its applicability. Mixture of Experts (MoE) [11] is an example of conditional computation in neural networks. A MoE layer consists of a gating network and a large number (up to hundreds of thousands) of expert networks. Each data sample sparsely activates a small number of experts as determined by the gating Preprint. Work in progress. network based on runtime values. Therefore, for an input mini-batch, the input size of each expert is unknown until the gating network has been executed on the mini-batch. Expressing dynamic computation via dynamic control flow. Yu et al. [17] presents two dynamic control flow operations cond and while_loop in TensorFlow that represents conditional and iterateive computation respectively. 
Recursive (including recurrent) neural networks can be expressed as a while loop iterating over the nodes in a topologically sorted order. As the loop body is represented as a subgraph in a static DAG, all dynamic instances of the loop body (i.e., iterations) share the same dependence pattern. Therefore, for recursive neural networks, each iteration is conservatively specified to depend on its previous iteration to ensure correct ordering, resulting in a sequential execution, even though some iterations can potentially be executed in parallel. Jeong et al. [5] take advantage of the additional parallelism by introducing a recursion operation into TensorFlow. With recursion, a node recursively invokes the computation function on other nodes and waits until the recursive calls return to continue its execution. This allows a caller to dynamically specify its distinct dependence on the callees, permitting parallel execution of the functions on independent nodes. 2 The Need for Dynamic Scheduling of Dynamic Control Flow Despite the programming support for expressing dynamic computation, existing dataflow-based deep learning systems employ a static computation schedule derived prior to graph execution. A computation schedule determines how operations are placed on (possibly distributed) computing devices and compiles each device’s graph partition to an executable program. Here we focus on distributed device placement. When the same static computation graph applies to all data samples, it is possible to find an efficient computation schedule prior to execution. TensorFlow [1] relies on application programmers to manually place operations on devices; Mirhoseini et al. [10, 9] learn the device placement from repeated trial executions of various schedules. Jia et al. [6] simulate schedule execution to reduce the planning cost down to sub-seconds to tens of minutes depending on the scale (4 to 64 GPUs) and complexity of the network. Moreover, Jia et al. [6] exploit additional dimensions of parallelization. Nevertheless, existing approaches fail to consider that the computation may change based on input or parameter values. We discuss the inefficiency due to overlooking runtime information to motivate dynamic scheduling. Conditional Computation. TensorFlow’s cond is implemented using Switch which forwards an input tensor to one of two subgraphs. MoE generalizes Switch in two ways: (1) the forwarding decision is made separately for each row in the input tensor and (2) each row is forwarded to K out of N subgraphs. Due to MoE’s large size (up to 130 billion parameters), existing implementations (e.g., Tensor2Tensor [15] and Shazeer et al. [11]) statically partition the expert networks to different GPUs. Such static placement faces two problems: (1) the memory for a subgraph (e.g., variables, receive buffers) is statically allocated regardless of whether a subgraph is actually executed; (2) the input sizes among different experts can be highly skewed. These issues lead to heavy over-provisioning of GPU memory while wasting GPUs’ precious computing cycles. As reported by Shazeer et al. [11], a MoE layer consisting of 131072 experts requires 128 Tesla K40 GPUs to fit while achieving a computation throughput of 0.3TFLOPS per GPU (Nvidia’s claimed peak throughput is 4.29TFLOPS/GPU).
With dynamic scheduling, the system allocates memory for only subgraphs that are executed and may partition an overwhelmingly large input to an expert along with replicating the expert to multiple GPUs to balance load among GPUs. Iterative and Recursive Computation. TensorFlow creates frames for each dynamic instance of the while_loop loop body. Operations of different frames may run in parallel as long as their dependences are satisfied. However, since each operation is statically placed onto one device, all frames of this operation is bound to this device. This can lead to saturating the computing power of a single device, thus missing the additional parallelism, such as observed by Jeong et al. [5]. Previous work on static device placement observes throughput improvement when placing different iterations of a statically unrolled RNN to different devices [10, 9, 6]. While static scheduling would be prohibitively expensive when different data samples require different graph topology, dynamic scheduling may dynamically schedule different frames to different devices to take advantage of the additional parallelism. Moreover, as recursion is restricted to trees, deep learning systems need a more general approach for precisely capturing the dependence among loop iterations in order to explore parallelism in arbitrary dependence topologies, such as Graph-LSTM [8].", "title": "" }, { "docid": "8636268bd5de6be0987891ba613ae509", "text": "In this paper we address the problem of defining games formally, following Wittgenstein's dictum that games cannot be defined adequately as a formal category. Several influential attempts at definitions will be evaluated and shown to be inadequate. As an alternative, we propose a descriptive model of the definable supercategory that games belong to, cybermedia, that is pragmatic, open, and capable of meeting the needs of the diverse, intensely interdisciplinary field of game studies for a uniting conceptuallization of its main phenomenon. Our approach, the Cybermedia model, consisting of Player, Sign, Mechanical System, and Material Medium, offers a medium-independent, flexible and analytically useful way to contrast different approaches in games research and to determine which aspect of the phenomenon one is talking about when the word ‘game’ is used.", "title": "" }, { "docid": "0ba1155b41dc3df507a6dd4194e4d875", "text": "Live streaming platforms bring events from all around the world to people's computing devices. We conducted a mixed methods study including interviews (N = 42) and a survey (N = 223) to understand how people currently experience events using Facebook Live, Periscope, and Snapchat Live Stories. We identified four dimensions that make remote event viewing engaging: immersion, immediacy, interaction, and sociality. We find that both live streams and the more curated event content found on Snapchat are immersive and immediate, yet Snapchat Live Stories enable quickly switching among different views of the event. Live streams, on the other hand, offer real time interaction and sociality in a way that Snapchat Live Stories do not. However, the interaction's impact depends on comment volume, comment content, and relationship between viewer and broadcaster. 
We describe how people experience events remotely using these social media, and identify design opportunities around detecting exciting content, leveraging multiple viewpoints, and enabling interactivity to create engaging user experiences for remotely participating in events.", "title": "" }, { "docid": "932c66caf9665e9dea186732217d4313", "text": "Citations are very important parameters and are used to take many important decisions like ranking of researchers, institutions, countries, and to measure the relationship between research papers. All of these require accurate counting of citations and their occurrence (in-text citation counts) within the citing papers. Citation anchors refer to the citation made within the full text of the citing paper for example: ‘[1]’, ‘(Afzal et al, 2015)’, ‘[Afzal, 2015]’ etc. Identification of citation-anchors from the plain-text is a very challenging task due to the various styles and formats of citations. Recently, Shahid et al. highlighted some of the problems such as commonality in content, wrong allotment, mathematical ambiguities, and string variations etc in automatically identifying the in-text citation frequencies. The paper proposes an algorithm, CAD, for identification of citation-anchors and its in-text citation frequency based on different rules. For a comprehensive analysis, the dataset of research papers is prepared: on both Journal of Universal Computer Science (J.UCS) and (2) CiteSeer digital libraries. In experimental study, we conducted two experiments. In the first experiment, the proposed approach is compared with state-of-the-art technique over both datasets. The J.UCS dataset consists of 1200 research papers with 16,000 citation strings or references while the CiteSeer dataset consists of 52 research papers with 1850 references. The total dataset size becomes 1252 citing documents and 17,850 references. The experiments showed that CAD algorithm improved F-score by 44% and 37% respectively on both J.UCS and CiteSeer dataset over the contemporary technique (Shahid et al. in Int J Arab Inf Technol 12:481–488, 2014). The average score is 41% on both datasets. In the second experiment, the proposed approach is further analyzed against the existing state-of-the-art tools: CERMINE and GROBID. According to our results, the proposed approach is best performing with F1 of 0.99, followed by GROBID (F1 0.89) and CERMINE (F1 0.82).", "title": "" }, { "docid": "7d9162b079a155f48688a1d70af5482a", "text": "Determination of microgram quantities of protein in the Bradford Coomassie brilliant blue assay is accomplished by measurement of absorbance at 590 nm. However, as intrinsic nonlinearity compromises the sensitivity and accuracy of this method. It is shown that under standard assay conditions, the ratio of the absorbances, 590 nm over 450 nm, is strictly linear with protein concentration. This simple procedure increases the accuracy and improves the sensitivity of the assay about 10-fold, permitting quantitation down to 50 ng of bovine serum albumin. Furthermore, protein assay in presence of up to 35-fold weight excess of sodium dodecyl sulfate (detergent) over bovine serum albumin (protein) can be performed. A linear equation that perfectly fits the experimental data is provided on the basis of mass action and Beer's law.", "title": "" }, { "docid": "f7b0b504c3ac71e7a739ed9e2db4b151", "text": "The Internet of Things (IoT) is a flagship project that aims to connect objects to the Internet to extend their use. 
For that, it was needed to find a solution to combine between the IEEE 802.15.4 protocol for Low Power Wireless Personal Area Networks (LoWPANs) and IPv6 network protocol that its large address space will allow million devices to integrate the internet. The development of 6LoWPAN technology was an appropriate solution to deal with this challenge and enable the IoT concept becoming a reality. But this was only the beginning of several challenges and problems like the case of how to secure this new type of networks, especially since it includes two major protocols so the combination of their problems too, over and above new problems specific to that network. In this paper, we analyze the security challenges in 6LoWPAN, we studied the various countermeasures to address these needs, their advantages and disadvantages, and we offer some recommendations to achieve a reliable security scheme for a powerful 6LoWPAN networks.", "title": "" }, { "docid": "19202b2802eef89ccb9e675a7417e02c", "text": "Stitching videos captured by hand-held mobile cameras can essentially enhance entertainment experience of ordinary users. However, such videos usually contain heavy shakiness and large parallax, which are challenging to stitch. In this paper, we propose a novel approach of video stitching and stabilization for videos captured by mobile devices. The main component of our method is a unified video stitching and stabilization optimization that computes stitching and stabilization simultaneously rather than does each one individually. In this way, we can obtain the best stitching and stabilization results relative to each other without any bias to one of them. To make the optimization robust, we propose a method to identify background of input videos, and also common background of them. This allows us to apply our optimization on background regions only, which is the key to handle large parallax problem. Since stitching relies on feature matches between input videos, and there inevitably exist false matches, we thus propose a method to distinguish between right and false matches, and encapsulate the false match elimination scheme and our optimization into a loop, to prevent the optimization from being affected by bad feature matches. We test the proposed approach on videos that are causally captured by smartphones when walking along busy streets, and use stitching and stability scores to evaluate the produced panoramic videos quantitatively. Experiments on a diverse of examples show that our results are much better than (challenging cases) or at least on par with (simple cases) the results of previous approaches.", "title": "" }, { "docid": "5d15118fcb25368fc662deeb80d4ef28", "text": "A5-GMR-1 is a synchronous stream cipher used to provide confidentiality for communications between satellite phones and satellites. The keystream generator may be considered as a finite state machine, with an internal state of 81 bits. The design is based on four linear feedback shift registers, three of which are irregularly clocked. The keystream generator takes a 64-bit secret key and 19-bit frame number as inputs, and produces an output keystream of length berween 28 and 210 bits.\n Analysis of the initialisation process for the keystream generator reveals serious flaws which significantly reduce the number of distinct keystreams that the generator can produce. Multiple (key, frame number) pairs produce the same keystream, and the relationship between the various pairs is easy to determine. 
Additionally, many of the keystream sequences produced are phase shifted versions of each other, for very small phase shifts. These features increase the effectiveness of generic time-memory tradeoff attacks on the cipher, making such attacks feasible.", "title": "" }, { "docid": "3d32f7037ee239fe2939526559eb67d5", "text": "We propose an end-to-end, domainindependent neural encoder-aligner-decoder model for selective generation, i.e., the joint task of content selection and surface realization. Our model first encodes a full set of over-determined database event records via an LSTM-based recurrent neural network, then utilizes a novel coarse-to-fine aligner to identify the small subset of salient records to talk about, and finally employs a decoder to generate free-form descriptions of the aligned, selected records. Our model achieves the best selection and generation results reported to-date (with 59% relative improvement in generation) on the benchmark WEATHERGOV dataset, despite using no specialized features or linguistic resources. Using an improved k-nearest neighbor beam filter helps further. We also perform a series of ablations and visualizations to elucidate the contributions of our key model components. Lastly, we evaluate the generalizability of our model on the ROBOCUP dataset, and get results that are competitive with or better than the state-of-the-art, despite being severely data-starved.", "title": "" }, { "docid": "482151eeb17cfb627403782cbece07ad", "text": "In this article, we study skew cyclic codes over ring $R=\\mathbb{F}_{q}+v\\mathbb{F}_{q}+v^{2}\\mathbb{F}_{q}$, where $q=p^{m}$, $p$ is an odd prime and $v^{3}=v$. We describe generator polynomials of skew cyclic codes over this ring and investigate the structural properties of skew cyclic codes over $R$ by a decomposition theorem. We also describe the generator polynomials of the duals of skew cyclic codes. Moreover, the idempotent generators of skew cyclic codes over $\\mathbb{F}_{q}$ and $R$ are considered.", "title": "" } ]
scidocsrr
937f4bbb31e5e29cc4170ea16a854330
Comparison of two high frequency transformer designs to achieve zero voltage switching in a 311/100 v 1 kW phase-shifted full-bridge DC-DC converter
[ { "docid": "200c38c7e0715d078796200a41983627", "text": "This Application Note will highlight the design considerations incurred in a high frequency power supply using the Phase Shifted Resonant PWM control technique. An overview of this switching technique including comparisons to existing fixed frequency non-resonant and variable frequency Zero Voltage Switching is included. Numerous design equations and associated voltage, current and timing waveforms supporting this technique will be highlighted. A general purpose Phase Shifted converter design guide and procedure will be introduced to assist in weighing the various design tradeoffs. An experimental 500 Watt, 48 volt at 10.5 amp power supply design operating from a preregulated 400 volt DC input will be presented as an example. Considerations will be given to the details of the magnetic, power switching and control circuitry areas. A summary of comparative advantages, differences and tradeoffs to other conversion alternatives is included. UC3875 CONTROL CIRCUIT SCHEMATIC", "title": "" }, { "docid": "3f212bd523f8044392c82778811a77f9", "text": "In this paper, the electric charger with single phase PWM converter and a bidirectional half-bridge converter is proposed. A transformer leakage inductance is used for making resonance at the resonant converter, which can make switched current to be sinusoidal without extra inductive component. Both operating principle and design guideline are described in detail. The feasibility for the proposed design is verified through hardware implementation and the experimental results. Detailed results have been obtained to show several advantages such as the compact design and reduced switching losses for different values and conditions of leakage inductance and resonance cases.", "title": "" } ]
[ { "docid": "bb50f0ad981d3f81df6810322da7bd71", "text": "Scale-model laboratory tests of a surface effect ship (SES) conducted in a near-shore transforming wave field are discussed. Waves approaching a beach in a wave tank were used to simulate transforming sea conditions and a series of experiments were conducted with a 1:30 scale model SES traversing in heads seas. Pitch and heave motion of the vehicle were recorded in support of characterizing the seakeeping response of the vessel in developing seas. The aircushion pressure and the vessel speed were varied over a range of values and the corresponding vehicle responses were analyzed to identify functional dependence on these parameters. The results show a distinct correlation between the air-cushion pressure and the response amplitude of both pitch and heave.", "title": "" }, { "docid": "bb5f1836b7e694a571f7e9a0d6845761", "text": "Rheumatic heart disease (RHD) results in morbidity and mortality that is disproportionate among individuals in developing countries compared to those living in economically developed countries. The global burden of disease is uncertain because most previous studies to determine the prevalence of RHD in children relied on clinical screening criteria that lacked the sensitivity to detect most cases. The present study was performed to determine the prevalence of RHD in children and young adults in León, Nicaragua, an area previously thought to have a high prevalence of RHD. This was an observational study of 3,150 children aged 5 to 15 years and 489 adults aged 20 to 35 years randomly selected from urban and rural areas of León. Cardiopulmonary examinations and Doppler echocardiographic studies were performed on all subjects. Doppler echocardiographic diagnosis of RHD was based on predefined consensus criteria that were developed by a working group of the World Health Organization and the National Institutes of Health. The overall prevalence of RHD in children was 48 in 1,000 (95% confidence interval 35 in 1,000 to 60 in 1,000). The prevalence in urban children was 34 in 1,000, and in rural children it was 80 in 1,000. Using more stringent Doppler echocardiographic criteria designed to diagnose definite RHD in adults, the prevalence was 22 in 1,000 (95% confidence interval 8 in 1,000 to 37 in 1,000). In conclusion, the prevalence of RHD among children and adults in this economically disadvantaged population far exceeds previously predicted rates. The findings underscore the potential health and economic burden of acute rheumatic fever and RHD and support the need for more effective measures of prevention, which may include safe, effective, and affordable vaccines to prevent the streptococcal infections that trigger the disease.", "title": "" }, { "docid": "522938687849ccc9da8310ac9d6bbf9e", "text": "Machine learning models, especially Deep Neural Networks, are vulnerable to adversarial examples—malicious inputs crafted by adding small noises to real examples, but fool the models. Adversarial examples transfer from one model to another, enabling black-box attacks to real-world applications. In this paper, we propose a strong attack algorithm named momentum iterative fast gradient sign method (MI-FGSM) to discover adversarial examples. MI-FGSM is an extension of iterative fast gradient sign method (I-FGSM) but improves the transferability significantly. Besides, we study how to attack an ensemble of models efficiently. Experiments demonstrate the effectiveness of the proposed algorithm. 
We hope that MI-FGSM can serve as a benchmark attack algorithm for evaluating the robustness of various models and defense methods.", "title": "" }, { "docid": "785164fa04344d976c1d8ed148715ec2", "text": "Integrated Systems Health Management includes as key elements fault detection, fault diagnostics, and failure prognostics. Whereas fault detection and diagnostics have been the subject of considerable emphasis in the Artificial Intelligence (AI) community in the past, prognostics has not enjoyed the same attention. The reason for this lack of attention is in part because prognostics as a discipline has only recently been recognized as a game-changing technology that can push the boundary of systems health management. This paper provides a survey of AI techniques applied to prognostics. The paper is an update to our previously published survey of data-driven prognostics.", "title": "" }, { "docid": "55a37995369fe4f8ddb446d83ac0cecf", "text": "With the continued proliferation of smart mobile devices, Quick Response (QR) code has become one of the most-used types of two-dimensional code in the world. Aiming at beautifying the visual-unpleasant appearance of QR codes, existing works have developed a series of techniques. However, these works still leave much to be desired, such as personalization, artistry, and robustness. To address these issues, in this paper, we propose a novel type of aesthetic QR codes, SEE (Stylize aEsthEtic) QR code, and a three-stage approach to automatically produce such robust style-oriented codes. Specifically, in the first stage, we propose a method to generate an optimized baseline aesthetic QR code, which reduces the visual contrast between the noise-like black/white modules and the blended image. In the second stage, to obtain art style QR code, we tailor an appropriate neural style transformation network to endow the baseline aesthetic QR code with artistic elements. In the third stage, we design an error-correction mechanism by balancing two competing terms, visual quality and readability, to ensure the performance robust. Extensive experiments demonstrate that SEE QR code has high quality in terms of both visual appearance and robustness, and also offers a greater variety of personalized choices to users.", "title": "" }, { "docid": "379407880b47b82db77dbfa8c2941e04", "text": "The revolutionary advancement in Information and Communication Technologies (“ICT”) has intrinsically altered the scale on which human affairs take place; it has fostered an interdependence of local, national, and international communities that is far greater than any previously experienced. The most significant feature of this technological revolution is the invention of the Internet: this network of interactive and global communications complex that has resulted in a compression of space and time, transmogrified the classic models of commercial interaction, business, and economic paradigms, and reshaped our lives and communities accordingly.", "title": "" }, { "docid": "1c4e1feed1509e0a003dca23ad3a902c", "text": "With an expansive and ubiquitously available gold mine of educational data, Massive Open Online courses (MOOCs) have become the an important foci of learning analytics research. The hope is that this new surge of development will bring the vision of equitable access to lifelong learning opportunities within practical reach. 
MOOCs offer many valuable learning experiences to students, from video lectures, readings, assignments and exams, to opportunities to connect and collaborate with others through threaded discussion forums and other Web 2.0 technologies. Nevertheless, despite all this potential, MOOCs have so far failed to produce evidence that this potential is being realized in the current instantiation of MOOCs. In this work, we primarily explore video lecture interaction in Massive Open Online Courses (MOOCs), which is central to student learning experience on these educational platforms. As a research contribution, we operationalize video lecture clickstreams of students into behavioral actions, and construct a quantitative information processing index, that can aid instructors to better understand MOOC hurdles and reason about unsatisfactory learning outcomes. Our results illuminate the effectiveness of developing such a metric inspired by cognitive psychology, towards answering critical questions regarding students’ engagement, their future click interactions and participation trajectories that lead to in-video dropouts. We leverage recurring click behaviors to differentiate distinct video watching profiles for students in MOOCs. Additionally, we discuss about prediction of complete course dropouts, incorporating diverse perspectives from statistics and machine learning, to offer a more nuanced view into how the second generation of MOOCs be benefited, if course instructors were to better comprehend factors that lead to student attrition. Implications for research and practice are discussed.", "title": "" }, { "docid": "a8ca6ef7b99cca60f5011b91d09e1b06", "text": "When virtual teams need to establish trust at a distance, it is advantageous for them to use rich media to communicate. We studied the emergence of trust in a social dilemma game in four different communication situations: face-to-face, video, audio, and text chat. All three of the richer conditions were significant improvements over text chat. Video and audio conferencing groups were nearly as good as face-to-face, but both did show some evidence of what we term delayed trust (slower progress toward full cooperation) and fragile trust (vulnerability to opportunistic behavior)", "title": "" }, { "docid": "aaba5f2d01dcbed417605c0779c43e32", "text": "BACKGROUND\nMinor abrasions and skin tears are usually treated with gauze dressings and topical antibiotics requiring frequent and messy dressing changes.\n\n\nOBJECTIVE\nWe describe our experience with a low-cost, cyanoacrylate-based liquid dressing applied only once for minor abrasions and skin tears.\n\n\nMETHODS\nWe conducted a single-center, prospective, noncomparative study in adult emergency department (ED) patients with minor nonbleeding skin abrasions and class I and II skin tears. After cleaning the wound and achieving hemostasis, the wounds were covered with a single layer of a cyanoacrylate liquid dressing. Patients were followed every 1-2 days until healing.\n\n\nRESULTS\nWe enrolled 40 patients with 50 wounds including 39 abrasions and 11 skin tears. Mean (standard deviation) age was 54.5 (21.9) years and 57.5% were male. Wounds were located on the face (n = 16), hands (n = 14), legs (n = 11), and arms (n = 9). Pain scores (0 to 10 from none to worst) after application of the liquid dressing were 0 in 62% and 1-3 in the remaining patients. Follow-up was available on 36 patients and 46 wounds. No wounds re-bled and there were no wound infections. 
Only one wound required an additional dressing. Median (interquartile range [IQR]) time to complete sloughing of the adhesive was 7 (5.5-8) days. Median (IQR) time to complete healing and sloughing of the overlying scab was 10 (7.4-14) days.\n\n\nCONCLUSIONS\nOur study suggests that a single application of a low-cost cyanoacrylate-based liquid adhesive is a safe and effective treatment for superficial nonbleeding abrasions and class I and II skin tears that eliminates the need for topical antibiotics and dressings.", "title": "" }, { "docid": "eeb6f968622316d013942b6ea2b8c735", "text": "Using deep learning for different machine learning tasks such as image classification and word embedding has recently gained many attentions. Its appealing performance reported across specific Natural Language Processing (NLP) tasks in comparison with other approaches is the reason for its popularity. Word embedding is the task of mapping words or phrases to a low dimensional numerical vector. In this paper, we use deep learning to embed Wikipedia Concepts and Entities. The English version of Wikipedia contains more than five million pages, which suggest its capability to cover many English Entities, Phrases, and Concepts. Each Wikipedia page is considered as a concept. Some concepts correspond to entities, such as a person’s name, an organization or a place. Contrary to word embedding, Wikipedia Concepts Embedding is not ambiguous, so there are different vectors for concepts with similar surface form but different mentions. We proposed several approaches and evaluated their performance based on Concept Analogy and Concept Similarity tasks. The results show that proposed approaches have the performance comparable and in some cases even higher than the state-of-the-art methods.", "title": "" }, { "docid": "78ba417cf2cb6a809414feefe163b710", "text": "The product bundling problem is a challenging task in the e-Commerce domain. We propose a generative engine in order to find the bundle of products that best satisfies user requirements and, at the same time, seller needs such as the minimization of the dead stocks and the maximization of net income. The proposed system named Intelligent Bundle Suggestion and Generation (IBSAG) is designed in order to satisfy these requirements. Market Basket Analysis supports the system in user requirement elicitation task. Experimental results prove the ability of system in finding the optimal tradeoff between different and conflicting constraints.", "title": "" }, { "docid": "4e14e9cb95ed8bc3b352e3e1119b53e1", "text": "We introduce a fast and efficient convolutional neural network, ESPNet, for semantic segmentation of high resolution images under resource constraints. ESPNet is based on a new convolutional module, efficient spatial pyramid (ESP), which is efficient in terms of computation, memory, and power. ESPNet is 22 times faster (on a standard GPU) and 180 times smaller than the state-of-the-art semantic segmentation network PSPNet [1], while its categorywise accuracy is only 8% less. We evaluated ESPNet on a variety of semantic segmentation datasets including Cityscapes, PASCAL VOC, and a breast biopsy whole slide image dataset. Under the same constraints on memory and computation, ESPNet outperforms all the current efficient CNN networks such as MobileNet [16], ShuffleNet [17], and ENet [20] on both standard metrics and our newly introduced performance metrics that measure efficiency on edge devices. 
Our network can process high resolution images at a rate of 112 and 9 frames per second on a standard GPU and edge device, respectively.", "title": "" }, { "docid": "b2de917d74765e39562c60c74a88d7f3", "text": "Computer-phobic university students are easy to find today especially when it come to taking online courses. Affect has been shown to influence users’ perceptions of computers. Although self-reported computer anxiety has declined in the past decade, it continues to be a significant issue in higher education and online courses. More importantly, anxiety seems to be a critical variable in relation to student perceptions of online courses. A substantial amount of work has been done on computer anxiety and affect. In fact, the technology acceptance model (TAM) has been extensively used for such studies where affect and anxiety were considered as antecedents to perceived ease of use. However, few, if any, have investigated the interplay between the two constructs as they influence perceived ease of use and perceived usefulness towards using online systems for learning. In this study, the effects of affect and anxiety (together and alone) on perceptions of an online learning system are investigated. Results demonstrate the interplay that exists between affect and anxiety and their moderating roles on perceived ease of use and perceived usefulness. Interestingly, the results seem to suggest that affect and anxiety may exist simultaneously as two weights on each side of the TAM scale.", "title": "" }, { "docid": "c376b3b413e21c907b344815ff1fda2c", "text": "There is a current trend of wearable sensing with regards to health. Wearable sensors and devices allow us to monitor various aspects of our lives. Through this monitoring, wearable systems can utilise data to positively influence an individual’s overall health and wellbeing. We envisage a future where technology can effectively help us to become fitter and healthier, but the current state of wearables and future directions are unclear. In this paper, we present an overview of current methods used within wearable applications to monitor and support positive health and wellbeing within an individual. We then highlight issues and challenges outlined by previous studies and describe the future focuses of work.", "title": "" }, { "docid": "45f120b05b3c48cd95d5dd55031987cb", "text": "n engl j med 359;6 www.nejm.org august 7, 2008 628 From the Department of Medicine (O.O.F., E.S.A.) and the Division of Infectious Diseases (P.A.M.), Johns Hopkins Bayview Medical Center, Johns Hopkins School of Medicine, Baltimore; the Division of Infectious Diseases (D.R.K.) and the Division of General Medicine (S.S.), University of Michigan Medical School, Ann Arbor; and the Department of Veterans Affairs Health Services Research and Development Center of Excellence, Ann Arbor, MI (S.S.). Address reprint requests to Dr. Antonarakis at the Johns Hopkins Bayview Medical Center, Department of Medicine, B-1 North, 4940 Eastern Ave., Baltimore, MD 21224, or at eantona1@ jhmi.edu.", "title": "" }, { "docid": "5028d250c60a70c0ed6954581ab6cfa7", "text": "Social Commerce as a result of the advancement of Social Networking Sites and Web 2.0 is increasing as a new model of online shopping. With techniques to improve the website using AJAX, Adobe Flash, XML, and RSS, Social Media era has changed the internet user behavior to be more communicative and active in internet, they love to share information and recommendation among communities. 
Social commerce is also changing the way people shop online; it is becoming the new way of online shopping. The new challenge is that businesses have to provide an interactive yet interesting website for internet users, and the website should give them an experience that satisfies their needs. The purpose of this research is to analyze the impact of website quality (System Quality, Information Quality, and Service Quality) as well as the interaction feature (communication feature) of a social commerce website on customers' purchase intention. Data from 134 customers of a social commerce website were used to test the model. Multiple linear regression was used to compute the statistical results, while confirmatory factor analysis was also conducted to test the validity of each variable. The results show that website quality and the communication feature are important aspects of customers' purchase intention when purchasing on a social commerce website.", "title": "" }, { "docid": "f6abaaeb06709e20122ef3fe07a88894", "text": "BACKGROUND\nBullying is a problem in schools in many countries. There would be a benefit in the availability of a psychometrically sound instrument for its measurement, for use by teachers and researchers. The Olweus Bully/Victim Questionnaire has been used in a number of studies but comprehensive evidence on its validity is not available.\n\n\nAIMS\nTo examine the conceptual design, construct validity and reliability of the Revised Olweus Bully/Victim Questionnaire (OBVQ) and to provide further evidence on the prevalence of different forms of bullying behaviour.\n\n\nSAMPLE\nAll 335 pupils (160 [47.8%] girls; 175 [52.2%] boys, mean age 11.9 years [range 11.2-12.8 years]), in 21 classes of a stratified sample of 7 Greek Cypriot primary schools.\n\n\nMETHOD\nThe OBVQ was administered to the sample. Separate scales were created comprising (a) the items of the questionnaire concerning the extent to which pupils are being victimized; and (b) those concerning the extent to which pupils express bullying behaviour. Using the Rasch model, both scales were analysed for reliability, fit to the model, meaning, and validity. Both scales were also analysed separately for each of two sample groups (i.e. boys and girls) to test their invariance.\n\n\nRESULTS\nAnalysis of the data revealed that the instrument has satisfactory psychometric properties; namely, construct validity and reliability. The conceptual design of the instrument was also confirmed. The analysis leads also to suggestions for improving the targeting of items against student measures. Support was also provided for the relative prevalence of verbal, indirect and physical bullying. As in other countries, Cypriot boys used and experienced more bullying than girls, and boys used more physical and less indirect forms of bullying than girls.\n\n\nCONCLUSIONS\nThe OBVQ is a psychometrically sound instrument that measures two separate aspects of bullying, and whose use is supported for international studies of bullying in different countries. However, improvements to the questionnaire were also identified to provide increased usefulness to teachers tackling this significant problem facing schools in many countries.", "title": "" }, { "docid": "fdc8a54623f38ec29012d2f0f3bda8b1", "text": "Object tracking is an important issue for research and application related to visual servoing and more generally for robot vision. In this paper, we address the problem of realizing visual servoing tasks on complex objects in real environments.
We briefly present a set of tracking algorithms (2D feature-based or motion-based tracking, 3D model-based tracking, ...) that have been used for ten years to achieve this goal.", "title": "" } ]
scidocsrr
776f809fa8e92d5db919cd8476d8fb24
A Systematic Mapping Study of Software Development With GitHub
[ { "docid": "0153774b49121d8735cc3d33df69fc00", "text": "A common requirement of many empirical software engineering studies is the acquisition and curation of data from software repositories. During the last few years, GitHub has emerged as a popular project hosting, mirroring and collaboration platform. GitHub provides an extensive rest api, which enables researchers to retrieve both the commits to the projects' repositories and events generated through user actions on project resources. GHTorrent aims to create a scalable off line mirror of GitHub's event streams and persistent data, and offer it to the research community as a service. In this paper, we present the project's design and initial implementation and demonstrate how the provided datasets can be queried and processed.", "title": "" } ]
[ { "docid": "f782b9708896f97f2e1312fa8605dc3e", "text": "Persuasive Robotics is the study of persuasion as it applies to human-robot interaction (HRI). Persuasion can be generally defined as an attempt to change another's beliefs or behavior. The act of influencing others is fundamental to nearly every type of social interaction. Any agent desiring to seamlessly operate in a social manner will need to incorporate this type of core human behavior. As in human interaction, myriad aspects of a humanoid robot's appearance and behavior can significantly alter its persuasiveness - this work will focus on one particular factor: gender. In the current study, run at the Museum of Science in Boston, subjects interacted with a humanoid robot whose gender was varied. After a short interaction and persuasive appeal, subjects responded to a donation request made by the robot, and subsequently completed a post-study questionnaire. Findings showed that men were more likely to donate money to the female robot, while women showed little preference. Subjects also tended to rate the robot of the opposite sex as more credible, trustworthy, and engaging. In the case of trust and engagement the effect was much stronger between male subjects and the female robot. These results demonstrate the importance of considering robot and human gender in the design of HRI.", "title": "" }, { "docid": "f0285873e91d0470e8fbd8ce4430742f", "text": "Copying an element from a photo and pasting it into a painting is a challenging task. Applying photo compositing techniques in this context yields subpar results that look like a collage — and existing painterly stylization algorithms, which are global, perform poorly when applied locally. We address these issues with a dedicated algorithm that carefully determines the local statistics to be transferred. We ensure both spatial and inter-scale statistical consistency and demonstrate that both aspects are key to generating quality results. To cope with the diversity of abstraction levels and types of paintings, we introduce a technique to adjust the parameters of the transfer depending on the painting. We show that our algorithm produces significantly better results than photo compositing or global stylization techniques and that it enables creative painterly edits that would be otherwise difficult to achieve. CCS Concepts •Computing methodologies → Image processing;", "title": "" }, { "docid": "a33ccc1d1f906b2f09669166a1fe093c", "text": "A writer’s style depends not just on personal traits but also on her intent and mental state. In this paper, we show how variants of the same writing task can lead to measurable differences in writing style. We present a case study based on the story cloze task (Mostafazadeh et al., 2016a), where annotators were assigned similar writing tasks with different constraints: (1) writing an entire story, (2) adding a story ending for a given story context, and (3) adding an incoherent ending to a story. We show that a simple linear classifier informed by stylistic features is able to successfully distinguish among the three cases, without even looking at the story context. In addition, combining our stylistic features with language model predictions reaches state of the art performance on the story cloze challenge. 
Our results demonstrate that different task framings can dramatically affect the way people write.", "title": "" }, { "docid": "1bc4aabbc8aed4f3034358912d9728d5", "text": "Anjali Mishra1, Amit Mishra2 1 Master’s Degree Student, Electronics and Communication Engineering 2 Assistant Professor, Electronics and Communication Engineering 1,2 Vindhya Institute of Technology & Science, Jabalpur, Madhya Pradesh, India PIN – 482021 Email: [email protected] , 2 [email protected] ---------------------------------------------------------------------***--------------------------------------------------------------------Abstract Cognitive Radio presents a new opportunity area to explore for better utilization of a scarce natural resource like spectrum which is under focus due to increased presence of new communication devices, density of users and development of new data intensive applications. Cognitive Radio utilizes dynamic utilization of spectrum and is positioned as a promising solution to spectrum underutilization problem. However, reliability of a CR system in a noisy environment remains a challenge area. Especially manmade impulsive noise makes spectrum sensing difficult. In this paper we have presented a simulation model to analyze the effect of impulsive noise in Cognitive Radio system. Primary user detection in presence of impulsive noise is investigated for different noise thresholds and other signal parameters of interest using the unconventional power spectral density based detection approach. Also, possible alternatives for accurate primary user detection which are of interest for future research in this area are discussed for practical implementation.", "title": "" }, { "docid": "0d2b961b5546091f05ed7a8eff5f1d7f", "text": "Initial Cryptoasset Offering (ICO), also often called Initial Coin Offering or Initial Token Offering (ITO) is a new means of fundraising through blockchain technology, which allows startups to raise large amounts of funds from the crowd in an unprecedented speed. However it is not easy for ordinary investors to distinguish genuine fundraising activities through ICOs from scams. Different websites that gather and evaluate ICOs at different stages have emerged as a solution to this issue. What remains unclear is how these websites are evaluating ICOs, and consequently how reliable and credible their evaluations are. In this paper we present the first findings of an analysis of a set of 28 ICO evaluation websites, aiming at revealing the state of the practice in terms of ICO evaluation. Key information about ICOs collected by these websites are categorised, and key factors that differentiate the evaluation mechanisms employed by these evaluation websites are identified. The findings of our study could help a better understanding of what entails to properly evaluate ICOs. It is also a first step towards discovering the key success factors of ICOs.", "title": "" }, { "docid": "e2f6434cf7acfa6bd722f893c9bd1851", "text": "Image Synthesis for Self-Supervised Visual Representation Learning", "title": "" }, { "docid": "fccbcdff722a297e5a389674d7557a18", "text": "For the last few decades more than twenty standardized usability questionnaires for evaluating software systems have been proposed. These instruments have been widely used in the assessment of usability of user interfaces. They have their own characteristics, can be generic or address specific kinds of systems and can be composed of one or several items. 
Some comparison or comparative studies were also conducted to identify the best one in different situations. All these issues should be considered while choosing a questionnaire. In this paper, we present an extensive review of these questionnaires considering their key features, some classifications and main comparison studies already performed. Moreover, we present the result of a detailed analysis of all items being evaluated in each questionnaire to indicate those that can identify users’ perceptions about specific usability problems. This analysis was performed by confronting each questionnaire item (around 475 items) with usability criteria proposed by quality standards (ISO 9421-11 and ISO/WD 9241-112) and classical quality ergonomic criteria.", "title": "" }, { "docid": "ed33b5fae6bc0af64668b137a3a64202", "text": "In this study the effect of the Edmodo social learning environment on mobile assisted language learning (MALL) was examined by seeking the opinions of students. Using a quantitative experimental approach, this study was conducted by conducting a questionnaire before and after using the social learning network Edmodo. Students attended lessons with their mobile devices. The course materials were shared in the network via Edmodo group sharing tools. The students exchanged idea and developed projects, and felt as though they were in a real classroom setting. The students were also able to access various multimedia content. The results of the study indicate that Edmodo improves students’ foreign language learning, increases their success, strengthens communication among students, and serves as an entertaining learning environment for them. The educationally suitable sharing structure and the positive user opinions described in this study indicate that Edmodo is also usable in other lessons. Edmodo can be used on various mobile devices, including smartphones, in addition to the web. This advantageous feature contributes to the usefulness of Edmodo as a scaffold for education.", "title": "" }, { "docid": "634509a9d6484ba51d01f9c049551df5", "text": "In this paper, we propose a joint training approach to voice activity detection (VAD) to address the issue of performance degradation due to unseen noise conditions. Two key techniques are integrated into this deep neural network (DNN) based VAD framework. First, a regression DNN is trained to map the noisy to clean speech features similar to DNN-based speech enhancement. Second, the VAD part to discriminate speech against noise backgrounds is also a DNN trained with a large amount of diversified noisy data synthesized by a wide range of additive noise types. By stacking the classification DNN on top of the enhancement DNN, this integrated DNN can be jointly trained to perform VAD. The feature mapping DNN serves as a noise normalization module aiming at explicitly generating the “clean” features which are easier to be correctly recognized by the following classification DNN. Our experiment results demonstrate the proposed noise-universal DNNbased VAD algorithm achieves a good generalization capacity to unseen noises, and the jointly trained DNNs consistently and significantly outperform the conventional classification-based DNN for all the noise types and signal-to-noise levels tested.", "title": "" }, { "docid": "efc63a7feba2dad141177fcd0160f7e2", "text": "Recently, very high resolution (VHR) panchromatic and multispectral (MS) remote-sensing images can be acquired easily. 
However, it is still a challenging task to fuse and classify these VHR images. Generally, there are two ways for the fusion and classification of panchromatic and MS images. One way is to use a panchromatic image to sharpen an MS image, and then classify a pan-sharpened MS image. Another way is to extract features from panchromatic and MS images, respectively, and then combine these features for classification. In this paper, we propose a superpixel-based multiple local convolution neural network (SML-CNN) model for panchromatic and MS images classification. In order to reduce the amount of input data for the CNN, we extend simple linear iterative clustering algorithm for segmenting MS images and generating superpixels. Superpixels are taken as the basic analysis unit instead of pixels. To make full advantage of the spatial-spectral and environment information of superpixels, a superpixel-based multiple local regions joint representation method is proposed. Then, an SML-CNN model is established to extract an efficient joint feature representation. A softmax layer is used to classify these features learned by multiple local CNN into different categories. Finally, in order to eliminate the adverse effects on the classification results within and between superpixels, we propose a multi-information modification strategy that combines the detailed information and semantic information to improve the classification performance. Experiments on the classification of Vancouver and Xi’an panchromatic and MS image data sets have demonstrated the effectiveness of the proposed approach.", "title": "" }, { "docid": "c09adc1924c9c1b32c33b23d9df489b9", "text": "In recent years, “document store” NoSQL systems have exploded in popularity. A large part of this popularity has been driven by the adoption of the JSON data model in these NoSQL systems. JSON is a simple but expressive data model that is used in many Web 2.0 applications, and maps naturally to the native data types of many modern programming languages (e.g. Javascript). The advantages of these NoSQL document store systems (like MongoDB and CouchDB) are tempered by a lack of traditional RDBMS features, notably a sophisticated declarative query language, rich native query processing constructs (e.g. joins), and transaction management providing ACID safety guarantees. In this paper, we investigate whether the advantages of the JSON data model can be added to RDBMSs, gaining some of the traditional benefits of relational systems in the bargain. We present Argo, an automated mapping layer for storing and querying JSON data in a relational system, and NoBench, a benchmark suite that evaluates the performance of several classes of queries over JSON data in NoSQL and SQL databases. Our results point to directions of how one can marry the best of both worlds, namely combining the flexibility of JSON to support the popular document store model with the rich query processing and transactional properties that are offered by traditional relational DBMSs.", "title": "" }, { "docid": "b100ca202f99e3ee086cd61f01349a30", "text": "This paper is concerned with inertial-sensor-based tracking of the gravitation direction in mobile devices such as smartphones. Although this tracking problem is a classical one, choosing a good state-space for this problem is not entirely trivial. Even though for many other orientation related tasks a quaternion-based representation tends to work well, for gravitation tracking their use is not always advisable. 
In this paper we present a convenient linear quaternion-free state-space model for gravitation tracking. We also discuss the efficient implementation of the Kalman filter and smoother for the model. Furthermore, we propose an adaption mechanism for the Kalman filter which is able to filter out shot-noises similarly as has been proposed in context of adaptive and robust Kalman filtering. We compare the proposed approach to other approaches using measurement data collected with a smartphone.", "title": "" }, { "docid": "dadd12e17ce1772f48eaae29453bc610", "text": "Publications Learning Word Vectors for Sentiment Analysis. Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. The 49 th Annual Meeting of the Association for Computational Linguistics (ACL 2011). Spectral Chinese Restaurant Processes: Nonparametric Clustering Based on Similarities. Richard Socher, Andrew Maas, and Christopher D. Manning. The 15 th International Conference on Artificial Intelligence and Statistics (AISTATS 2010). A Probabilistic Model for Semantic Word Vectors. Andrew L. Maas and Andrew Y. Ng. NIPS 2010 Workshop on Deep Learning and Unsupervised Feature Learning. One-Shot Learning with Bayesian Networks. Andrew L. Maas and Charles Kemp. Proceedings of the 31 st", "title": "" }, { "docid": "a8a483db765f791a6bd27e066eee20b0", "text": "Autoerotic death by hanging or ligature is a method of autoeroticism well known by forensic pathologists. In order to analyze autoerotic deaths of nonclassic hanging or ligature type, this paper reviews all published cases of autoerotic deaths from 1954 to 2004, with the exclusion of homicide cases or cases in which the autoerotic activity was not solitary. These articles were obtained through a systematic Medline database search. A total of 408 cases of such deaths has been reported in 57 articles. For each case, the following characteristics are presented here: sex, age, race, method of autoerotic activity, cause of death, and location where the body was found. Autoerotic death practioners were predominantly Caucasian males. Victims were aged from 9 to 77 years and were mainly found in various indoor locations. Most cases were asphyxia by hanging, ligature, plastic bags, chemical substances, or a mixture of these. Still, atypical methods of autoerotic activity leading to death accounted for about 10.3% of cases in the literature and are classified here into five broad categories: electrocution (3.7%), overdressing/body wrapping (1.5%), foreign body insertion (1.2%), atypical asphyxia method (2.9%), and miscellaneous (1.0%). All these atypical methods are further discussed individually.", "title": "" }, { "docid": "94189593d39be7c5e5411482c7b996e3", "text": "In this paper, interval-valued fuzzy planar graphs are defined and several properties are studied. The interval-valued fuzzy graphs are more efficient than fuzzy graphs, since the degree of membership of vertices and edges lie within the interval [0, 1] instead at a point in fuzzy graphs. We also use the term ‘degree of planarity’ to measures the nature of planarity of an interval-valued fuzzy graph. The other relevant terms such as strong edges, interval-valued fuzzy faces, strong interval-valued fuzzy faces are defined here. The interval-valued fuzzy dual graph which is closely associated to the interval-valued fuzzy planar graph is defined. Several properties of interval-valued fuzzy dual graph are also studied. 
An example of interval-valued fuzzy planar graph is given.", "title": "" }, { "docid": "0954886e4de4def3b8593cc9c72dafcd", "text": "AIM\nThe aim was to perform comparative analysis of bioactive, contemporary bulk-fill resin-based composites (RBCs) and conventional glass-ionomer materials for flexural strength (FS), diametral tensile strength (DTS), and Vickers hardness number (VHN) in the presence of thermocycling.\n\n\nMATERIALS AND METHODS\nFive restorative materials [Tetric N-Ceram Bulk Fill; smart dentin replacement (SDR) Flowable Material; Bioactive restorative material (ACTIVA Bulk Fill); Ketac Universal Aplicap; and GC Fuji II] were evaluated for DTS, FS, and VHN. Half the samples in each material group were ther-mocycled. The DTS was performed under compressive load at a cross-head speed of 1.0 mm/min. The FS was assessed by three-point bending test at a cross-head speed of 0.5 mm/min. The VHN was determined using a Vickers diamond indenter at 50 gf load for 15 seconds. Differences in FS, DTS, and VHN were analyzed using analysis of variance (ANOVA) and Tukey post hoc tests at a = 0.05 level of significance.\n\n\nRESULTS\nN-Ceram, ACTIVA, and SDR demonstrated the highest and comparable (p > 0.05) FS. The SDR had the highest DTS value (141.28 ± 0.94), followed by N-Ceram (136.61 ± 1.56) and ACTIVA (129.05 ± 1.78). Ketac had the highest VHN value before and after thermocycling.\n\n\nCONCLUSION\nACTIVA showed mechanical properties (FS and DTS) comparable with bulk-fill resin composite materials. ACTIVA showed potential for durability, as VHN was comparable post-thermocycling.\n\n\nCLINICAL SIGNIFICANCE\nBioactive materials showed acceptable DTS and FS values. However, hardness was compromised compared with included materials. ACTIVA Bulk Fill shows potential for dentin replacement but it needs to be covered with a surface-resistant restorative material. Further studies to improve surface characteristics of ACTIVA Bulk Fill are recommended.", "title": "" }, { "docid": "40c90bf58aae856c7c72bac573069173", "text": "Most deep reinforcement learning algorithms are data inefficient in complex and rich environments, limiting their applicability to many scenarios. One direction for improving data efficiency is multitask learning with shared neural network parameters, where efficiency may be improved through transfer across related tasks. In practice, however, this is not usually observed, because gradients from different tasks can interfere negatively, making learning unstable and sometimes even less data efficient. Another issue is the different reward schemes between tasks, which can easily lead to one task dominating the learning of a shared model. We propose a new approach for joint training of multiple tasks, which we refer to as Distral (distill & transfer learning). Instead of sharing parameters between the different workers, we propose to share a “distilled” policy that captures common behaviour across tasks. Each worker is trained to solve its own task while constrained to stay close to the shared policy, while the shared policy is trained by distillation to be the centroid of all task policies. Both aspects of the learning process are derived by optimizing a joint objective function. We show that our approach supports efficient transfer on complex 3D environments, outperforming several related methods. 
Moreover, the proposed learning process is more robust to hyperparameter settings and more stable—attributes that are critical in deep reinforcement learning.", "title": "" }, { "docid": "65963f25d2191abcd7b4a83a257f8269", "text": "A novel technique for obtaining low sidelobes pattern in a planar microstrip array antenna is proposed. This technique involves the addition of two external complementary split ring resonators (CSRRs) at both the ends of each row of the antenna in the ground plane. These two CSRRs together produce an interferometer pattern for the reduction of sidelobes. An 8×4 planar array at 9.9 GHz is designed and fabricated to demonstrate the concept and a sidelobe reduction of 4.3 dB achieved.", "title": "" }, { "docid": "9a10716e1d7e24b790fb5dd48ad254ab", "text": "Probabilistic models based on Bayes' rule are an increasingly popular approach to understanding human cognition. Bayesian models allow immense representational latitude and complexity. Because they use normative Bayesian mathematics to process those representations, they define optimal performance on a given task. This article focuses on key mechanisms of Bayesian information processing, and provides numerous examples illustrating Bayesian approaches to the study of human cognition. We start by providing an overview of Bayesian modeling and Bayesian networks. We then describe three types of information processing operations-inference, parameter learning, and structure learning-in both Bayesian networks and human cognition. This is followed by a discussion of the important roles of prior knowledge and of active learning. We conclude by outlining some challenges for Bayesian models of human cognition that will need to be addressed by future research. WIREs Cogn Sci 2011 2 8-21 DOI: 10.1002/wcs.80 For further resources related to this article, please visit the WIREs website.", "title": "" }, { "docid": "8e770bdbddbf28c1a04da0f9aad4cf16", "text": "This paper presents a novel switch-mode power amplifier based on a multicell multilevel circuit topology. The total output voltage of the system is formed by series connection of several switching cells having a low dc-link voltage. Therefore, the cells can be realized using modern low-voltage high-current power MOSFET devices and the dc link can easily be buffered by rechargeable batteries or “super” capacitors to achieve very high amplifier peak output power levels (“flying-battery” concept). The cells are operated in a phase-shifted interleaved pulsewidth-modulation mode, which, in connection with the low partial voltage of each cell, reduces the filtering effort at the output of the total amplifier to a large extent and, consequently, improves the dynamic system behavior. The paper describes the operating principle of the system, analyzes the fundamental relationships being relevant for the circuit design, and gives guidelines for the dimensioning of the control circuit. Furthermore, simulation results as well as results of measurements taken from a laboratory setup are presented.", "title": "" } ]
scidocsrr
9d8028780e01ef791681d5320005e40a
Electroactive polymer-based devices for e-textiles in biomedicine
[ { "docid": "991ab90963355f16aa2a83655577ba54", "text": "Highly durable, flexible, and even washable multilayer electronic circuitry can be constructed on textile substrates, using conductive yarns and suitably packaged components. In this paper we describe the development of e-broidery (electronic embroidery, i.e., the patterning of conductive textiles by numerically controlled sewing or weaving processes) as a means of creating computationally active textiles. We compare textiles to existing flexible circuit substrates with regard to durability, conformability, and wearability. We also report on: some unique applications enabled by our work; the construction of sensors and user interface elements in textiles; and a complete process for creating flexible multilayer circuits on fabric substrates. This process maintains close compatibility with existing electronic components and design tools, while optimizing design techniques and component packages for use in textiles. E veryone wears clothing. It conveys a sense of the wearer's identity, provides protection from the environment, and supplies a convenient way to carry all the paraphernalia of daily life. Of course, clothing is made from textiles, which are themselves among the first composite materials engineered by humans. Textiles have mechanical, aesthetic, and material advantages that make them ubiquitous in both society and industry. The woven structure of textiles and spun fibers makes them durable, washable, and conformal, while their composite nature affords tremendous variety in their texture, for both visual and tactile senses. Sadly, not everyone wears a computer, although there is presently a great deal of interest in \" wear-able computing. \" 1 Wearable computing may be seen as the result of a design philosophy that integrates embedded computation and sensing into everyday life to give users continuous access to the capabilities of personal computing. Ideally, computers would be as convenient, durable, and comfortable as clothing, but most wearable computers still take an awkward form that is dictated by the materials and processes traditionally used in electronic fabrication. The design principle of packaging electronics in hard plastic boxes (no matter how small) is pervasive, and alternatives are difficult to imagine. As a result, most wearable computing equipment is not truly wearable except in the sense that it fits into a pocket or straps onto the body. What is needed is a way to integrate technology directly into textiles and clothing. Furthermore, textile-based computing is not limited to applications in wearable computing; in fact, it is broadly applicable to ubiquitous computing, allowing the integration of interactive elements into furniture and decor in general. In …", "title": "" } ]
[ { "docid": "fb5e9a15429c9361dbe577ca8db18e46", "text": "Most experiments are done in laboratories. However, there is also a theory and practice of field experimentation. It has had its successes and failures over the past four decades but is now increasingly used for answering causal questions. This is true for both randomized and-perhaps more surprisingly-nonrandomized experiments. In this article, we review the history of the use of field experiments, discuss some of the reasons for their current renaissance, and focus the bulk of the article on the particular technical developments that have made this renaissance possible across four kinds of widely used experimental and quasi-experimental designs-randomized experiments, regression discontinuity designs in which those units above a cutoff get one treatment and those below get another, short interrupted time series, and nonrandomized experiments using a nonequivalent comparison group. We focus this review on some of the key technical developments addressing problems that previously stymied accurate effect estimation, the solution of which opens the way for accurate estimation of effects under the often difficult conditions of field implementation-the estimation of treatment effects under partial treatment implementation, the prevention and analysis of attrition, analysis of nested designs, new analytic developments for both regression discontinuity designs and short interrupted time series, and propensity score analysis. We also cover the key empirical evidence showing the conditions under which some nonrandomized experiments may be able to approximate results from randomized experiments.", "title": "" }, { "docid": "796d6bd4d76be658d0886d6bc4952af1", "text": "This paper reports on the successful use of Graasp, a social media platform, by university students for their collaborative work. Graasp features a number of innovations, such as administrator-free creation of collaborative spaces, a context-aware recommendation and privacy management. In the context of a EU-funded project involving large test beds, we have been able to extend this platform with lightweight tools (widgets) aimed for learning and competence development and to validate its usefulness in a collaborative learning context.", "title": "" }, { "docid": "e80212110cc32d51a3782259932a8490", "text": "In this paper, a new approach for handling fuzzy AHP is introduced, with the use of triangular fuzzy numbers for pairwise comprison scale of fuzzy AHP, and the use of the extent analysis method for the synthetic extent value S i of the pairwise comparison. By applying the principle of the comparison of fuzzy numbers, that is, V ( M l >1 M 2) = 1 iff mj >i m z, V ( M z >/M~) = hgt(M~ A M z) =/xM,(d), the vectors of weight with respect to each element under a certain criterion are represented by d( A i) = min V(S i >1 Sk), k = 1, 2 . . . . . n; k -4= i. This decision process is demonstrated by an example.", "title": "" }, { "docid": "24387104af78fd752c20764e81e4aaa5", "text": "This paper considers the problem of tracking a dynamic sparse channel in a broadband wireless communication system. A probabilistic signal model is firstly proposed to describe the special features of temporal correlations of dynamic sparse channels: path delays change slowly over time, while path gains evolve faster. 
Based on such temporal correlations, we then propose the differential orthogonal matching pursuit (D-OMP) algorithm to track a dynamic sparse channel in a sequential way by updating the small channel variation over time. Compared with other channel tracking algorithms, simulation results demonstrate that the proposed D-OMP algorithm can track dynamic sparse channels faster with improved accuracy.", "title": "" }, { "docid": "445487bf85f9731b94f79a8efc9d2830", "text": "The realism of avatars in terms of behavior and form is critical to the development of collaborative virtual environments. In the study we utilized state of the art, real-time face tracking technology to track and render facial expressions unobtrusively in a desktop CVE. Participants in dyads interacted with each other via either a video-conference (high behavioral realism and high form realism), voice only (low behavioral realism and low form realism), or an emotibox that rendered the dimensions of facial expressions abstractly in terms of color, shape, and orientation on a rectangular polygon (high behavioral realism and low form realism). Verbal and non-verbal self-disclosure were lowest in the videoconference condition while self-reported copresence and success of transmission and identification of emotions were lowest in the emotibox condition. Previous work demonstrates that avatar realism increases copresence while decreasing self-disclosure. We discuss the possibility of a hybrid realism solution that maintains high copresence without lowering self-disclosure, and the benefits of such an avatar on applications such as distance learning and therapy.", "title": "" }, { "docid": "a38fe2a01aa7894a0b11e70841543332", "text": "This article may be used for research, teaching and private study purposes. Any substantial or systematic reproduction, redistribution , reselling , loan or sub-licensing, systematic supply or distribution in any form to anyone is expressly forbidden. The publisher does not give any warranty express or implied or make any representation that the contents will be complete or accurate or up to date. The accuracy of any instructions, formulae and drug doses should be independently verified with primary sources. The publisher shall not be liable for any loss, actions, claims, proceedings, demand or costs or damages whatsoever or howsoever caused arising directly or indirectly in connection with or arising out of the use of this material. Geographic data and tools are essential in all aspects of emergency management: preparedness, response, recovery, and mitigation. Geographic information created by amateur citizens, often known as volunteered geographic information, has recently provided an interesting alternative to traditional authoritative information from mapping agencies and corporations, and several recent papers have provided the beginnings of a literature on the more fundamental issues raised by this new source. Data quality is a major concern, since volunteered information is asserted and carries none of the assurances that lead to trust in officially created data. During emergencies time is the essence, and the risks associated with volunteered information are often outweighed by the benefits of its use. An example is discussed using the four wildfires that impacted the Santa Barbara area in 2007Á2009, and lessons are drawn. 1. 
Introduction Recent disasters have drawn attention to the vulnerability of human populations and infrastructure, and the extremely high cost of recovering from the damage they have caused. In all of these cases impacts were severe, in damage, injury, and loss of life, and were spread over large areas. In all of these cases modern technology has brought reports and images to the almost immediate attention of much of the world's population, and in the Katrina case it was possible for millions around the world to watch the events as they unfolded in near-real time. Images captured from satellites have been used to create damage assessments, and digital maps have been used to direct supplies and to guide the recovery effort, in an increasingly important application of Digital Earth. Nevertheless it has been clear in all of these cases that the potential of such data, and of geospatial data and tools more generally, …", "title": "" }, { "docid": "b231f2c6b19d5c38b8aa99ec1b1e43da", "text": "Many models of social network formation implicitly assume that network properties are static in steady-state. In contrast, actual social networks are highly dynamic: allegiances and collaborations expire and may or may not be renewed at a later date. Moreover, empirical studies show that human social networks are dynamic at the individual level but static at the global level: individuals’ degree rankings change considerably over time, whereas network-level metrics such as network diameter and clustering coefficient are relatively stable. There have been some attempts to explain these properties of empirical social networks using agent-based models in which agents play social dilemma games with their immediate neighbours, but can also manipulate their network connections to strategic advantage. However, such models cannot straightforwardly account for reciprocal behaviour based on reputation scores (“indirect reciprocity”), which is known to play an important role in many economic interactions. In order to account for indirect reciprocity, we model the network in a bottom-up fashion: the network emerges from the low-level interactions between agents. By so doing we are able to simultaneously account for the effect of both direct reciprocity (e.g. “tit-for-tat”) as well as indirect reciprocity (helping strangers in order to increase one’s reputation). This leads to a strategic equilibrium in the frequencies with which strategies are adopted in the population as a whole, but intermittent cycling over different strategies at the level of individual agents, which in turn gives rise to social networks which are dynamic at the individual level but stable at the network level.", "title": "" }, { "docid": "8869cab615e5182c7c03f074ead081f7", "text": "This article introduces the principal concepts of multimedia cloud computing and presents a novel framework. We address multimedia cloud computing from multimedia-aware cloud (media cloud) and cloud-aware multimedia (cloud media) perspectives. First, we present a multimedia-aware cloud, which addresses how a cloud can perform distributed multimedia processing and storage and provide quality of service (QoS) provisioning for multimedia services. 
To achieve a high QoS for multimedia services, we propose a media-edge cloud (MEC) architecture, in which storage, central processing unit (CPU), and graphics processing unit (GPU) clusters are presented at the edge to provide distributed parallel processing and QoS adaptation for various types of devices.", "title": "" }, { "docid": "8c9ec6bcc85416846921f993141982a6", "text": "Encryption and integrity trees guard against physical attacks, but harm performance. Prior academic work has speculated around the latency of integrity verification, but has done so in an insecure manner. No industrial implementations of secure processors have included speculation. This work presents PoisonIvy, a mechanism which speculatively uses data before its integrity has been verified while preserving security and closing address-based side-channels. PoisonIvy reduces performance overheads from 40% to 20% for memory intensive workloads and down to 1.8%, on average.", "title": "" }, { "docid": "43f9cd44dee709339fe5b11eb73b15b6", "text": "Mutual interference of radar systems has been identified as one of the major challenges for future automotive radar systems. In this work the interference of frequency (FMCW) and phase modulated continuous wave (PMCW) systems is investigated by means of simulations. All twofold combinations of the aforementioned systems are considered. The interference scenario follows a typical use-case from the well-known MOre Safety for All by Radar Interference Mitigation (MOSARIM) study. The investigated radar systems operate with similar system parameters to guarantee a certain comparability, but with different waveform durations, and chirps with different slopes and different phase code sequences, respectively. Since the effects in perfect synchrony are well understood, we focus on the cases where both systems exhibit a certain asynchrony. It is shown that the energy received from interferers can cluster in certain Doppler bins in the range-Doppler plane when systems exhibit a slight asynchrony.", "title": "" }, { "docid": "da04a904a236c9b4c3c335eb7c65246e", "text": "BACKGROUND\nIdentifying the emotional state is helpful in applications involving patients with autism and other intellectual disabilities; computer-based training, human computer interaction etc. Electrocardiogram (ECG) signals, being an activity of the autonomous nervous system (ANS), reflect the underlying true emotional state of a person. However, the performance of various methods developed so far lacks accuracy, and more robust methods need to be developed to identify the emotional pattern associated with ECG signals.\n\n\nMETHODS\nEmotional ECG data was obtained from sixty participants by inducing the six basic emotional states (happiness, sadness, fear, disgust, surprise and neutral) using audio-visual stimuli. The non-linear feature 'Hurst' was computed using Rescaled Range Statistics (RRS) and Finite Variance Scaling (FVS) methods. New Hurst features were proposed by combining the existing RRS and FVS methods with Higher Order Statistics (HOS). The features were then classified using four classifiers - Bayesian Classifier, Regression Tree, K- nearest neighbor and Fuzzy K-nearest neighbor. Seventy percent of the features were used for training and thirty percent for testing the algorithm.\n\n\nRESULTS\nAnalysis of Variance (ANOVA) conveyed that Hurst and the proposed features were statistically significant (p < 0.001). Hurst computed using RRS and FVS methods showed similar classification accuracy. 
The features obtained by combining FVS and HOS performed better with a maximum accuracy of 92.87% and 76.45% for classifying the six emotional states using random and subject independent validation respectively.\n\n\nCONCLUSIONS\nThe results indicate that the combination of non-linear analysis and HOS tend to capture the finer emotional changes that can be seen in healthy ECG data. This work can be further fine tuned to develop a real time system.", "title": "" }, { "docid": "fe05cc4e31effca11e2718ce05635a97", "text": "In security-sensitive applications, the success of machine learning depends on a thorough vetting of their resistance to adversarial data. In one pertinent, well-motivated attack scenario, an adversary may attempt to evade a deployed system at test time by carefully manipulating attack samples. In this work, we present a simple but effective gradientbased approach that can be exploited to systematically assess the security of several, widely-used classification algorithms against evasion attacks. Following a recently proposed framework for security evaluation, we simulate attack scenarios that exhibit different risk levels for the classifier by increasing the attacker’s knowledge of the system and her ability to manipulate attack samples. This gives the classifier designer a better picture of the classifier performance under evasion attacks, and allows him to perform a more informed model selection (or parameter setting). We evaluate our approach on the relevant security task of malware detection in PDF files, and show that such systems can be easily evaded. We also sketch some countermeasures suggested by our analysis.", "title": "" }, { "docid": "63b283d40abcccd17b4771535ac000e4", "text": "Developing agents to engage in complex goaloriented dialogues is challenging partly because the main learning signals are very sparse in long conversations. In this paper, we propose a divide-and-conquer approach that discovers and exploits the hidden structure of the task to enable efficient policy learning. First, given successful example dialogues, we propose the Subgoal Discovery Network (SDN) to divide a complex goal-oriented task into a set of simpler subgoals in an unsupervised fashion. We then use these subgoals to learn a multi-level policy by hierarchical reinforcement learning. We demonstrate our method by building a dialogue agent for the composite task of travel planning. Experiments with simulated and real users show that our approach performs competitively against a state-of-theart method that requires human-defined subgoals. Moreover, we show that the learned subgoals are often human comprehensible.", "title": "" }, { "docid": "f53ef680a4a6717d40b536e74a8e3c95", "text": "In today's fully-integrated converters, the integrated LC components dominate the chip-area and have become the major limitation of reducing the cost and increasing the current density. This paper presents a 100 MHz four-phase fully-integrated buck converter with standard package bondwire inductors and a flying capacitor (CFLY) topology for chip-area reduction, occupying 1.25 mm2 effective area in 0.13-μm CMOS technology. A four-phase operation is introduced for chip-area reduction with the cost penalty minimized by utilizing standard package bondwire inductance as power inductors. Meanwhile, an extra more than 40% chip-area saving is achieved by the simple but effective CFLY topology to take advantage of the parasitic bondwire inductance at the input for ripple attenuation. 
A maximum output current of 1.2 A is obtained by the four-phase operation, while only 3.73 nF overall integrated capacitors are required. Also, with the chip-area hungry integrated spiral metal inductors eliminated, the current density is significantly increased. 0.96 A/mm2 current density and 82.4% efficiency is obtained with 1.2 V to 0.9 V voltage conversion without using any off-chip inductors or advanced processes. The reliability is also verified by measurement with various bondwire inductances and configurations.", "title": "" }, { "docid": "d3382f092125fd5fb33469811ca7b650", "text": "Mobile networks enable users to post on social media services (e.g., Twitter) from anywhere. The activities of mobile users involve three major entities: user, post, and location. The interaction of these entities is the key to answer questions such as who will post a message where and on what topic? In this paper, we address the problem of profiling mobile users by modeling their activities, i.e., we explore topic modeling considering the spatial and textual aspects of user posts, and predict future user locations. We propose the first ST (Spatial Topic) model to capture the correlation between users' movements and between user interests and the function of locations. We employ the sparse coding technique which greatly speeds up the learning process. We perform experiments on two real life data sets from Twitter and Yelp. Through comprehensive experiments, we demonstrate that our proposed model consistently improves the average precision@1,5,10,15,20 for location recommendation by at least 50% (Twitter) and 300% (Yelp) against existing state-of-the-art recommendation algorithms and geographical topic models.", "title": "" }, { "docid": "ad68a9ecf4ba36ec924ec22afaafd9f3", "text": "The convergence rate and final performance of common deep learning models have significantly benefited from heuristics such as learning rate schedules, knowledge distillation, skip connections, and normalization layers. In the absence of theoretical underpinnings, controlled experiments aimed at explaining these strategies can aid our understanding of deep learning landscapes and the training dynamics. Existing approaches for empirical analysis rely on tools of linear interpolation and visualizations with dimensionality reduction, each with their limitations. Instead, we revisit such analysis of heuristics through the lens of recently proposed methods for loss surface and representation analysis, viz., mode connectivity and canonical correlation analysis (CCA), and hypothesize reasons for the success of the heuristics. In particular, we explore knowledge distillation and learning rate heuristics of (cosine) restarts and warmup using mode connectivity and CCA. Our empirical analysis suggests that: (a) the reasons often quoted for the success of cosine annealing are not evidenced in practice; (b) that the effect of learning rate warmup is to prevent the deeper layers from creating training instability; and (c) that the latent knowledge shared by the teacher is primarily disbursed to the deeper layers.", "title": "" }, { "docid": "2dbd0d931c4c35fb4a7f24495b099fc9", "text": "This paper presents a number of new algorithms for discovering the Markov Blanket of a target variable T from training data. The Markov Blanket can be used for variable selection for classification, for causal discovery, and for Bayesian Network learning. 
We introduce a low-order polynomial algorithm and several variants that soundly induce the Markov Blanket under certain broad conditions in datasets with thousands of variables and compare them to other state-of-the-art local and global methods with excel-", "title": "" }, { "docid": "b6d6da15fd000be1a01d4b0f1bb0d087", "text": "Purpose – The purpose of the paper is to distinguish features of m-commerce from those of e-commerce and identify factors to influence customer satisfaction (m-satisfaction) and loyalty (m-loyalty) in m-commerce by empirically-based case study. Design/methodology/approach – First, based on previous literature, the paper builds sets of customer satisfaction factors for both e-commerce and m-commerce. Second, features of m-commerce are identified by comparing it with current e-commerce through decision tree (DT). Third, with the derived factors from DT, significant factors and relationships among the factors, m-satisfaction and m-loyalty are examined by m-satisfaction model employing structural equation model. Findings – The paper finds that m-commerce is partially similar in factors like “transaction process” and “customization” which lead customer satisfaction after connecting an m-commerce site, but it has unique aspects of “content reliability”, “availability”, and “perceived price level of mobile Internet (m-Internet)” which build customer’s intention to the m-commerce site. Through the m-satisfaction model, “content reliability”, and “transaction process” are proven to be significantly influential factors to m-satisfaction and m-loyalty. Research implications/limitations – The paper can be a meaningful step to provide empirical analysis and evaluation based on questionnaire survey targeting actual users. The research is based on a case study on digital music transaction, which is indicative, rather than general. Practical implications – The paper meets the needs to focus on customer under the fiercer competition in Korean m-commerce market. It can guide those who want to initiate, move or broaden their business to m-commerce from e-commerce. Originality/value – The paper develops a revised ACSI model to identify individual critical factors and the degree of effect.", "title": "" }, { "docid": "20daad42c2587043562f3864f9e888c2", "text": "In recent years, deep neural network approaches have naturally extended to the video domain, in their simplest case by aggregating per-frame classifications as a baseline for action recognition. A majority of the work in this area extends from the imaging domain, leading to visual-feature heavy approaches on temporal data. To address this issue we introduce “Let’s Dance”, a 1000 video dataset (and growing) comprised of 10 visually overlapping dance categories that require motion for their classification. We stress the important of human motion as a key distinguisher in our work given that, as we show in this work, visual information is not sufficient to classify motion-heavy categories. We compare our datasets’ performance using imaging techniques with UCF-101 and demonstrate this inherent difficulty. We present a comparison of numerous state-of-theart techniques on our dataset using three different representations (video, optical flow and multi-person pose data) in order to analyze these approaches. We discuss the motion parameterization of each of them and their value in learning to categorize online dance videos. Lastly, we release this dataset (and its three representations) for the research community to use.", "title": "" } ]
scidocsrr
83db7f0bb6967791152e4a39edd07e27
Integrating Both Visual and Audio Cues for Enhanced Video Caption
[ { "docid": "6af09f57f2fcced0117dca9051917a0d", "text": "We present a novel per-dimension learning rate method for gradient descent called ADADELTA. The method dynamically adapts over time using only first order information and has minimal computational overhead beyond vanilla stochastic gradient descent. The method requires no manual tuning of a learning rate and appears robust to noisy gradient information, different model architecture choices, various data modalities and selection of hyperparameters. We show promising results compared to other methods on the MNIST digit classification task using a single machine and on a large scale voice dataset in a distributed cluster environment.", "title": "" }, { "docid": "4f58d355a60eb61b1c2ee71a457cf5fe", "text": "Real-world videos often have complex dynamics, methods for generating open-domain video descriptions should be sensitive to temporal structure and allow both input (sequence of frames) and output (sequence of words) of variable length. To approach this problem we propose a novel end-to-end sequence-to-sequence model to generate captions for videos. For this we exploit recurrent neural networks, specifically LSTMs, which have demonstrated state-of-the-art performance in image caption generation. Our LSTM model is trained on video-sentence pairs and learns to associate a sequence of video frames to a sequence of words in order to generate a description of the event in the video clip. Our model naturally is able to learn the temporal structure of the sequence of frames as well as the sequence model of the generated sentences, i.e. a language model. We evaluate several variants of our model that exploit different visual features on a standard set of YouTube videos and two movie description datasets (M-VAD and MPII-MD).", "title": "" }, { "docid": "1e4cf4cce07a24916e99c43aa779ac54", "text": "Video captioning which automatically translates video clips into natural language sentences is a very important task in computer vision. By virtue of recent deep learning technologies, video captioning has made great progress. However, learning an effective mapping from the visual sequence space to the language space is still a challenging problem due to the long-term multimodal dependency modelling and semantic misalignment. Inspired by the facts that memory modelling poses potential advantages to longterm sequential problems [35] and working memory is the key factor of visual attention [33], we propose a Multimodal Memory Model (M) to describe videos, which builds a visual and textual shared memory to model the longterm visual-textual dependency and further guide visual attention on described visual targets to solve visual-textual alignments. Specifically, similar to [10], the proposed M attaches an external memory to store and retrieve both visual and textual contents by interacting with video and sentence with multiple read and write operations. To evaluate the proposed model, we perform experiments on two public datasets: MSVD and MSR-VTT. The experimental results demonstrate that our method outperforms most of the stateof-the-art methods in terms of BLEU and METEOR.", "title": "" } ]
[ { "docid": "602c176fc4150543f443f0891161b1bb", "text": "In the wake of a polarizing election, the cyber world is laden with hate speech. Context accompanying a hate speech text is useful for identifying hate speech, which however has been largely overlooked in existing datasets and hate speech detection models. In this paper, we provide an annotated corpus of hate speech with context information well kept. Then we propose two types of hate speech detection models that incorporate context information, a logistic regression model with context features and a neural network model with learning components for context. Our evaluation shows that both models outperform a strong baseline by around 3% to 4% in F1 score and combining these two models further improve the performance by another 7% in F1 score.", "title": "" }, { "docid": "3f21c1bb9302d29bc2c816aaabf2e613", "text": "BACKGROUND\nPlasma brain natriuretic peptide (BNP) level increases in proportion to the degree of right ventricular dysfunction in pulmonary hypertension. We sought to assess the prognostic significance of plasma BNP in patients with primary pulmonary hypertension (PPH).\n\n\nMETHODS AND RESULTS\nPlasma BNP was measured in 60 patients with PPH at diagnostic catheterization, together with atrial natriuretic peptide, norepinephrine, and epinephrine. Measurements were repeated in 53 patients after a mean follow-up period of 3 months. Forty-nine of the patients received intravenous or oral prostacyclin. During a mean follow-up period of 24 months, 18 patients died of cardiopulmonary causes. According to multivariate analysis, baseline plasma BNP was an independent predictor of mortality. Patients with a supramedian level of baseline BNP (>/=150 pg/mL) had a significantly lower survival rate than those with an inframedian level, according to Kaplan-Meier survival curves (P<0.05). Plasma BNP in survivors decreased significantly during the follow-up (217+/-38 to 149+/-30 pg/mL, P<0. 05), whereas that in nonsurvivors increased (365+/-77 to 544+/-68 pg/mL, P<0.05). Thus, survival was strikingly worse for patients with a supramedian value of follow-up BNP (>/=180 pg/mL) than for those with an inframedian value (P<0.0001).\n\n\nCONCLUSIONS\nA high level of plasma BNP, and in particular, a further increase in plasma BNP during follow-up, may have a strong, independent association with increased mortality rates in patients with PPH.", "title": "" }, { "docid": "37a7cd907529af8e5b384a6d73ea5be2", "text": "This paper presents a flexible FPGA architecture evaluation framework, named fpgaEVA-LP, for power efficiency analysis of LUT-based FPGA architectures. Our work has several contributions: (i) We develop a mixed-level FPGA power model that combines switch-level models for interconnects and macromodels for LUTs; (ii) We develop a tool that automatically generates a back-annotated gate-level netlist with post-layout extracted capacitances and delays; (iii) We develop a cycle-accurate power simulator based on our power model. It carries out gate-level simulation under real delay model and is able to capture glitch power; (iv) Using the framework fpgaEVA-LP, we study the power efficiency of FPGAs, in 0.10um technology, under various settings of architecture parameters such as LUT sizes, cluster sizes and wire segmentation schemes and reach several important conclusions. 
We also present the detailed power consumption distribution among different FPGA components and shed light on the potential opportunities of power optimization for future FPGA designs (e.g., ≤: 0.10um technology).", "title": "" }, { "docid": "1ca39ff80d1595ed4c9d8b1e04bc25be", "text": "BACKGROUND\nAeromonas species are common inhabitants of aquatic environments giving rise to infections in both fish and humans. Identification of aeromonads to the species level is problematic and complex due to their phenotypic and genotypic heterogeneity.\n\n\nMETHODOLOGY/PRINCIPAL FINDINGS\nAeromonas hydrophila or Aeromonas sp were genetically re-identified using a combination of previously published methods targeting GCAT, 16S rDNA and rpoD genes. Characterization based on the genus specific GCAT-PCR showed that 94 (96%) of the 98 strains belonged to the genus Aeromonas. Considering the patterns obtained for the 94 isolates with the 16S rDNA-RFLP identification method, 3 clusters were recognised, i.e. A. caviae (61%), A. hydrophila (17%) and an unknown group (22%) with atypical RFLP restriction patterns. However, the phylogenetic tree constructed with the obtained rpoD sequences showed that 47 strains (50%) clustered with the sequence of the type strain of A. aquariorum, 18 (19%) with A. caviae, 16 (17%) with A. hydrophila, 12 (13%) with A. veronii and one strain (1%) with the type strain of A. trota. PCR investigation revealed the presence of 10 virulence genes in the 94 isolates as: lip (91%), exu (87%), ela (86%), alt (79%), ser (77%), fla (74%), aer (72%), act (43%), aexT (24%) and ast (23%).\n\n\nCONCLUSIONS/SIGNIFICANCE\nThis study emphasizes the importance of using more than one method for the correct identification of Aeromonas strains. The sequences of the rpoD gene enabled the unambiguous identication of the 94 Aeromonas isolates in accordance with results of other recent studies. Aeromonas aquariorum showed to be the most prevalent species (50%) containing an important subset of virulence genes lip/alt/ser/fla/aer. Different combinations of the virulence genes present in the isolates indicate their probable role in the pathogenesis of Aeromonas infections.", "title": "" }, { "docid": "a1103e64fcbfdee936b8529d0b425e8d", "text": "A crucial criterion for the dimensioning of three phase PWM converters is the cooling of the power semiconductors and thus determination of power dissipation in the semiconductors at certain operating points and its maximum. Methods for the calculation and simulation of semiconductor losses in the most common voltage source and current source three phase PWM converters are well known. Here a complete analytical calculation of the power semiconductor losses for both converter types is presented, most parts are already known, some parts are developed here, as far as the authors know. Conduction losses as well as switching losses are included in the calculation using a simplified model, based on power semiconductor data sheet information. This approach should benefit the prediction and further investigations of the performance of power semiconductor losses for both kinds of converters. Results of the calculation are shown. 
Dependencies of the semiconductor power losses on the type of converter, the operating point and the pulse width modulation are pointed out, showing the general behaviour of power losses for both converter types.", "title": "" }, { "docid": "d563b025b084b53c30afba4211870f2d", "text": "Collaborative filtering (CF) techniques recommend items to users based on their historical ratings. In real-world scenarios, user interests may drift over time since they are affected by moods, contexts, and pop culture trends. This leads to the fact that a user’s historical ratings comprise many aspects of user interests spanning a long time period. However, at a certain time slice, one user’s interest may only focus on one or a couple of aspects. Thus, CF techniques based on the entire historical ratings may recommend inappropriate items. In this paper, we consider modeling user-interest drift over time based on the assumption that each user has multiple counterparts over temporal domains and successive counterparts are closely related. We adopt the cross-domain CF framework to share the static group-level rating matrix across temporal domains, and let user-interest distribution over item groups drift slightly between successive temporal domains. The derived method is based on a Bayesian latent factor model which can be inferred using Gibbs sampling. Our experimental results show that our method can achieve state-of-the-art recommendation performance as well as explicitly track and visualize user-interest drift over time.", "title": "" }, { "docid": "e767659e0d8a778dacda0f6642a3d292", "text": "Abstract-We present a new self-organizing neural network model that has two variants. The first variant performs unsupervised learning and can be used for data visualization, clustering, and vector quantization. The main advantage over existing approaches (e.g., the Kohonen feature map) is the ability of the model to automatically find a suitable network structure and size. This is achieved through a controlled growth process that also includes occasional removal of units. The second variant of the model is a supervised learning method that results from the combination of the above-mentioned self-organizing network with the radial basis function (RBF) approach. In this model it is possible--in contrast to earlier approaches--to perform the positioning of the RBF units and the supervised training of the weights in parallel. Therefore, the current classification error can be used to determine where to insert new RBF units. This leads to small networks that generalize very well. Results on the two-spirals benchmark and a vowel classification problem are presented that are better than any results previously published.", "title": "" }, { "docid": "04261fd45725155422f574a7f94f570b", "text": "Bitcoin’s core innovation is its solution to double-spending, called Nakamoto consensus. This provides a probabilistic guarantee that transactions will not be reversed or redirected, provided that it is improbable for an attacker to obtain a majority of mining power in the network. While this may be true in the traditional sense, this assumption becomes tenuous when miners are assumed to be rational and hence venal. Accordingly, we present the whale attack, in which a minority attacker increases her chances of double-spending by incentivizing miners to subvert the consensus protocol and to collude via whale transactions, or transactions carrying anomalously large fees. 
We analyze the expected cost to carry out the attack, and simulate the attack under realistic system parameters. Our results show that double-spend attacks, conventionally thought to be impractical for minority attackers, can actually be financially feasible and worthwhile under the whale attack. Perhaps more importantly, this work demonstrates that rationality should not underestimated when evaluating Bitcoin’s security.", "title": "" }, { "docid": "df808fcf51612bf81e8fd328d298291d", "text": "Chemomechanical preparation of the root canal includes both mechanical instrumentation and antibacterial irrigation, and is principally directed toward the elimination of micro-organisms from the root canal system. A variety of instruments and techniques have been developed and described for this critical stage of root canal treatment. Since their introduction in 1988, nickel-titanium (NiTi) rotary instruments have become a mainstay in clinical endodontics because of their exceptional ability to shape root canals with potentially fewer procedural complications. Safe clinical usage of NiTi instruments requires an understanding of basic metallurgy of the alloy including fracture mechanisms and their correlation to canal anatomy. This paper reviews the biologic principles of preparing root canals with an emphasis on correct use of current rotary NiTi instrumentation techniques and systems. The role and properties of contemporary root canal irrigants is also discussed.", "title": "" }, { "docid": "ab885307b119bc6aa62a9936e8b75d29", "text": "Grayscale image colorization is an important computer graphics problem with a variety of applications. Recent fully automatic colorization methods have made impressive progress by formulating image colorization as a pixel-wise prediction task and utilizing deep convolutional neural networks. Though tremendous improvements have been made, the result of automatic colorization is still far from perfect. Specifically, there still exist common pitfalls in maintaining color consistency in homogeneous regions as well as precisely distinguishing colors near region boundaries. To tackle these problems, we propose a novel fully automatic colorization pipeline which involves a boundary-guided CRF (conditional random field) and a CNN-based color transform as post-processing steps. In addition, as there usually exist multiple plausible colorization proposals for a single image, automatic evaluation for different colorization methods remains a challenging task. We further introduce two novel automatic evaluation schemes to efficiently assess colorization quality in terms of spatial coherence and localization. Comprehensive experiments demonstrate great quality improvement in results of our proposed colorization method under multiple evaluation metrics.", "title": "" }, { "docid": "2b677a052846d4f52f7b6a1eac94114d", "text": "This paper presents a unifying view of messagepassing algorithms, as methods to approximate a complex Bayesian network by a simpler network with minimum information divergence. In this view, the difference between mean-field methods and belief propagation is not the amount of structure they model, but only the measure of loss they minimize (‘exclusive’ versus ‘inclusive’ Kullback-Leibler divergence). In each case, message-passing arises by minimizing a localized version of the divergence, local to each factor. 
By examining these divergence measures, we can intuit the types of solution they prefer (symmetry-breaking, for example) and their suitability for different tasks. Furthermore, by considering a wider variety of divergence measures (such as alpha-divergences), we can achieve different complexity and performance goals.", "title": "" }, { "docid": "42dcf0c4b5927323c84f1bc1ff9511ec", "text": "In this work, we study two first-order primal-dual based algorithms, the Gradient Primal-Dual Algorithm (GPDA) and the Gradient Alternating Direction Method of Multipliers (GADMM), for solving a class of linearly constrained non-convex optimization problems. We show that with random initialization of the primal and dual variables, both algorithms are able to compute second-order stationary solutions (ss2) with probability one. This is the first result showing that primal-dual algorithm is capable of finding ss2 when only using first-order information; it also extends the existing results for first-order, but primal-only algorithms. An important implication of our result is that it also gives rise to the first global convergence result to the ss2, for two classes of unconstrained distributed non-convex learning problems over multi-agent networks.", "title": "" }, { "docid": "8165132bed6f74274c7a9aa3ba91767b", "text": "Pattern detection over streams of events is gaining more and more attention, especially in the field of eCommerce. Our industrial partner Cdiscount, which is one of the largest eCommerce companies in France, wants to use pattern detection for real-time customer behavior analysis. The main challenges to consider are efficiency and scalability, as the detection of customer behavior must be achieved within a few seconds, while millions of unique customers visit the website every day, each performing hundreds of actions. In this paper, we present our approach to large-scale and efficient pattern detection for eCommerce. It relies on a domain-specific language to define behavior patterns. Patterns are then compiled into deterministic finite automata, which are run on a Big Data streaming platform to carry out the detection work. Our evaluation shows that our approach is efficient and scalable, and fits the requirements of Cdiscount.", "title": "" }, { "docid": "926b7a9f0214c36b30c10f00a337c836", "text": "The Virtual Storyteller is a multi-agent framework that generates stories based on a concept called emergent narrative. In this paper, we describe the motivation and approach of the Virtual Storyteller, and give an overview of the computational processes involved in the story generation process. We also discuss some of the challenges posed by our chosen approach.", "title": "" }, { "docid": "a163d22ae7ef1e775e92f95476c6711e", "text": "With fast development and wide applications of next-generation sequencing (NGS) technologies, genomic sequence information is within reach to aid the achievement of goals to decode life mysteries, make better crops, detect pathogens, and improve life qualities. NGS systems are typically represented by SOLiD/Ion Torrent PGM from Life Sciences, Genome Analyzer/HiSeq 2000/MiSeq from Illumina, and GS FLX Titanium/GS Junior from Roche. Beijing Genomics Institute (BGI), which possesses the world's biggest sequencing capacity, has multiple NGS systems including 137 HiSeq 2000, 27 SOLiD, one Ion Torrent PGM, one MiSeq, and one 454 sequencer. We have accumulated extensive experience in sample handling, sequencing, and bioinformatics analysis. 
In this paper, technologies of these systems are reviewed, and first-hand data from extensive experience is summarized and analyzed to discuss the advantages and specifics associated with each sequencing system. At last, applications of NGS are summarized.", "title": "" }, { "docid": "20934b5544b2b7fac979d0df8cda074b", "text": "OBJECTIVES\nWe assessed whether a 2-phase labeling and choice architecture intervention would increase sales of healthy food and beverages in a large hospital cafeteria.\n\n\nMETHODS\nPhase 1 was a 3-month color-coded labeling intervention (red = unhealthy, yellow = less healthy, green = healthy). Phase 2 added a 3-month choice architecture intervention that increased the visibility and convenience of some green items. We compared relative changes in 3-month sales from baseline to phase 1 and from phase 1 to phase 2.\n\n\nRESULTS\nAt baseline (977,793 items, including 199,513 beverages), 24.9% of sales were red and 42.2% were green. Sales of red items decreased in both phases (P < .001), and green items increased in phase 1 (P < .001). The largest changes occurred among beverages. Red beverages decreased 16.5% during phase 1 (P < .001) and further decreased 11.4% in phase 2 (P < .001). Green beverages increased 9.6% in phase 1 (P < .001) and further increased 4.0% in phase 2 (P < .001). Bottled water increased 25.8% during phase 2 (P < .001) but did not increase at 2 on-site comparison cafeterias (P < .001).\n\n\nCONCLUSIONS\nA color-coded labeling intervention improved sales of healthy items and was enhanced by a choice architecture intervention.", "title": "" }, { "docid": "b3068a1b1acb0782d2c2b1dac65042cf", "text": "Measurement of N (nitrogen), P (phosphorus) and K ( potassium) contents of soil is necessary to decide how much extra contents of these nutrients are to b e added in the soil to increase crop fertility. Thi s improves the quality of the soil which in turn yields a good qua lity crop. In the present work fiber optic based c olor sensor has been developed to determine N, P, and K values in t he soil sample. Here colorimetric measurement of aq ueous solution of soil has been carried out. The color se nsor is based on the principle of absorption of col or by solution. It helps in determining the N, P, K amounts as high, m edium, low, or none. The sensor probe along with p roper signal conditioning circuits is built to detect the defici ent component of the soil. It is useful in dispensi ng only required amount of fertilizers in the soil.", "title": "" }, { "docid": "75c1e0d8af5ff854bc4c42e2cb1e2256", "text": "FADES is an edge offloading architecture that empowers us to run compact, single purpose tasks at the edge of the network to support a variety of IoT and cloud services. The design principle behind FADES is to efficiently exploit the resources of constrained edge devices through fine-grained computation offloading. FADES takes advantage of MirageOS unikernels to isolate and embed application logic in concise Xen-bootable images. We have implemented FADES and evaluated the system performance under various hardware and network conditions. Our results show that FADES can effectively strike a balance between running complex applications in the cloud and simple operations at the edge. 
As a solid step to enable fine-grained edge offloading, our experiments also reveal the limitation of existing IoT hardware and virtualization platforms, which shed light on future research to bring unikernel into IoT domain.", "title": "" }, { "docid": "143111f5fe59b99279d71cf70c588fe2", "text": "In neural architecture search (NAS), the space of neural network architectures is automatically explored to maximize predictive accuracy for a given task. Despite the success of recent approaches, most existing methods cannot be directly applied to large scale problems because of their prohibitive computational complexity or high memory usage. In this work, we propose a Probabilistic approach to neural ARchitecture SEarCh (PARSEC) that drastically reduces memory requirements while maintaining state-of-the-art computational complexity, making it possible to directly search over more complex architectures and larger datasets. Our approach only requires as much memory as is needed to train a single architecture from our search space. This is due to a memory-efficient sampling procedure wherein we learn a probability distribution over high-performing neural network architectures. Importantly, this framework enables us to transfer the distribution of architectures learnt on smaller problems to larger ones, further reducing the computational cost. We showcase the advantages of our approach in applications to CIFAR-10 and ImageNet, where our approach outperforms methods with double its computational cost and matches the performance of methods with costs that are three orders of magnitude larger.", "title": "" }, { "docid": "dc5f111bfe7fa27ae7e9a4a5ba897b51", "text": "We propose AffordanceNet, a new deep learning approach to simultaneously detect multiple objects and their affordances from RGB images. Our AffordanceNet has two branches: an object detection branch to localize and classify the object, and an affordance detection branch to assign each pixel in the object to its most probable affordance label. The proposed framework employs three key components for effectively handling the multiclass problem in the affordance mask: a sequence of deconvolutional layers, a robust resizing strategy, and a multi-task loss function. The experimental results on the public datasets show that our AffordanceNet outperforms recent state-of-the-art methods by a fair margin, while its end-to-end architecture allows the inference at the speed of 150ms per image. This makes our AffordanceNet well suitable for real-time robotic applications. Furthermore, we demonstrate the effectiveness of AffordanceNet in different testing environments and in real robotic applications. The source code is available at https://github.com/nqanh/affordance-net.", "title": "" } ]
scidocsrr
69d2c2ae770577899075325d74491589
Evaluation of Decision Tree Pruning Algorithms for Complexity and Classification Accuracy
[ { "docid": "4e71e0a47b9201b725d3726b6536b730", "text": "Induced decision trees are an extensively-researched solution to classification tasks. For many practical tasks, the trees produced by tree-generation algorithms are not comprehensible to users due to their size and complexity. Although many tree induction algorithms have been shown to produce simpler, more comprehensible trees (or data structures derived from trees) with good classification accuracy, tree simplification has usually been of secondary concern relative to accuracy and no attempt has been made to survey the literature from the perspective of simplification. We present a framework that organizes the approaches to tree simplification and summarize and critique the approaches within this framework. The purpose of this survey is to provide researchers and practitioners with a concise overview of tree-simplification approaches and insight into their relative capabilities. In our final discussion, we briefly describe some empirical findings and discuss the application of tree induction algorithms to case retrieval in case-based reasoning systems. 1 Context and Motivation The area of machine learning concerned with inducing classifiers from data focuses primarily on predictive accuracy, as measured by the classifiers' performance on unseen test cases. However, for many practical applications, it is desirable that the classifier \"provide insight and understanding into the predictive structure of the data\" (Breiman et al. 1984), as well as explanations of its individual predictions (Michie 1990). Decision tree induction has been extensively studied in the machine learning and statistics communities as a solution to classification tasks (Breiman et al. 1984; Quinlan 1986; 1993a). Many tree-simplification algorithms have been shown to yield simpler or smaller trees. The assumption is made that simpler, smaller trees are easier for humans to comprehend. Although this assumption has not been tested empirically, it will serve as a working assumption in what follows. While considerable evidence exists that trees can be simplified, simplification is generally of secondary concern relative to predictive accuracy. No effort has been made to review the decision tree induction literature from the perspective of simplification. This paper is intended to fill this gap. A key problem in summarizing the literature on tree simplification is the diversity of approaches that have been introduced. To manage this diversity, we offer a framework for categorizing the approaches, consisting of five categories. Some of these categories are inspired by the view of tree induction as a heuristic state-space search in the space of possible trees. Approaches in the first category directly control tree size, either by …", "title": "" } ]
[ { "docid": "7e917da1d35497ac36334e63c9f6b39a", "text": "The Technical Note series provides an outlet for a variety of NCAR manuscripts that contribute in specialized ways to the body of scientific knowledge but which are not suitable for journal, monograph, or book publication. Reports in this series are issued by the NCAR Scientific Divisions ; copies may be obtained on request from the Publications Office of NCAR. Designation symbols for the series include: Any opinions, findings, conclusions, or recommendations expressed in this publication are those of the authors and do not necessarily reflect the views of the National Science Foundation.", "title": "" }, { "docid": "c90f67d8aabc24faf9f4fb15a4cfd5a2", "text": "The Internet of Things (IoT) will connect not only computers and mobile devices, but it will also interconnect smart buildings, houses, and cities, as well as electrical grids, gas plants, and water networks, automobiles, airplanes, etc. IoT will lead to the development of a wide range of advanced information services that are pervasive, cost-effective, and can be accessed from anywhere and at any time. However, due to the exponential number of interconnected devices, cyber-security in the IoT is a major challenge. It heavily relies on the digital identity concept to build security mechanisms such as authentication and authorization. Current centralized identity management systems are built around third party identity providers, which raise privacy concerns and present a single point of failure. In addition, IoT unconventional characteristics such as scalability, heterogeneity and mobility require new identity management systems to operate in distributed and trustless environments, and uniquely identify a particular device based on its intrinsic digital properties and its relation to its human owner. In order to deal with these challenges, we present a Blockchain-based Identity Framework for IoT (BIFIT). We show how to apply our BIFIT to IoT smart homes to achieve identity self-management by end users. In the context of smart home, the framework autonomously extracts appliances signatures and creates blockchain-based identifies for their appliance owners. It also correlates appliances signatures (low level identities) and owners identifies in order to use them in authentication credentials and to make sure that any IoT entity is behaving normally.", "title": "" }, { "docid": "817c64e272a744c00b46d2a98828dacb", "text": "Depression is highly prevalent in children and adolescents. Psychodynamic therapies are only insufficiently evaluated in this field although many children and adolescents suffering from depression are treated using this approach. Therefore, the aim of our study was to evaluate the efficacy of psychodynamic short-term psychotherapy (PSTP) for the treatment of depression in children and adolescents. In a waiting-list controlled study, 20 children and adolescents fulfilling diagnosis of major depression or dysthymia were included. The treatment group received 25 sessions of psychodynamic psychotherapy. Main outcome criterion was the Impairment-Score for Children and Adolescents (IS-CA) as well as the Psychic and Social-Communicative Findings Sheet for Children and Adolescents (PSCFS-CA) and the Child Behavior Checklist (CBCL), which were assessed at the beginning and the end of treatment. The statistical and clinical significance of changes in these measures were evaluated. 
There was a significant advantage of the treatment group compared to the waiting group for the IS-CA. The effect size of the IS-CA total score was 1,3. In contrast to the treatment group, where 20% of the children showed clinically significant and reliable improvement, no subject in the waiting-list control group met this criterion. Comparable results were found for the PSCFS-CA and for the internalising score assessed with the CBCL. The results show that psychodynamic short-term psychotherapy (PSTP) is an effective treatment for depressed children and adolescents. Still, some of the children surely require more intensive treatment.", "title": "" }, { "docid": "38a8f82247775ea51a31ea0a6e51f126", "text": "Materials with variable stiffness have the potential to provide a range of new functionalities, including system reconfiguration by tuning the location of rigid links and joints. In particular, wearable applications would benefit from variable stiffness materials in the context of active braces that may stiffen when necessary and soften when mobility is required. In this work, we present fibers capable of adjusting to provide variable stiffness in wearable fabrics. The variable stiffness fibers are made from shape memory materials, where shape memory alloy (SMA) is coated with a thin film of shape memory polymer (SMP). The fibers, which are fabricated via a continuous feed-through process, reduce in bending stiffness by an order of magnitude when the SMP goes through the glass transition. The transition between rubbery and glassy state is accomplished by direct joule heating of the embedded SMA wire. We employ a COMSOL model to relate the current input to the time required for the fibers to transition between stiffness states. Finally, we demonstrate how this device can be worn and act as a joint stability brace on human fingers.", "title": "" }, { "docid": "3e335d336d3c9bce4dbdf24402b8eb17", "text": "Unlike traditional database management systems which are organized around a single data model, a multi-model database (MMDB) utilizes a single, integrated back-end to support multiple data models, such as document, graph, relational, and key-value. As more and more platforms are proposed to deal with multi-model data, it becomes crucial to establish a benchmark for evaluating the performance and usability of MMDBs. Previous benchmarks, however, are inadequate for such scenario because they lack a comprehensive consideration for multiple models of data. In this paper, we present a benchmark, called UniBench, with the goal of facilitating a holistic and rigorous evaluation of MMDBs. UniBench consists of a mixed data model, a synthetic multi-model data generator, and a set of core workloads. Specifically, the data model simulates an emerging application: Social Commerce, a Web-based application combining E-commerce and social media. The data generator provides diverse data format including JSON, XML, key-value, tabular, and graph. The workloads are comprised of a set of multi-model queries and transactions, aiming to cover essential aspects of multi-model data management. We implemented all workloads on ArangoDB and OrientDB to illustrate the feasibility of our proposed benchmarking system and show the learned lessons through the evaluation of these two multi-model databases. 
The source code and data of this benchmark can be downloaded at http://udbms.cs.helsinki.fi/bench/.", "title": "" }, { "docid": "e364db9141c85b1f260eb3a9c1d42c5b", "text": "Ten US presidential elections ago in Chapel Hill, North Carolina, the agenda of issues that a small group of undecided voters regarded as the most important ones of the day was compared with the news coverage of public issues in the news media these voters used to follow the campaign (McCombs and Shaw, 1972). Since that election, the principal finding in Chapel Hill*/those aspects of public affairs that are prominent in the news become prominent among the public*/has been replicated in hundreds of studies worldwide. These replications include both election and non-election settings for a broad range of public issues and other aspects of political communication and extend beyond the United States to Europe, Asia, Latin America and Australia. Recently, as the news media have expanded to include online newspapers available on the Web, agenda-setting effects have been documented for these new media. All in all, this research has grown far beyond its original domain*/the transfer of salience from the media agenda to the public agenda*/and now encompasses five distinct stages of theoretical attention. Until very recently, the ideas and findings that detail these five stages of agenda-setting theory have been scattered in a wide variety of research journals, book chapters and books published in many different countries. As a result, knowledge of agenda setting has been very unevenly distributed. Scholars designing new studies often had incomplete knowledge of previous research, and graduate students entering the field of mass communication had difficulty learning in detail what we know about the agenda-setting role of the mass media. This situation was my incentive to write Setting the Agenda: the mass media and public opinion, which was published in England in late 2004 and in the United States early in 2005. My primary goal was to gather the principal ideas and empirical findings about agenda setting in one place. John Pavlik has described this integrated presentation as the Gray’s Anatomy of agenda setting (McCombs, 2004, p. xii). Shortly after the US publication of Setting the Agenda , I received an invitation from Journalism Studies to prepare an overview of agenda setting. The timing was wonderfully fortuitous because a book-length presentation of what we have learned in the years since Chapel Hill could be coupled with a detailed discussion in a major journal of current trends and future likely directions in agenda-setting research. Journals are the best venue for advancing the stepby-step accretion of knowledge because they typically reach larger audiences than books, generate more widespread discussion and offer more space for the focused presentation of a particular aspect of a research area. Books can then periodically distill this knowledge. Given the availability of a detailed overview in Setting the Agenda , the presentation here of the five stages of agenda-setting theory emphasizes current and near-future research questions in these areas. Moving beyond these specific Journalism Studies, Volume 6, Number 4, 2005, pp. 
543 557", "title": "" }, { "docid": "09f2f2184cb064851238a10d1d661b9e", "text": "The rapid proliferation of information technologies especially the web 2.0 techniques have changed the fundamental ways how things can be done in many areas, including how researchers could communicate and collaborate with each other. The presence of the sheer volume of researcher and topical research information on the Web has led to the problem of information overload. There is a pressing need to develop researcher recommender systems such that users can be provided with personalized recommendations of the researchers they can potentially collaborate with for mutual research benefits. In an academic context, recommending suitable research partners to researchers can facilitate knowledge discovery and exchange, and ultimately improve the research productivity of both sides. Existing expertise recommendation research usually investigates into the expert finding problem from two independent dimensions, namely, the social relations and the common expertise. The main contribution of this paper is that we propose a novel researcher recommendation approach which combines the two dimensions of social relations and common expertise in a unified framework to improve the effectiveness of personalized researcher recommendation. Moreover, how our proposed framework can be applied to the real-world academic contexts is explained based on two case studies.", "title": "" }, { "docid": "7e33af6ec0924681d7d51373ca70b957", "text": "Total order broadcast is a fundamental communication primitive that plays a central role in bringing cheap software-based high availability to a wide range of services. This article studies the practical performance of such a primitive on a cluster of homogeneous machines.\n We present LCR, the first throughput optimal uniform total order broadcast protocol. LCR is based on a ring topology. It only relies on point-to-point inter-process communication and has a linear latency with respect to the number of processes. LCR is also fair in the sense that each process has an equal opportunity of having its messages delivered by all processes.\n We benchmark a C implementation of LCR against Spread and JGroups, two of the most widely used group communication packages. LCR provides higher throughput than the alternatives, over a large number of scenarios.", "title": "" }, { "docid": "bde9d5f80fd17155f1346ae88f221d77", "text": "Due to the sparseness of quality rating data, unsupervised recommender systems are used in many applications in Peer to Peer (P2P) rental marketplaces such as Airbnb, FlipKey, and HomeAway. We present an integer programming based recommender systems, where both accommodation benefits and community risks of lodging places are measured and incorporated into an objective function as utility measurements. More specifically, we first present an unsupervised fused scoring method for quantifying the accommodation benefits and community risks of a lodging with crowd-sourced geo-tagged data. In order to the utility of recommendations, we formulate the unsupervised P2P rental recommendations as a constrained integer programming problem, where the accommodation benefits of recommendations are maximized and the community risks of recommendations are minimized, while maintaining constraints on personalization. 
Furthermore, we provide an efficient solution for the optimization problem by developing a learning-to-integer-programming method for combining aggregated listwise learning to rank into branching variable selection. We apply the proposed approach to the Airbnb data of New York City and provide lodging recommendations to travelers. In our empirical experiments, we demonstrate both the efficiency and effectiveness of our method in terms of striving a trade-off between the user satisfaction, time on market, and the number of reviews, and achieving a balance between positive and negative sides.", "title": "" }, { "docid": "21a917abee792625539e7eabb3a81f4c", "text": "This paper investigates the power operation in information system development (ISD) processes. Due to the fact that key actors in different departments possess different professional knowledge, their different contexts lead to some employees supporting IS, while others resist it to achieve their goals. We aim to interpret these power operations in ISD from the theory of technological frames. This study is based on qualitative data collected from KaoKang (pseudonym), a port authority in Taiwan. We attempt to understand the situations of different key actors (e.g. top manager, MIS professionals, employees of DP-1 division, consultants of KaoKang, and customers (outside users)) who wield power in ISD in different situations. In this respect, we interpret the data using a technological frame. Finally, we aim to gain fresh insight into power operation in ISD from this perspective.", "title": "" }, { "docid": "dd9edd37ff5f4cb332fcb8a0ef86323e", "text": "This paper proposes several nonlinear control strategies for trajectory tracking of a quadcopter system based on the property of differential flatness. Its originality is twofold. Firstly, it provides a flat output for the quadcopter dynamics capable of creating full flat parametrization of the states and inputs. Moreover, B-splines characterizations of the flat output and their properties allow for optimal trajectory generation subject to way-point constraints. Secondly, several control strategies based on computed torque control and feedback linearization are presented and compared. The advantages of flatness within each control strategy are analyzed and detailed through extensive simulation results.", "title": "" }, { "docid": "a488509590cd496669bdcc3ce8cc5fe5", "text": "Ghrelin is an endogenous ligand for the growth hormone secretagogue receptor and a well-characterized food intake regulatory peptide. Hypothalamic ghrelin-, neuropeptide Y (NPY)-, and orexin-containing neurons form a feeding regulatory circuit. Orexins and NPY are also implicated in sleep-wake regulation. Sleep responses and motor activity after central administration of 0.2, 1, or 5 microg ghrelin in free-feeding rats as well as in feeding-restricted rats (1 microg dose) were determined. Food and water intake and behavioral responses after the light onset injection of saline or 1 microg ghrelin were also recorded. Light onset injection of ghrelin suppressed non-rapid-eye-movement sleep (NREMS) and rapid-eye-movement sleep (REMS) for 2 h. In the first hour, ghrelin induced increases in behavioral activity including feeding, exploring, and grooming and stimulated food and water intake. Ghrelin administration at dark onset also elicited NREMS and REMS suppression in hours 1 and 2, but the effect was not as marked as that, which occurred in the light period. 
In hours 3-12, a secondary NREMS increase was observed after some doses of ghrelin. In the feeding-restricted rats, ghrelin suppressed NREMS in hours 1 and 2 and REMS in hours 3-12. Data are consistent with the notion that ghrelin has a role in the integration of feeding, metabolism, and sleep regulation.", "title": "" }, { "docid": "e97c0bbb74534a16c41b4a717eed87d5", "text": "This paper is discussing about the road accident severity survey using data mining, where different approaches have been considered. We have collected research work carried out by different researchers based on road accidents. Article describing the review work in context of road accident case’s using data mining approach. The article is consisting of collections of methods in different scenario with the aim to resolve the road accident. Every method is somewhere seeming to productive in some ways to decrease the no of causality. It will give a better edge to different country where the no of accidents is leading to fatality of life.", "title": "" }, { "docid": "dd38dfd7214b4baafa8ecdf72dc8ca6f", "text": "Bottom-Up (BU) saliency models do not perform well in complex interactive environments where humans are actively engaged in tasks (e.g., sandwich making and playing the video games). In this paper, we leverage Reinforcement Learning (RL) to highlight task-relevant locations of input frames. We propose a soft attention mechanism combined with the Deep Q-Network (DQN) model to teach an RL agent how to play a game and where to look by focusing on the most pertinent parts of its visual input. Our evaluations on several Atari 2600 games show that the soft attention based model could predict fixation locations significantly better than bottom-up models such as Itti-Kochs saliency and Graph-Based Visual Saliency (GBVS) models.", "title": "" }, { "docid": "6fc9388ecbd862e36789250e99fde23d", "text": "Short Term Tra c Forecasting: Modeling and Learning Spatio Temporal Relations in Transportation Networks Using Graph Neural Networks", "title": "" }, { "docid": "a37631e21e17b220976c6801f8b97ab1", "text": "We present an algorithm which learns an online trajectory generator that can generalize over varying and uncertain dynamics. When the dynamics is certain, the algorithm generalizes across model parameters. When the dynamics is partially observable, the algorithm generalizes across different observations. To do this, we employ recent advances in supervised imitation learning to learn a trajectory generator from a set of example trajectories computed by a trajectory optimizer. In experiments in two simulated domains, it finds solutions that are nearly as good as, and sometimes better than, those obtained by calling the trajectory optimizer on line. The online execution time is dramatically decreased, and the off-line training time is reasonable.", "title": "" }, { "docid": "c381fdacde35fce7c8b869d512364a4f", "text": "IoT (Internet of Things) diversifies the future Internet, and has drawn much attention. As more and more gadgets (i.e. Things) connected to the Internet, the huge amount of data exchanged has reached an unprecedented level. As sensitive and private information exchanged between things, privacy becomes a major concern. Among many important issues, scalability, transparency, and reliability are considered as new challenges that differentiate IoT from the conventional Internet. 
In this paper, we enumerate the IoT communication scenarios and investigate the threats to the large-scale, unreliable, pervasive computing environment. To cope with these new challenges, the conventional security architecture will be revisited. In particular, various authentication schemes will be evaluated to ensure the confidentiality and integrity of the exchanged data.", "title": "" }, { "docid": "3bc897662b39bcd59b7c7831fb1df091", "text": "The proliferation of wearable devices has contributed to the emergence of mobile crowdsensing, which leverages the power of the crowd to collect and report data to a third party for large-scale sensing and collaborative learning. However, since the third party may not be honest, privacy poses a major concern. In this paper, we address this concern with a two-stage privacy-preserving scheme called RG-RP: the first stage is designed to mitigate maximum a posteriori (MAP) estimation attacks by perturbing each participant's data through a nonlinear function called repeated Gompertz (RG); while the second stage aims to maintain accuracy and reduce transmission energy by projecting high-dimensional data to a lower dimension, using a row-orthogonal random projection (RP) matrix. The proposed RG-RP scheme delivers better recovery resistance to MAP estimation attacks than most state-of-the-art techniques on both synthetic and real-world datasets. For collaborative learning, we proposed a novel LSTM-CNN model combining the merits of Long Short-Term Memory (LSTM) and Convolutional Neural Networks (CNN). Our experiments on two representative movement datasets captured by wearable sensors demonstrate that the proposed LSTM-CNN model outperforms standalone LSTM, CNN and Deep Belief Network. Together, RG+RP and LSTM-CNN provide a privacy-preserving collaborative learning framework that is both accurate and privacy-preserving.", "title": "" }, { "docid": "640af69086854b79257cbdeb4668830b", "text": "Traditionally traffic safety was addressed by traffic awareness and passive safety measures like solid chassis, seat belts, air bags etc. With the recent breakthroughs in the domain of mobile ad hoc networks, the concept of vehicular ad hoc networks (VANET) was realised. Safety messaging is the most important aspect of VANETs, where the passive safety (accident readiness) in vehicles was reinforced with the idea of active safety (accident prevention). In safety messaging vehicles will message each other over wireless media, updating each other on traffic conditions and hazards. Security is an important aspect of safety messaging, that aims to prevent participants spreading wrong information in the network that are likely to cause mishaps. Equally important is the fact that secure communication protocols should satisfy the communication constraints of VANETs. VANETs are delay intolerant. Features like high speeds, large network size, constant mobility etc. induce certain limitations in the way messaging can be carried out in VANETs. This thesis studies the impact of total message size on VANET messaging system performance, and conducts an analysis of secure communication protocols to measure how they perform in a VANET messaging system.", "title": "" }, { "docid": "24a0f441ff09e7a60a1e22e2ca3f1194", "text": "As an important information portal, online healthcare forum are playing an increasingly crucial role in disseminating information and offering support to people. It connects people with the leading medical experts and others who have similar experiences. 
During an epidemic outbreak, such as H1N1, it is critical for the health department to understand how the public is responding to the ongoing pandemic, which has a great impact on the social stability. In this case, identifying influential users in the online healthcare forum and tracking the information spreading in such online community can be an effective way to understand the public reaction toward the disease. In this paper, we propose a framework to monitor and identify influential users from online healthcare forum. We first develop a mechanism to identify and construct social networks from the discussion board of an online healthcare forum. We propose the UserRank algorithm which combines link analysis and content analysis techniques to identify influential users. We have also conducted an experiment to evaluate our approach on the Swine Flu forum which is a sub-community of a popular online healthcare community, MedHelp (www.medhelp.org). Experimental results show that our technique outperforms PageRank, in-degree and out-degree centrality in identifying influential user from an online healthcare forum.", "title": "" } ]
scidocsrr
0dfc9d2e74acbdfb344f8d5f54c486ee
Exploiting LIDAR-based features on pedestrian detection in urban scenarios
[ { "docid": "6c76fcf20405c6826060821ac7c662e8", "text": "A perception system for pedestrian detection in urban scenarios using information from a LIDAR and a single camera is presented. Two sensor fusion architectures are described, a centralized and a decentralized one. In the former, the fusion process occurs at the feature level, i.e., features from LIDAR and vision spaces are combined in a single vector for posterior classification using a single classifier. In the latter, two classifiers are employed, one per sensor-feature space, which were offline selected based on information theory and fused by a trainable fusion method applied over the likelihoods provided by the component classifiers. The proposed schemes for sensor combination, and more specifically the trainable fusion method, lead to enhanced detection performance and, in addition, maintenance of false-alarms under tolerable values in comparison with singlebased classifiers. Experimental results highlight the performance and effectiveness of the proposed pedestrian detection system and the related sensor data combination strategies.", "title": "" } ]
[ { "docid": "b40a6bceb64524aa28cdd668d5dd5900", "text": "For computer vision applications, prior works have shown the efficacy of reducing numeric precision of model parameters (network weights) in deep neural networks. Activation maps, however, occupy a large memory footprint during both the training and inference step when using mini-batches of inputs. One way to reduce this large memory footprint is to reduce the precision of activations. However, past works have shown that reducing the precision of activations hurts model accuracy. We study schemes to train networks from scratch using reduced-precision activations without hurting accuracy. We reduce the precision of activation maps (along with model parameters) and increase the number of filter maps in a layer, and find that this scheme matches or surpasses the accuracy of the baseline full-precision network. As a result, one can significantly improve the execution efficiency (e.g. reduce dynamic memory footprint, memory bandwidth and computational energy) and speed up the training and inference process with appropriate hardware support. We call our scheme WRPN wide reduced-precision networks. We report results and show that WRPN scheme is better than previously reported accuracies on ILSVRC-12 dataset while being computationally less expensive compared to previously reported reduced-precision networks.", "title": "" }, { "docid": "fabcb231006693d766e0a6861c14e057", "text": "Recent location-based social networking sites are attractively providing us with a novel capability of monitoring massive crowd lifelogs in the real-world space. In particular, they make it easier to collect publicly shared crowd lifelogs in a large scale of geographic area reflecting the crowd’s daily lives and even more characterizing urban space through what they have in minds and how they behave in the space. In this paper, we challenge to analyze urban characteristics in terms of crowd behavior by utilizing crowd lifelogs in urban area over the social networking sites. In order to collect crowd behavioral data, we exploit the most famous microblogging site, Twitter, where a great deal of geo-tagged micro lifelogs emitted by massive crowds can be easily acquired. We first present a model to deal with crowds’ behavioral logs on the social network sites as a representing feature of urban space’s characteristics, which will be used to conduct crowd-based urban characterization. Based on this crowd behavioral feature, we will extract significant crowd behavioral patterns in a period of time. In the experiment, we conducted the urban characterization by extracting the crowd behavioral patterns and examined the relation between the regions of common crowd activity patterns and the major categories of local facilities.", "title": "" }, { "docid": "31975dad000fa4dabf2b922876298aca", "text": "We introduce DeepNAT, a 3D Deep convolutional neural network for the automatic segmentation of NeuroAnaTomy in T1-weighted magnetic resonance images. DeepNAT is an end-to-end learning-based approach to brain segmentation that jointly learns an abstract feature representation and a multi-class classification. We propose a 3D patch-based approach, where we do not only predict the center voxel of the patch but also neighbors, which is formulated as multi-task learning. To address a class imbalance problem, we arrange two networks hierarchically, where the first one separates foreground from background, and the second one identifies 25 brain structures on the foreground. 
Since patches lack spatial context, we augment them with coordinates. To this end, we introduce a novel intrinsic parameterization of the brain volume, formed by eigenfunctions of the Laplace-Beltrami operator. As network architecture, we use three convolutional layers with pooling, batch normalization, and non-linearities, followed by fully connected layers with dropout. The final segmentation is inferred from the probabilistic output of the network with a 3D fully connected conditional random field, which ensures label agreement between close voxels. The roughly 2.7million parameters in the network are learned with stochastic gradient descent. Our results show that DeepNAT compares favorably to state-of-the-art methods. Finally, the purely learning-based method may have a high potential for the adaptation to young, old, or diseased brains by fine-tuning the pre-trained network with a small training sample on the target application, where the availability of larger datasets with manual annotations may boost the overall segmentation accuracy in the future.", "title": "" }, { "docid": "3813fd345ba9f3c19303c64db1b7e9b2", "text": "In recent years, statistical learning (SL) research has seen a growing interest in tracking individual performance in SL tasks, mainly as a predictor of linguistic abilities. We review studies from this line of research and outline three presuppositions underlying the experimental approach they employ: (i) that SL is a unified theoretical construct; (ii) that current SL tasks are interchangeable, and equally valid for assessing SL ability; and (iii) that performance in the standard forced-choice test in the task is a good proxy of SL ability. We argue that these three critical presuppositions are subject to a number of theoretical and empirical issues. First, SL shows patterns of modality- and informational-specificity, suggesting that SL cannot be treated as a unified construct. Second, different SL tasks may tap into separate sub-components of SL that are not necessarily interchangeable. Third, the commonly used forced-choice tests in most SL tasks are subject to inherent limitations and confounds. As a first step, we offer a methodological approach that explicitly spells out a potential set of different SL dimensions, allowing for better transparency in choosing a specific SL task as a predictor of a given linguistic outcome. We then offer possible methodological solutions for better tracking and measuring SL ability. Taken together, these discussions provide a novel theoretical and methodological approach for assessing individual differences in SL, with clear testable predictions.This article is part of the themed issue 'New frontiers for statistical learning in the cognitive sciences'.", "title": "" }, { "docid": "879f5b5612367c65a220490cd2c11163", "text": "Organizations and businesses are rapidly shifting from goods-based economy to service-oriented economy [1]. This has led to the increased focus to IT service innovation. IT service management is a framework for enterprises to define, deliver, operate and govern the IT services to meet the business objective. Information Technology Infrastructure Library (ITIL) is a process driven best practices framework for implementing Information Technology Service Management (ITSM). ITIL was originally developed by the UK Government has been adopted by many hundreds of organizations. ITIL has the proven track record of delivering services at the optimal service level. 
Digital revolution is rapidly transforming the organization and industries. Managing growth and reducing cost out at the same time is a requirement for digital business, but it's not easy for IT infrastructure and operations. Automation is the best solution for meeting the demands of the digital business. Large number of IT organizations have started automating the repeatable and recurring processes. IT organizations need to move from opportunistic to systematic automation of IT processes [2]. In this study, we first systematically review the automation scope of ITSM from ITIL process perspective. Second, we identify the positive and negative impacts of automation to IT industry.", "title": "" }, { "docid": "bbb9412a61bb8497e1d8b6e955e0217b", "text": "There has been great interest in developing methodologies that are capable of dealing with imprecision and uncertainty. The large amount of research currently being carried out in fuzzy and rough sets is representative of this. Many deep relationships have been established, and recent studies have concluded as to the complementary nature of the two methodologies. Therefore, it is desirable to extend and hybridize the underlying concepts to deal with additional aspects of data imperfection. Such developments offer a high degree of flexibility and provide robust solutions and advanced tools for data analysis. Fuzzy-rough set-based feature (FS) selection has been shown to be highly useful at reducing data dimensionality but possesses several problems that render it ineffective for large datasets. This paper proposes three new approaches to fuzzy-rough FS-based on fuzzy similarity relations. In particular, a fuzzy extension to crisp discernibility matrices is proposed and utilized. Initial experimentation shows that the methods greatly reduce dimensionality while preserving classification accuracy.", "title": "" }, { "docid": "c4feca5e27cfecdd2913e18cc7b7a21a", "text": "one component of intelligent transportation systems, IV systems use sensing and intelligent algorithms to understand the vehicle’s immediate environment, either assisting the driver or fully controlling the vehicle. Following the success of information-oriented systems, IV systems will likely be the “next wave” for ITS, functioning at the control layer to enable the driver–vehicle “subsystem” to operate more effectively. This column provides a broad overview of applications and selected activities in this field. IV application areas", "title": "" }, { "docid": "1b20a2956e6c4c18b686ea8ab2b5d308", "text": "This thesis deals with the problem of anomaly detection for sequence data. Anomaly detection has been a widely researched problem in several application domains such as system health management, intrusion detection, healthcare, bioinformatics, fraud detection, and mechanical fault detection. Traditional anomaly detection techniques analyze each data instance (as a univariate or multivariate record) independently, and ignore the sequential aspect of the data. Often, anomalies in sequences can be detected only by analyzing data instances together as a sequence, and hence cannot detected by traditional anomaly detection techniques. The problem of anomaly detection for sequence data is a rich area of research because of two main reasons. First, sequences can be of different types, e.g., symbolic sequences, time series data, etc., and each type of sequence poses unique set of problems. 
Second, anomalies in sequences can be defined in multiple ways and hence there are different problem formulations. In this thesis we focus on solving one particular problem formulation called semi-supervised anomaly detection. We study the problem separately for symbolic sequences, univariate time series data, and multivariate time series data. The state of the art on anomaly detection for sequences is limited and fragmented across application domains. For symbolic sequences, several techniques have been proposed within specific domains, but it is not well understood how a technique developed for one domain would perform in a completely different domain. For univariate time series data, limited techniques exist, and are only evaluated for specific domains, while for multivariate time series data, anomaly detection research is relatively untouched. This thesis has two key goals. The first goal is to develop novel anomaly detection techniques for different types of sequences which perform better than existing techniques across a variety of application domains. The second goal is to identify the best anomaly detection technique for a given application domain. By realizing the first goal we develop", "title": "" }, { "docid": "3fe9e38a41d422367da1fce31579eef2", "text": "While desktop virtual reality (VR) offers a way to visualize structure in large information sets, there have been relatively few empirical investigations of visualization designs in this domain. This thesis reports the development and testing of a series of prototype desktop VR worlds, which were designed to support navigation during information visualization and retrieval. Four methods were used for data collection: search task scoring, subjective questionnaires, navigational activity logging and analysis, and administration of tests for spatial and structure-learning ability. The combination of these research methods revealed significant effects of user abilities, information environment designs, and task learning. The first of four studies compared three versions of a structured virtual landscape, finding significant differences in sense of presence, ease of use, and overall enjoyment; there was, however, no significant difference in performance among the three landscape versions. The second study found a hypertext interface to be superior to a VR interface for task performance, ease of use, and rated efficiency; nevertheless, the VR interface was rated as more enjoyable. The third study used a new layout algorithm; the resulting prototype was rated as easier to use and more efficient than the previous VR version. In the fourth study, a zoomable, map-like view of the newest VR prototype was developed. Experimental participants found the map-view superior to the 3D-view for task performance and rated efficiency. Overall, this research did not find a performance advantage for using 3D versions of VR. In addition, the results of the fourth study found that people in the lowest quartile of spatial ability had significantly lower search performance (relative to the highest three quartiles) in a VR world. This finding suggests that individual differences for traits such as spatial ability may be important in determining the usability and acceptability of VR environments. In addition to the experimental results summarized above, this thesis also developed and refined a methodology for investigating tasks, users, and software in 3D environments. 
This methodology included tests for spatial and structure-learning abilities, as well as logging and analysis of a user's navigational activi-", "title": "" }, { "docid": "acb95dbe06b415335cfab42ff71458a0", "text": "The problem of E-waste has forced Environmental agencies of many countries to innovate, develop and adopt environmentally sound options and strategies for E-waste management, with a view to mitigate and control the ever growing threat of E-waste to the environment and human health. E-waste management is given the top priority in many developed countries, but in rapid developing countries like India, it is difficult to completely adopt or replicate the E-waste management system in developed countries due to many country specific issues viz. socio-economic conditions, lack of infrastructure, absence of appropriate legislations for E-waste, approach and commitments of the concerned, etc. This paper presents a review and assessment of the E-waste management system of developed as well as developing countries with a special emphasis on Switzerland, which is the first country in the world to have established and implemented a formal E-waste management system and has recycled 11kg/capita of WEEE against the target of 4kg/capita set by EU. And based on the discussions of various approaches, laws, legislations, practices of different countries, a road map for the development of sustainable and effective E-waste management system in India for ensuring environment, as well as, occupational safety and health, is proposed.", "title": "" }, { "docid": "2fdf6538c561e05741baafe43ec6f145", "text": "Models based on deep convolutional networks have dominated recent image interpretation tasks; we investigate whether models which are also recurrent are effective for tasks involving sequences, visual and otherwise. We describe a class of recurrent convolutional architectures which is end-to-end trainable and suitable for large-scale visual understanding tasks, and demonstrate the value of these models for activity recognition, image captioning, and video description. In contrast to previous models which assume a fixed visual representation or perform simple temporal averaging for sequential processing, recurrent convolutional models are “doubly deep” in that they learn compositional representations in space and time. Learning long-term dependencies is possible when nonlinearities are incorporated into the network state updates. Differentiable recurrent models are appealing in that they can directly map variable-length inputs (e.g., videos) to variable-length outputs (e.g., natural language text) and can model complex temporal dynamics; yet they can be optimized with backpropagation. Our recurrent sequence models are directly connected to modern visual convolutional network models and can be jointly trained to learn temporal dynamics and convolutional perceptual representations. Our results show that such models have distinct advantages over state-of-the-art models for recognition or generation which are separately defined or optimized.", "title": "" }, { "docid": "f1635d5cf51f0a4d70090f5f672de605", "text": "Enrichment analysis is a popular method for analyzing gene sets generated by genome-wide experiments. Here we present a significant update to one of the tools in this domain called Enrichr. Enrichr currently contains a large collection of diverse gene set libraries available for analysis and download. In total, Enrichr currently contains 180 184 annotated gene sets from 102 gene set libraries. 
New features have been added to Enrichr including the ability to submit fuzzy sets, upload BED files, improved application programming interface and visualization of the results as clustergrams. Overall, Enrichr is a comprehensive resource for curated gene sets and a search engine that accumulates biological knowledge for further biological discoveries. Enrichr is freely available at: http://amp.pharm.mssm.edu/Enrichr.", "title": "" }, { "docid": "19c7311bd71763ff246ac598c174c379", "text": "Contextual word representations derived from large-scale neural language models are successful across a diverse set of NLP tasks, suggesting that they encode useful and transferable features of language. To shed light on the linguistic knowledge they capture, we study the representations produced by several recent pretrained contextualizers (variants of ELMo, the OpenAI transformer LM, and BERT) with a suite of sixteen diverse probing tasks. We find that linear models trained on top of frozen contextual representations are competitive with state-of-the-art task-specific models in many cases, but fail on tasks requiring fine-grained linguistic knowledge (e.g., conjunct identification). To investigate the transferability of contextual word representations, we quantify differences in the transferability of individual layers within contextualizers, especially between RNNs and transformers. For instance, higher layers of RNNs are more task-specific, while transformer layers do not exhibit the same monotonic trend. In addition, to better understand what makes contextual word representations transferable, we compare language model pretraining with eleven supervised pretraining tasks. For any given task, pretraining on a closely related task yields better performance than language model pretraining (which is better on average) when the pretraining dataset is fixed. However, language model pretraining on more data gives the best results.", "title": "" }, { "docid": "4be145a84e532712826a2cf00adf065a", "text": "An integrated design of a planar source-fed folded reflectarray antenna (FRA) with low profile is proposed to achieve high gain for Q-band millimeter wave high data rate wireless communication applications. A planar substrate integrated waveguide (SIW) slot array antenna is adopted as the primary source to illuminate the proposed FRA. Considerations of the off-focus loss and the asymmetry E- and H-plane beamwidths loss of the planar source are thoroughly investigated to improve the aperture efficiency of the reflectarray antenna. A prototype is implemented and the experimental results are in good agreement with the predictions, which exhibit a measured |S11| less than -10 dB over the frequency band of 41.6-44 GHz, a maximum boresight gain of 31.9 dBi at 44 GHz with the aperture efficiency of 49% and a -3 dB gain drop bandwidth of 41.8-44.9 GHz. By integrating with the planar feed, the proposed folded reflectarray antenna benefits from its low-profile, low cost, and the ability of coplanar integration with millimeter wave planar circuits while maintaining the good performance in terms of gain and efficiency.", "title": "" }, { "docid": "68c988688a772b35b014700cd2d1d906", "text": "In today’s new economy characterized by industrial change, globalization, increased intensive competition, knowledge sharing and transfer, and information technology revolution, traditional classroom education or training does not always satisfy all the needs of the new world of lifelong learning. 
Learning is shifting from instructor-centered to learner-centered, and is undertaken anywhere, from classrooms to homes and offices. E-Learning, referring to learning via the Internet, provides people with a flexible and personalized way to learn. It offers learning-on-demand opportunities and reduces learning cost. This paper describes the demands for e-Learning and related research, and presents a variety of enabling technologies that can facilitate the design and implementation of e-Learning systems. Armed with the advanced information and communication technologies, e-Learning is having a far-reaching impact on learning in the new millennium.", "title": "" }, { "docid": "1b649e6f28063b62da7179c13ebb61c0", "text": "This paper describes the design and implementation of IceBound, an indie game that uses dynamic story techniques. Storygames with high narrative process intensity (where story is generated or recombined in algorithmically interesting ways, rather than being simply pre-authored) are still rare, in part because story creators are hesitant to cede control over output quality to a system. As a result, most readers still encounter digital fiction in the form of linear e-books or simple branching-path models of interactive story. Getting more dynamic models into the public consciousness requires exploring new frontiers of design space driven by the twin concerns of fiction authors (for high-quality story realization) and game designers (for frequent, high-impact player decisions). Our design for Ice-Bound rejects both branching-path models of interactive story as well as overly simulationist approaches, targeting a middle-road aesthetic of sculptural construction that marries a focus on quality output with the player’s exploration of both an emergent expressive space and an AR-enabled art book. Through implementing Ice-Bound, we developed design strategies and practical lessons useful to other interactive narrative designers, including three high-level lessons. First, quantifiable metrics and tools for content authoring for a combinatorial system are essential to maintain control over such a system. Second, authors should consider embedding highly dynamic but less narratively cohesive mechanics within layers of less dynamic but more tightly authored storytelling, so each part of the story can be told within a framework to which it is best suited. Finally, iterating the design of all story layers at once leads to tighter coupling between ludic and narrative levels and a stronger narrative experience overall.", "title": "" }, { "docid": "c7435dedf3733e3dd2285b1b04533b1c", "text": "Deciding whether a claim is true or false often requires a deeper understanding of the evidence supporting and contradicting the claim. However, when presented with many evidence documents, users do not necessarily read and trust them uniformly. Psychologists and other researchers have shown that users tend to follow and agree with articles and sources that hold viewpoints similar to their own, a phenomenon known as confirmation bias. This suggests that when learning about a controversial topic, human biases and viewpoints about the topic may affect what is considered “trustworthy” or credible. It is an interesting challenge to build systems that can help users overcome this bias and help them decide the truthfulness of claims. In this article, we study various factors that enable humans to acquire additional information about controversial claims in an unbiased fashion. 
Specifically, we designed a user study to understand how presenting evidence with contrasting viewpoints and source expertise ratings affect how users learn from the evidence documents. We find that users do not seek contrasting viewpoints by themselves, but explicitly presenting contrasting evidence helps them get a well-rounded understanding of the topic. Furthermore, explicit knowledge of the credibility of the sources and the context in which the source provides the evidence document not only affects what users read but also whether they perceive the document to be credible. Introduction", "title": "" }, { "docid": "e2ee26af1fb425f8591b5b8689080fff", "text": "In this paper, we focus on a recent Web trend called microblogging, and in particular a site called Twitter. The content of such a site is an extraordinarily large number of small textual messages, posted by millions of users, at random or in response to perceived events or situations. We have developed an algorithm that takes a trending phrase or any phrase specified by a user, collects a large number of posts containing the phrase, and provides an automatically created summary of the posts related to the term. We present examples of summaries we produce along with initial evaluation.", "title": "" }, { "docid": "d16e579aadf2e9c871c76a201fa5cc29", "text": "Worldwide, buildings account for ca. 40% of the total energy consumption and ca. 20% of the total CO2 emissions. While most of the energy goes into primary building use, a significant amount of energy is wasted due to malfunctioning building system equipment and wrongly configured Building Management Systems (BMS). For example, wrongly configured setpoints or building equipment, or misplaced sensors and actuators, can contribute to deviations of the real energy consumption from the predicted one. Our paper is motivated by these posed challenges and aims at pinpointing the types of problems in the BMS components that can affect the energy efficiency of a building, as well as review the methods that can be utilized for their discovery and diagnosis. The goal of the paper is to highlight the challenges that lie in this problem domain, as well as provide a strategy how to counterfeit them.", "title": "" }, { "docid": "03983bc20975ce83e21607582038d397", "text": "This paper addresses the problem of human pose estimation, given images taken from multiple dynamic but calibrated cameras. We consider solving this task using a part-based model and focus on the part appearance component of such a model. We use a random forest classifier to capture the variation in appearance of body parts in 2D images. The result of these 2D part detectors are then aggregated across views to produce consistent 3D hypotheses for parts. We solve correspondences across views for mirror symmetric parts by introducing a latent variable. We evaluate our part detectors qualitatively and quantitatively on a dataset gathered from a professional football game.", "title": "" } ]
scidocsrr
00ffbff8b7fad3d1bf38eda92d5af517
Research using qualitative, quantitative or mixed methods and choice based on the research.
[ { "docid": "7eec9c40d8137670a88992d40ef52101", "text": "Nowadays, most nurses, pre- and post-qualification, will be required to undertake a literature review at some point, either as part of a course of study, as a key step in the research process, or as part of clinical practice development or policy. For student nurses and novice researchers it is often seen as a difficult undertaking. It demands a complex range of skills, such as learning how to define topics for exploration, acquiring skills of literature searching and retrieval, developing the ability to analyse and synthesize data as well as becoming adept at writing and reporting, often within a limited time scale. The purpose of this article is to present a step-by-step guide to facilitate understanding by presenting the critical elements of the literature review process. While reference is made to different types of literature reviews, the focus is on the traditional or narrative review that is undertaken, usually either as an academic assignment or part of the research process.", "title": "" } ]
[ { "docid": "3b3343f757e5be54fd36dbd3ffaf2d10", "text": "The C++ package ADOL-C described here facilitates the evaluation of first and higher derivatives of vector functions that are defined by computer programs written in C or C++. The resulting derivative evaluation routines may be called from C/C++, Fortran, or any other language that can be linked with C. The numerical values of derivative vectors are obtained free of truncation errors at a small multiple of the run-time and randomly accessed memory of the given function evaluation program. Derivative matrices are obtained by columns or rows. For solution curves defined by ordinary differential equations, special routines are provided that evaluate the Taylor coefficient vectors and their Jacobians with respect to the current state vector. The derivative calculations involve a possibly substantial (but always predictable) amount of data that are accessed strictly sequentially and are therefore automatically paged out to external files.", "title": "" }, { "docid": "07ef9eece7de49ee714d4a2adf9bb078", "text": "Vegetable oil has been proven to be advantageous as a non-toxic, cost-effective and biodegradable solvent to extract polycyclic aromatic hydrocarbons (PAHs) from contaminated soils for remediation purposes. The resulting vegetable oil contained PAHs and therefore required a method for subsequent removal of extracted PAHs and reuse of the oil in remediation processes. In this paper, activated carbon adsorption of PAHs from vegetable oil used in soil remediation was assessed to ascertain PAH contaminated oil regeneration. Vegetable oils, originating from lab scale remediation, with different PAH concentrations were examined to study the adsorption of PAHs on activated carbon. Batch adsorption tests were performed by shaking oil-activated carbon mixtures in flasks. Equilibrium data were fitted with the Langmuir and Freundlich isothermal models. Studies were also carried out using columns packed with activated carbon. In addition, the effects of initial PAH concentration and activated carbon dosage on sorption capacities were investigated. Results clearly revealed the effectiveness of using activated carbon as an adsorbent to remove PAHs from the vegetable oil. Adsorption equilibrium of PAHs on activated carbon from the vegetable oil was successfully evaluated by the Langmuir and Freundlich isotherms. The initial PAH concentrations and carbon dosage affected adsorption significantly. The results indicate that the reuse of vegetable oil was feasible.", "title": "" }, { "docid": "d30f40e879ae7c5b49b4be94679c7424", "text": "Java offers the basic infrastructure needed to integrate computers connected to the Internet into a seamless parallel computational resource: a flexible, easily-installed infrastructure for running coarsegrained parallel applications on numerous, anonymous machines. Ease of participation is seen as a key property for such a resource to realize the vision of a multiprocessing environment comprising thousands of computers. We present Javelin, a Java-based infrastructure for global computing. The system is based on Internet software technology that is essentially ubiquitous: Web technology. Its architecture and implementation require participants to have access only to a Java-enabled Web browser. The security constraints implied by this, the resulting architecture, and current implementation are presented. The Javelin architecture is intended to be a substrate on which various programming models may be implemented. 
Several such models are presented: A Linda Tuple Space, an SPMD programming model with barriers, as well as support for message passing. Experimental results are given in the form of micro-benchmarks and a Mersenne Prime application that runs on a heterogeneous network of several parallel machines, workstations, and PCs.", "title": "" }, { "docid": "58984ddb8d4c28dc63caa29bc245e259", "text": "OpenCL is an open standard to write parallel applications for heterogeneous computing systems. Since its usage is restricted to a single operating system instance, programmers need to use a mix of OpenCL and MPI to program a heterogeneous cluster. In this paper, we introduce an MPI-OpenCL implementation of the LINPACK benchmark for a cluster with multi-GPU nodes. The LINPACK benchmark is one of the most widely used benchmark applications for evaluating high performance computing systems. Our implementation is based on High Performance LINPACK (HPL) and uses the blocked LU decomposition algorithm. We address that optimizations aimed at reducing the overhead of CPUs are necessary to overcome the performance gap between the CPUs and the multiple GPUs. Our LINPACK implementation achieves 93.69 Tflops (46 percent of the theoretical peak) on the target cluster with 49 nodes, each node containing two eight-core CPUs and four GPUs.", "title": "" }, { "docid": "172f206c8b3b0bc0d75793a13fa9ef88", "text": "Knowledge bases are important resources for a variety of natural language processing tasks but suffer from incompleteness. We propose a novel embedding model, ITransF, to perform knowledge base completion. Equipped with a sparse attention mechanism, ITransF discovers hidden concepts of relations and transfer statistical strength through the sharing of concepts. Moreover, the learned associations between relations and concepts, which are represented by sparse attention vectors, can be interpreted easily. We evaluate ITransF on two benchmark datasets— WN18 and FB15k for knowledge base completion and obtains improvements on both the mean rank and Hits@10 metrics, over all baselines that do not use additional information.", "title": "" }, { "docid": "fb8fc8a881ff11d997e9bb763234aa78", "text": "To determine the elasticity characteristics of focal liver lesions (FLLs) by shearwave elastography (SWE). We used SWE in 108 patients with 161 FLLs and in the adjacent liver for quantitative and qualitative FLLs stiffness assessment. The Mann–Whitney test was used to assess the difference between the groups of lesions where a P value less than 0.05 was considered significant. SWE acquisitions failed in 22 nodules (14 %) in 13 patients. For the 139 lesions successfully evaluated, SWE values were (in kPa), for the 3 focal fatty sparings (FFS) 6.6 ± 0.3, for the 10 adenomas 9.4 ± 4.3, for the 22 haemangiomas 13.8 ± −5.5, for the 16 focal nodular hyperplasias (FNHs) 33 ± −14.7, for the 2 scars 53.7 ± 4.7, for the 26 HCCs 14.86 ± 10, for the 53 metastasis 28.8 ± 16, and for the 7 cholangiocarcinomas 56.9 ± 25.6. FNHs had significant differences in stiffness compared with adenomas (P = 0.0002). Fifty percent of the FNHs had a radial pattern of elevated elasticity. A significant difference was also found between HCCs and cholangiocarcinomas elasticity (P = 0.0004). SWE could be useful in differentiating FNHs and adenomas, or HCCs and cholangiocarcinomas by ultrasound. 
• Elastography is becoming quite widely used as an adjunct to conventional ultrasound • Shearwave elastography (SWE) could help differentiate adenomas from focal nodular hyperplasia • SWE could also be helpful in distinguishing between hepatocellular carcinomas and cholangiocarcinomas • SWE could improve the identification of hepatocellular carcinomas in cirrhotic livers", "title": "" }, { "docid": "e1001ebf3a30bcb2599fae6dae8f83e9", "text": "The notion of the \"stakeholders\" of the firm has drawn ever-increasing attention since Freeman published his seminal book on Strategic Management: A Stakeholder Approach in 1984. In the understanding of most scholars in the field, stakeholder theory is not a special theory on a firm's constituencies but sets out to replace today's prevailing neoclassical economic concept of the firm. As such, it is seen as the superior theory of the firm. Though stakeholder theory explicitly is a theory on the firm, that is, on a private sector entity, some scholars try to apply it to public sector organizations, and, in particular, to e-government settings. This paper summarizes stakeholder theory, discusses its premises and justifications, compares its tracks, sheds light on recent attempts to join the two tracks, and discusses the benefits and limits of its practical applicability to the public sector using the case of a recent e-government initiative in New York State.", "title": "" }, { "docid": "70aba7669b0d3d17dd43eb570042b769", "text": "The most critical step in the production of diphtheria vaccines is the inactivation of the toxin by formaldehyde. Diphtheria toxoid (DTx) is produced during this inactivation process through partly unknown, chemical modifications of the toxin. Consequently, diphtheria vaccines are difficult to characterise completely and the quality of the toxoids is routinely determined with potency and safety tests. This article describes the possibility of monitoring the quality in diphtheria vaccine production with a selection of physicochemical and immunochemical tests as an alternative to established in vivo tests. To this end, diphtheria toxin was treated with increasing formaldehyde concentrations resulting in toxoid products varying in potency and residual toxicity. Differences in the quality of the experimental toxoids were also assessed with physicochemical and immunochemical techniques. The results obtained with several of these analyses, including SDS-PAGE, primary amino group determination, fluorescence spectroscopy, circular dichroism (CD) and biosensor analysis, showed a clear correlation with the potency and safety tests. A set of criteria is proposed that a diphtheria toxoid must comply with, i.e. an apparent shift of the B-fragment on SDS-PAGE, a reduction of primary amino groups in a diphtheria molecule, an increased resistance to denaturation, an increased circular dichroism signal in the near-UV region and a reduced binding to selected monoclonal antibodies. In principle, a selected set of in vitro analyses can replace the classical in vivo tests to evaluate the quality of diphtheria toxoid vaccines, provided that the validity of these tests is demonstrated in extensive validation studies and regulatory acceptance is obtained.", "title": "" }, { "docid": "71819107f543aa2b20b070e322cf1bbb", "text": "Despite the recent success of end-to-end learned representations, hand-crafted optical flow features are still widely used in video analysis tasks. 
To fill this gap, we propose TVNet, a novel end-to-end trainable neural network, to learn optical-flow-like features from data. TVNet subsumes a specific optical flow solver, the TV-L1 method, and is initialized by unfolding its optimization iterations as neural layers. TVNet can therefore be used directly without any extra learning. Moreover, it can be naturally concatenated with other task-specific networks to formulate an end-to-end architecture, thus making our method more efficient than current multi-stage approaches by avoiding the need to pre-compute and store features on disk. Finally, the parameters of the TVNet can be further fine-tuned by end-to-end training. This enables TVNet to learn richer and task-specific patterns beyond exact optical flow. Extensive experiments on two action recognition benchmarks verify the effectiveness of the proposed approach. Our TVNet achieves better accuracies than all compared methods, while being competitive with the fastest counterpart in terms of features extraction time.", "title": "" }, { "docid": "19ab6d7d30cd27f97b948674575efe2a", "text": "We present a user-friendly image editing system that supports a drag-and-drop object insertion (where the user merely drags objects into the image, and the system automatically places them in 3D and relights them appropriately), postprocess illumination editing, and depth-of-field manipulation. Underlying our system is a fully automatic technique for recovering a comprehensive 3D scene model (geometry, illumination, diffuse albedo, and camera parameters) from a single, low dynamic range photograph. This is made possible by two novel contributions: an illumination inference algorithm that recovers a full lighting model of the scene (including light sources that are not directly visible in the photograph), and a depth estimation algorithm that combines data-driven depth transfer with geometric reasoning about the scene layout. A user study shows that our system produces perceptually convincing results, and achieves the same level of realism as techniques that require significant user interaction.", "title": "" }, { "docid": "ee15c7152a2e2b9f372ca97283a3c114", "text": "Essential oil (EO) of the leaves of Eugenia uniflora L. (Brazilian cherry tree) was evaluated for its antioxidant, antibacterial and antifungal properties. The acute toxicity of the EO administered by oral route was also evaluated in mice. The EO exhibited antioxidant activity in the DPPH, ABTS and FRAP assays and reduced lipid peroxidation in the kidney of mice. The EO also showed antimicrobial activity against two important pathogenic bacteria, Staphylococcus aureus and Listeria monocytogenes, and against two fungi of the Candida species, C. lipolytica and C. guilliermondii. Acute administration of the EO by the oral route did not cause lethality or toxicological effects in mice. These findings suggest that the EO of the leaves of E. uniflora may have the potential for use in the pharmaceutical industry.", "title": "" }, { "docid": "eda61384789ccc3cbe50aaec0df92321", "text": "Increasing evidence suggests that cultural influences on brain activity are associated with multiple cognitive and affective processes. These findings prompt an integrative framework to account for dynamic interactions between culture, behavior, and the brain. 
We put forward a culture-behavior-brain (CBB) loop model of human development that proposes that culture shapes the brain by contextualizing behavior, and the brain fits and modifies culture via behavioral influences. Genes provide a fundamental basis for, and interact with, the CBB loop at both individual and population levels. The CBB loop model advances our understanding of the dynamic relationships between culture, behavior, and the brain, which are crucial for human phylogeny and ontogeny. Future brain changes due to cultural influences are discussed based on the CBB loop model.", "title": "" }, { "docid": "2d4cb6980cf8716699bdffca6cfed274", "text": "Advances in laser technology have progressed so rapidly during the past decade that successful treatment of many cutaneous concerns and congenital defects, including vascular and pigmented lesions, tattoos, scars and unwanted haircan be achieved. The demand for laser surgery has increased as a result of the relative ease with low incidence of adverse postoperative sequelae. In this review, the currently available laser systems with cutaneous applications are outlined to identify the various types of dermatologic lasers available, to list their clinical indications and to understand the possible side effects.", "title": "" }, { "docid": "70c8caf1bdbdaf29072903e20c432854", "text": "We show that the topological modular functor from Witten–Chern–Simons theory is universal for quantum computation in the sense that a quantum circuit computation can be efficiently approximated by an intertwining action of a braid on the functor’s state space. A computational model based on Chern–Simons theory at a fifth root of unity is defined and shown to be polynomially equivalent to the quantum circuit model. The chief technical advance: the density of the irreducible sectors of the Jones representation has topological implications which will be considered elsewhere.", "title": "" }, { "docid": "dd9567f20e8d0c44d10c48bf7a73e787", "text": "The dispersion relation and confinement of terahertz surface plasmon modes propagating along planar Goubau lines are studied using guided-wave time domain spectroscopy. We demonstrate the radial nature of the surface plasmon mode known as the Goubau mode and the transverse confinement of the electric field over a few tenths of microns (~l/10). We experimentally and computationally observed a transition of the shape of the THz pulses from unipolar to bipolar as the propagation distance increases, indicating that the Goubau line acts as a high-pass filter. The deviation of the dispersion relation curve from a linear law above 600 GHz is discussed.", "title": "" }, { "docid": "5325672f176fd572f7be68a466538d95", "text": "The successful execution of location-based and feature-based queries on spatial databases requires the construction of spatial indexes on the spatial attributes. This is not simple when the data is unstructured as is the case when the data is a collection of documents such as news articles, which is the domain of discourse, where the spatial attribute consists of text that can be (but is not required to be) interpreted as the names of locations. In other words, spatial data is specified using text (known as a toponym) instead of geometry, which means that there is some ambiguity involved. 
The process of identifying and disambiguating references to geographic locations is known as geotagging and involves using a combination of internal document structure and external knowledge, including a document-independent model of the audience's vocabulary of geographic locations, termed its spatial lexicon. In contrast to previous work, a new spatial lexicon model is presented that distinguishes between a global lexicon of locations known to all audiences, and an audience-specific local lexicon. Generic methods for inferring audiences' local lexicons are described. Evaluations of this inference method and the overall geotagging procedure indicate that establishing local lexicons cannot be overlooked, especially given the increasing prevalence of highly local data sources on the Internet, and will enable the construction of more accurate spatial indexes.", "title": "" }, { "docid": "773d90c215b4c04cf713b1c1266f88d9", "text": "Electromyography (EMG) is the study of muscles function through analysis of electrical activity produced from muscles. This electrical activity which is displayed in the form of signal is the result of neuromuscular activation associated with muscle contraction. The most common techniques of EMG signal recording are by using surface and needle/wire electrode where the latter is usually used for interest in deep muscle. This paper will focus on surface electromyogram (SEMG) signal. During SEMG recording, several problems had to been countered such as noise, motion artifact and signal instability. Thus, various signal processing techniques had been implemented to produce a reliable signal for analysis. SEMG signal finds broad application particularly in biomedical field. It had been analyzed and studied for various interests such as neuromuscular disease, enhancement of muscular function and human-computer interface. Keywords—Evolvable hardware (EHW), Functional Electrical Simulation (FES), Hidden Markov Model (HMM), Hjorth Time Domain (HTD).", "title": "" }, { "docid": "64c06c6669df3e500df0d3b7fe792160", "text": "New questions about microbial ecology and diversity combined with significant improvement in the resolving power of molecular tools have helped the reemergence of the field of prokaryotic biogeography. Here, we show that biogeography may constitute a cornerstone approach to study diversity patterns at different taxonomic levels in the prokaryotic world. Fundamental processes leading to the formation of biogeographic patterns are examined in an evolutionary and ecological context. Based on different evolutionary scenarios, biogeographic patterns are thus posited to consist of dramatic range expansion or regression events that would be the results of evolutionary and ecological forces at play at the genotype level. The deterministic or random nature of those underlying processes is, however, questioned in light of recent surveys. Such scenarios led us to predict the existence of particular genes whose presence or polymorphism would be associated with cosmopolitan taxa. Furthermore, several conceptual and methodological pitfalls that could hamper future developments of the field are identified, and future approaches and new lines of investigation are suggested.", "title": "" }, { "docid": "9a4bdfe80a949ec1371a917585518ae4", "text": "This article presents the event calculus, a logic-based formalism for representing actions and their effects. A circumscriptive solution to the frame problem is deployed which reduces to monotonic predicate completion. 
Using a number of benchmark examples from the literature, the formalism is shown to apply to a variety of domains, including those featuring actions with indirect effects, actions with non-deterministic effects, concurrent actions, and continuous change.", "title": "" }, { "docid": "05db68275cffc5f1d035fd73f9b0f29b", "text": "Names are important in many societies, even in technologically oriented ones which use e.g. ID systems to identify individual people. Names such as surnames are the most important as they are used in many processes, such as identifying of people and genealogical research. On the other hand variation of names can be a major problem for the identification and search for people, e.g. web search or security reasons. Name matching presumes a-priori that the recorded name written in one alphabet reflects the phonetic identity of two samples or some transcription error in copying a previously recorded name. We add to this the lode that the two names imply the same person. This paper describes name variations and some basic description of various name matching algorithms developed to overcome name variation and to find reasonable variants of names which can be used to further increasing mismatches for record linkage and name search. The implementation contains algorithms for computing a range of fuzzy matching based on different types of algorithms, e.g. composite and hybrid methods and allowing us to test and measure algorithms for accuracy. NYSIIS, LIG2 and Phonex have been shown to perform well and provided sufficient flexibility to be included in the linkage/matching process for optimising name searching. Keywords—Data mining, name matching algorithm, nominal data, searching system.", "title": "" } ]
scidocsrr
e15f5d4a08e7ff2e8e26b647aa30119b
An approximation algorithm for the generalized assignment problem
[ { "docid": "353bbc5e68ec1d53b3cd0f7c352ee699", "text": "• A submitted manuscript is the author's version of the article upon submission and before peer-review. There can be important differences between the submitted version and the official published version of record. People interested in the research are advised to contact the author for the final version of the publication, or visit the DOI to the publisher's website. • The final author version and the galley proof are versions of the publication after peer review. • The final published version features the final layout of the paper including the volume, issue and page numbers.", "title": "" } ]
[ { "docid": "95d24478b92f8e5d096481bac0622d53", "text": "We present MultiPoint, a set of perspective-based remote pointing techniques that allows users to perform bimanual and multi-finger remote manipulation of graphical objects on large displays. We conducted two empirical studies that compared remote pointing techniques performed using fingers and laser pointers, in single and multi-finger pointing interactions. We explored three types of manual selection gestures: squeeze, breach and trigger. The fastest and most preferred technique was the trigger gesture in the single point experiment and the unimanual breach gesture in the multi-finger pointing study. The laser pointer obtained mixed results: it is fast, but inaccurate in single point, and it obtained the lowest ranking and performance in the multipoint experiment. Our results suggest MultiPoint interaction techniques are superior in performance and accuracy to traditional laser pointers for interacting with graphical objects on a large display from a distance. & 2012 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "3297cc79e8b43ee9d00283cbb7788c85", "text": "State-of-the-art NLG systems have a mostly static perspective on language generation, i.e. they have a fixed knowledge representation from which they generate language. This is fine for offline working systems, where the language is read only after it has been completely generated. However, systems set in a dynamic environment, e.g. those interacting with a human user, need a dynamic way of processing and, therefore, dynamic representations to exhibit appropriate response behaviour. For such settings we suggest to use an incremental mode of processing in conjunction with underspecified semantic representations. We especially focus on the case where the content to be verbalised changes during language generation.", "title": "" }, { "docid": "18aa13e95a26f0cb82d257c3913b6203", "text": "The ability to learn from incrementally arriving data is essential for any life-long learning system. However, standard deep neural networks forget the knowledge about the old tasks, a phenomenon called catastrophic forgetting, when trained on incrementally arriving data. We discuss the biases in current Generative Adversarial Networks (GAN) based approaches that learn the classifier by knowledge distillation from previously trained classifiers. These biases cause the trained classifier to perform poorly. We propose an approach to remove these biases by distilling knowledge from the classifier of AC-GAN. Experiments on MNIST and CIFAR10 show that this method is comparable to current state of the art rehearsal based approaches. The code for this paper is available at this link.", "title": "" }, { "docid": "54e7bf2aa21a539f5a0ddcfd0bbc8be1", "text": "We consider the problem of grasping concave objects, i.e., objects whose surface includes regions with negative curvature. When a multifingered hand is used to restrain these objects, these areas can be advantageously used to determine grasps capable of more robustly resisting to external disturbance wrenches. We propose a new grasp quality metric specifically suited for this case, and we use it to inform a grasp planner searching the space of possible grasps. Our findings are validated both in simulation and on a real robot system executing a bin picking task. 
Experimental validation shows that our method is more effective than those not explicitly considering negative curvature.", "title": "" }, { "docid": "5e340ad23f37830384b77ed45813771f", "text": "Current-mode algorithmic pipelined analog-to-digital converters (ADC) are suitable for sensor applications due to their area and power advantage at low resolutions. In applications of distributed sensing using redundant sensors, the speed and the resolution of the ADC is less important than the energy per bit conversion. Such performance was achieved by using the current-mode technique with transistors operating in the sub-threshold region. Both sub-threshold and current mode techniques allow for low-voltage and low-power operation. An improved current-mode, 6-bit, 125 kHz algorithmic ADC was designed for an integrated sensor in 0.18 /spl mu/m technology. The power consumption of the ADC is under 6 /spl mu/W and less than 8 pJ per bit.", "title": "" }, { "docid": "47897fc364551338fcaee76d71568e2e", "text": "As Internet traffic continues to grow in size and complexity, it has become an increasingly challenging task to understand behavior patterns of end-hosts and network applications. This paper presents a novel approach based on behavioral graph analysis to study the behavior similarity of Internet end-hosts. Specifically, we use bipartite graphs to model host communications from network traffic and build one-mode projections of bipartite graphs for discovering social-behavior similarity of end-hosts. By applying simple and efficient clustering algorithms on the similarity matrices and clustering coefficient of one-mode projection graphs, we perform network-aware clustering of end-hosts in the same network prefixes into different end-host behavior clusters and discover inherent clustered groups of Internet applications. Our experiment results based on real datasets show that end-host and application behavior clusters exhibit distinct traffic characteristics that provide improved interpretations on Internet traffic. Finally, we demonstrate the practical benefits of exploring behavior similarity in profiling network behaviors, discovering emerging network applications, and detecting anomalous traffic patterns.", "title": "" }, { "docid": "baa70e5df451e8bc7354fcf00349f53b", "text": "This paper investigates object categorization according to function, i.e., learning the affordances of objects from human demonstration. Object affordances (functionality) are inferred from observations of humans using the objects in different types of actions. The intended application is learning from demonstration, in which a robot learns to employ objects in household tasks, from observing a human performing the same tasks with the objects. We present a method for categorizing manipulated objects and human manipulation actions in context of each other. The method is able to simultaneously segment and classify human hand actions, and detect and classify the objects involved in the action. This can serve as an initial step in a learning from demonstration method. Experiments show that the contextual information improves the classification of both objects and actions. 2010 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "9760e3676a7df5e185ec35089d06525e", "text": "This paper examines the sufficiency of existing e-Learning standards for facilitating and supporting the introduction of adaptive techniques in computer-based learning systems. 
To that end, the main representational and operational requirements of adaptive learning environments are examined and contrasted against current eLearning standards. The motivation behind this preliminary analysis is attainment of: interoperability between adaptive learning systems; reuse of adaptive learning materials; and, the facilitation of adaptively supported, distributed learning activities.", "title": "" }, { "docid": "cd3d046fc4aa9af3730e76163fb2ae0a", "text": "Blockchain has emerged as one of the most promising and revolutionary technologies in the past years. Companies are exploring implementation of use cases in hope of significant gains in efficiencies. However, to achieve the impact hoped for, it is not sufficient to merely replace existing technologies. The current business processes must also be redesigned and innovated to enable realization of hoped for benefits. This conceptual paper provides a theoretical contribution on how blockchain technology and smart contracts potentially can, within the framework of the seven principles of business process re-engineering (BPR), enable process innovations. In this paper, we analyze the BPR principles in light of their applicability to blockchain-based solutions. We find these principles to be applicable and helpful in understanding how blockchain technology could enable transformational redesign of current processes. However, the viewpoint taken, should be expanded from intrato inter-organizational processes operating within an ecosystem of separate organizational entities. In such a blockchain powered ecosystem, smart contracts take on a pivotal role, both as repositories of data and executioner of activities.", "title": "" }, { "docid": "76070cda75614ae4b1e3fe53703e7a43", "text": "‘Emotion in Motion’ is an experiment designed to understand the emotional reaction of people to a variety of musical excerpts, via self-report questionnaires and the recording of electrodermal response (EDR) and pulse oximetry (HR) signals. The experiment ran for 3 months as part of a public exhibition, having nearly 4000 participants and over 12000 listening samples. This paper presents the methodology used by the authors to approach this research, as well as preliminary results derived from the self-report data and the physiology.", "title": "" }, { "docid": "70e88fe5fc43e0815a1efa05e17f7277", "text": "Smoke detection is a crucial task in many video surveillance applications and could have a great impact to raise the level of safety of urban areas. Many commercial smoke detection sensors exist but most of them cannot be applied in open space or outdoor scenarios. With this aim, the paper presents a smoke detection system that uses a common CCD camera sensor to detect smoke in images and trigger alarms. First, a proper background model is proposed to reliably extract smoke regions and avoid over-segmentation and false positives in outdoor scenarios where many distractors are present, such as moving trees or light reflexes. A novel Bayesian approach is adopted to detect smoke regions in the scene analyzing image energy by means of the Wavelet Transform coefficients and Color Information. A statistical model of image energy is built, using a temporal Gaussian Mixture, to analyze the energy decay that typically occurs when smoke covers the scene then the detection is strengthen evaluating the color blending between a reference smoke color and the input frame. 
The proposed system is capable of detecting rapidly smoke events both in night and in day conditions with a reduced number of false alarms hence is particularly suitable for monitoring large outdoor scenarios where common sensors would fail. An extensive experimental campaign both on recorded videos and live cameras evaluates the efficacy and efficiency of the system in many real world scenarios, such as outdoor storages and forests.", "title": "" }, { "docid": "8a5f7db855fc125880cf2fe881019174", "text": "Control of an industrial robot includes nonlinearities, uncertainties and external perturbations that should be considered in the design of control laws. Proportional-integral-derivative (PID)-type fuzzy controller is a well-known conventional motion control strategy for manipulators which ensures global asymptotic stability. To enhance the PID-type fuzzy controller performance for the control of rigid planar robot manipulators, in this paper, a fuzzy pre-compensation of a fuzzy self tuning fuzzy PID controller is proposed. The proposed control scheme consists of a fuzzy logic-based pre-compensator followed by a fuzzy self tuning fuzzy PID controller. In the fuzzy self tuning fuzzy PID controller, a supervisory hierarchical fuzzy controller (SHFC) is used for tuning the input scaling factors of the fuzzy PID controller according to the actual tracking position error and the actual tracking velocity error. Numerical simulations using the dynamic model of a three DOF planar rigid robot manipulator with uncertainties show the effectiveness of the approach in set point tracking problems. Our results show that the proposed controller has superior performance compared to a conventional fuzzy PID controller.", "title": "" }, { "docid": "4a08c16c5e091e1c6212fc606ccd854a", "text": "The problem of predicting the position of a freely foraging rat based on the ensemble firing patterns of place cells recorded from the CA1 region of its hippocampus is used to develop a two-stage statistical paradigm for neural spike train decoding. In the first, or encoding stage, place cell spiking activity is modeled as an inhomogeneous Poisson process whose instantaneous rate is a function of the animal's position in space and phase of its theta rhythm. The animal's path is modeled as a Gaussian random walk. In the second, or decoding stage, a Bayesian statistical paradigm is used to derive a nonlinear recursive causal filter algorithm for predicting the position of the animal from the place cell ensemble firing patterns. The algebra of the decoding algorithm defines an explicit map of the discrete spike trains into the position prediction. The confidence regions for the position predictions quantify spike train information in terms of the most probable locations of the animal given the ensemble firing pattern. Under our inhomogeneous Poisson model position was a three to five times stronger modulator of the place cell spiking activity than theta phase in an open circular environment. For animal 1 (2) the median decoding error based on 34 (33) place cells recorded during 10 min of foraging was 8.0 (7.7) cm. 
Our statistical paradigm provides a reliable approach for quantifying the spatial information in the ensemble place cell firing patterns and defines a generally applicable framework for studying information encoding in neural systems.", "title": "" }, { "docid": "353142680c25840f270250e4258b443b", "text": "In this letter, a low-impedance substrate integrated waveguide (SIW) bias line is proposed to suppress second and third harmonics in power amplifier. Such a bias line consists of a 4 low-impedance microstrip line and a shorted SIW that can operate as a radio frequency block and suppress any harmonic signals from entering the dc source. Since the frequency of the second harmonic components is lower than the inherent cutoff frequency of the SIW, second harmonic components are blocked. At the same time, third harmonic components are shorted by the shorted SIW. Measured results show that second and third harmonic components can be reduced by 25 dB and 13 dB, respectively, compared to the outcomes using conventional 4 high-impedance microstrip bias line. Also, both 1 dB compression point and power added efficiency are improved slightly.", "title": "" }, { "docid": "c2fb00bf50e19c3a6ce39b65d7e940ae", "text": "IEEE- and ASTM-adopted dedicated short-range communications (DSRC) standards are key enabling technologies for the next generation of vehicular safety communications. Vehicle-safety-related communication services, which require reliable and fast message delivery, usually demand broadcast communications in vehicular ad hoc networks (VANETs). In this paper, we propose and justify a distributive cross-layer scheme for the design of the control channel in DSRC with three levels of broadcast services that are critical to most potential vehicle-safety-related applications. The new scheme for enhancing broadcast reliability includes preemptive priority in safety services, dynamic receiver-oriented packet repetitions for one-hop emergency warning message dissemination, a multifrequency busy tone and minislot within the distributed interframe space (DIFS) in IEEE 802.11, and robust distance-based relay selection for multihop broadcast of emergency notification messages. Compared with a current draft of IEEE 802.11p and other schemes for DSRC safety-related services, the scheme proposed in this paper is more robust and scalable and easy to implement. Additionally, we investigate the reliability and performance of the proposed broadcast scheme for DSRC VANET safety-related services on the highway analytically and by simulations. The analytic model accounts for the impact of the hidden terminal problem, the fading channel conditions, varied message arrival intervals, and the backoff counter process on reliability and performance.", "title": "" }, { "docid": "6784e31e2ec313698a622a7e78288f68", "text": "Web-based technology is often the technology of choice for distance education given the ease of use of the tools to browse the resources on the Web, the relative affordability of accessing the ubiquitous Web, and the simplicity of deploying and maintaining resources on the WorldWide Web. Many sophisticated web-based learning environments have been developed and are in use around the world. The same technology is being used for electronic commerce and has become extremely popular. 
However, while there are clever tools developed to understand on-line customer’s behaviours in order to increase sales and profit, there is very little done to automatically discover access patterns to understand learners’ behaviour on web-based distance learning. Educators, using on-line learning environments and tools, have very little support to evaluate learners’ activities and discriminate between different learners’ on-line behaviours. In this paper, we discuss some data mining and machine learning techniques that could be used to enhance web-based learning environments for the educator to better evaluate the leaning process, as well as for the learners to help them in their learning endeavour.", "title": "" }, { "docid": "9ddc79e1693deb9cdcdc15c139c10d4e", "text": "The goal of this work is to develop a computation-based account of higher-dimensional type theory for which canonicity at observable types is true by construction. Types are considered as descriptions of the computational behavior of terms, rather than as formal syntax to which meaning is attached separately. Types are structured as collections of terms of each finite dimension. At dimension zero the terms of a type are its ordinary members; at higher dimension terms are lines between terms of the next lower dimension. The terms of each dimension satisfy coherence conditions ensuring that the terms may be seen as abstract cubes. Each line is to be interpreted as an identification of two cubes in that it provides evidence for their exchangeability in all contexts. It is required that there be sufficiently many lines that this interpretation is tenable. For example, lines must be reversible and closed under concatenation, so that the identifications present the structure of a pre-groupoid. Moreover, there must be further lines witnessing the unit, inverse, and associativity laws of concatention, the structure of an ∞-groupoid. In this paper we give a “meaning explanation” of a computational higher type theory in the style of Martin-Löf and of Constable and Allen, et al. [Martin-Löf, 1984; Martin-Löf, 1984; Constable, et al., 1985; Allen et al., 2006]. Such an explanation starts with a dimension-stratified collection of terms endowed with a deterministic operational semantics defining what it means to evaluate closed terms of any dimension to canonical form. The dimension of a term is the finite set of dimension names it contains; these dimension names may be thought of as variables ranging over an abstract interval, in which case terms may be thought of as tracing out lines in a type. The end points, 0 and 1, of the interval may be substituted to obtain the end points of such lines. Dimension names may be substituted for one another without restriction, allowing dimensions to be renamed, identified, or duplicated. The semantics of types is given by specifying, at each dimension, when canonical elements are equal, when general elements are equal, and when these", "title": "" }, { "docid": "4b8cd508689eb4cfe4423bf1b30bce3e", "text": "A two-dimensional (2D) periodic leaky-wave antenna consisting of a periodic distribution of rectangular patches on a grounded dielectric substrate, excited by a narrow slot in the ground plane, is studied here. The TM0 surface wave that is normally supported by a grounded dielectric substrate is perturbed by the presence of the periodic patches to produce radially-propagating leaky waves. 
In addition to making a novel microwave antenna structure, this design is motivated by the phenomena of directive beaming and enhanced transmission observed in plasmonic structures in the optical regime.", "title": "" }, { "docid": "4c165c15a3c6f069f702a54d0dab093c", "text": "We propose a simple method for improving the security of hashed passwords: the maintenance of additional ``honeywords'' (false passwords) associated with each user's account. An adversary who steals a file of hashed passwords and inverts the hash function cannot tell if he has found the password or a honeyword. The attempted use of a honeyword for login sets off an alarm. An auxiliary server (the ``honeychecker'') can distinguish the user password from honeywords for the login routine, and will set off an alarm if a honeyword is submitted.", "title": "" }, { "docid": "0a679a92522ee47286e00fde142913d8", "text": "As participants in Fake News Challenge 1 (FNC-1), we approach the problem of fake news via stance detection. Given an article as ”ground truth”, we attempt to classify whether a headline discusses, agrees, disagrees, or is unrelated to a given article. In this paper, we first leverage an SVM trained on TF-IDF cosine similarity features to discern whether a headline-article pairing is related or unrelated. If we classify the pairing as the former, we then employ various neural network architectures built on top of Long-Short-Term-Memory Models (LSTMs) to label the pairing as agree, disagree, or discuss. Ultimately, our best performing neural network architecture proved to be a pair of Bidirectional Conditionally Encoded LSTMs with Bidirectional Global Attention. Using our linear SVM for the unrelated/related subproblem and our best neural network for the agree/disagree/discuss subproblem, we scored .8658 according to the FNC-1’s performance metric.", "title": "" } ]
scidocsrr
dee135fac565818d821fc267fc7485d5
Towards QoS-Oriented SLA Guarantees for Online Cloud Services
[ { "docid": "c0ba7119eaf77c6815f43ff329457e5e", "text": "In Utility Computing business model, the owners of the computing resources negotiate with their potential clients to sell computing power. The terms of the Quality of Service (QoS) and the economic conditions are established in a Service-Level Agreement (SLA). There are many scenarios in which the agreed QoS cannot be provided because of errors in the service provisioning or failures in the system. Since providers have usually different types of clients, according to their relationship with the provider or by the fee that they pay, it is important to minimize the impact of the SLA violations in preferential clients. This paper proposes a set of policies to provide better QoS to preferential clients in such situations. The criterion to classify clients is established according to the relationship between client and provider (external user, internal or another privileged relationship) and the QoS that the client purchases (cheap contracts or extra QoS by paying an extra fee). Most of the policies use key features of virtualization: Selective Violation of the SLAs, Dynamic Scaling of the Allocated Resources, and Runtime Migration of Tasks. The validity of the policies is demonstrated through exhaustive experiments.", "title": "" } ]
[ { "docid": "4a572df21f3a8ebe3437204471a1fd10", "text": "Whilst studies on emotion recognition show that genderdependent analysis can improve emotion classification performance, the potential differences in the manifestation of depression between male and female speech have yet to be fully explored. This paper presents a qualitative analysis of phonetically aligned acoustic features to highlight differences in the manifestation of depression. Gender-dependent analysis with phonetically aligned gender-dependent features are used for speech-based depression recognition. The presented experimental study reveals gender differences in the effect of depression on vowel-level features. Considering the experimental study, we also show that a small set of knowledge-driven gender-dependent vowel-level features can outperform state-of-the-art turn-level acoustic features when performing a binary depressed speech recognition task. A combination of these preselected gender-dependent vowel-level features with turn-level standardised openSMILE features results in additional improvement for depression recognition.", "title": "" }, { "docid": "d35515299b37b5eb936986d33aca66e1", "text": "This paper describes an Ada framework called Cheddar which provides tools to check if a real time application meets its temporal constraints. The framework is based on the real time scheduling theory and is mostly written for educational purposes. With Cheddar, an application is defined by a set of processors, tasks, buffers, shared resources and messages. Cheddar provides feasibility tests in the cases of monoprocessor, multiprocessor and distributed systems. It also provides a flexible simulation engine which allows the designer to describe and run simulations of specific systems. The framework is open and has been designed to be easily connected to CASE tools such as editors, design tools, simulators, ...", "title": "" }, { "docid": "0ea98e6c60a64a0d5ffdb669da598dfd", "text": "A wideband multiple-input-multiple-output (MIMO) antenna system with common elements suitable for WiFi/2.4 GHz and Long Term Evolution (LTE)/2.6 GHz wireless access point (WAP) applications is presented. The proposed MIMO antenna system consists of four wideband microstrip feedline printed monopole antennas with common radiating element and a ring-shaped ground plane. The radiator of the MIMO antenna system is designed as the shape of a modified rectangle with a four-stepped line at the corners to enhance the impedance bandwidth. According to the common elements structure of the MIMO antenna system, isolation between the antennas (ports) can be challenging. Therefore, the ground plane is modified by introducing four slots in each corner to reduce the mutual coupling. For an antenna efficiency of more than 60%, the measured impedance bandwidth for reflection coefficients below -10 dB was observed to be 1100 MHz from 1.8 to 2.9 GHz. Measured isolation is achieved greater than 15 dB by using a modified ground plane. Also, a low envelope correlation coefficient (ECC) less than 0.1 and polarization diversity gain of about 10 dB with the orthogonal mode of linear polarization and quasi-omnidirectional pattern during the analysis of radiation characteristic are achieved. 
Therefore, the proposed design is a good candidate for indoor WiFi and LTE WAP applications due to the obtained results.", "title": "" }, { "docid": "a38cf37fc60e1322e391680037ff6d4e", "text": "Robot-aided gait training is an emerging clinical tool for gait rehabilitation of neurological patients. This paper deals with a novel method of offering gait assistance, using an impedance controlled exoskeleton (LOPES). The provided assistance is based on a recent finding that, in the control of walking, different modules can be discerned that are associated with different subtasks. In this study, a Virtual Model Controller (VMC) for supporting one of these subtasks, namely the foot clearance, is presented and evaluated. The developed VMC provides virtual support at the ankle, to increase foot clearance. Therefore, we first developed a new method to derive reference trajectories of the ankle position. These trajectories consist of splines between key events, which are dependent on walking speed and body height. Subsequently, the VMC was evaluated in twelve healthy subjects and six chronic stroke survivors. The impedance levels, of the support, were altered between trials to investigate whether the controller allowed gradual and selective support. Additionally, an adaptive algorithm was tested, that automatically shaped the amount of support to the subjects’ needs. Catch trials were introduced to determine whether the subjects tended to rely on the support. We also assessed the additional value of providing visual feedback. With the VMC, the step height could be selectively and gradually influenced. The adaptive algorithm clearly shaped the support level to the specific needs of every stroke survivor. The provided support did not result in reliance on the support for both groups. All healthy subjects and most patients were able to utilize the visual feedback to increase their active participation. The presented approach can provide selective control on one of the essential subtasks of walking. This module is the first in a set of modules to control all subtasks. This enables the therapist to focus the support on the subtasks that are impaired, and leave the other subtasks up to the patient, encouraging him to participate more actively in the training. Additionally, the speed-dependent reference patterns provide the therapist with the tools to easily adapt the treadmill speed to the capabilities and progress of the patient.", "title": "" }, { "docid": "b42037d4a491c9fb9cd756d11411d95b", "text": "Control of Induction Motor (IM) is well known to be difficult owing to the fact the mathematical models of IM are highly nonlinear and time variant. The advent of vector control techniques has solved induction motor control problems. The most commonly used controller for the speed control of induction motor is traditional Proportional plus Integral (PI) controller. However, the conventional PI controller has some demerits such as: the high starting overshoot in speed, sensitivity to controller gains and sluggish response due to sudden change in load torque. To overcome these problems, replacement of PI controller by Integral plus Proportional (IP) controller is proposed in this paper. The goal is to determine which control strategy delivers better performance with respect to induction motor’s speed. Performance of these controllers has been verified through simulation using MATLAB/SIMULINK software package for different operating conditions. 
According to the simulation results, IP controller creates better performance in terms of overshoot, settling time, and steady state error compared to conventional PI controller. This shows the superiority of IP controller over conventional PI controller.", "title": "" }, { "docid": "ab4e2ab6b206fece59f40945c82d5cd7", "text": "Knowledge distillation is effective to train small and generalisable network models for meeting the low-memory and fast running requirements. Existing offline distillation methods rely on a strong pre-trained teacher, which enables favourable knowledge discovery and transfer but requires a complex two-phase training procedure. Online counterparts address this limitation at the price of lacking a highcapacity teacher. In this work, we present an On-the-fly Native Ensemble (ONE) learning strategy for one-stage online distillation. Specifically, ONE trains only a single multi-branch network while simultaneously establishing a strong teacher onthe-fly to enhance the learning of target network. Extensive evaluations show that ONE improves the generalisation performance a variety of deep neural networks more significantly than alternative methods on four image classification dataset: CIFAR10, CIFAR100, SVHN, and ImageNet, whilst having the computational efficiency advantages.", "title": "" }, { "docid": "873c2e7774791417d6cb4f5904cde74c", "text": "This article discusses empirical findings and conceptual elaborations of the last 10 years in strategic niche management research (SNM). The SNM approach suggests that sustainable innovation journeys can be facilitated by creating technological niches, i.e. protected spaces that allow the experimentation with the co-evolution of technology, user practices, and regulatory structures. The assumption was that if such niches were constructed appropriately, they would act as building blocks for broader societal changes towards sustainable development. The article shows how concepts and ideas have evolved over time and new complexities were introduced. Research focused on the role of various niche-internal processes such as learning, networking, visioning and the relationship between local projects and global rule sets that guide actor behaviour. The empirical findings showed that the analysis of these niche-internal dimensions needed to be complemented with attention to niche external processes. In this respect, the multi-level perspective proved useful for contextualising SNM. This contextualisation led to modifications in claims about the dynamics of sustainable innovation journeys. Niches are to be perceived as crucial for bringing about regime shifts, but they cannot do this on their own. Linkages with ongoing external processes are also important. Although substantial insights have been gained, the SNM approach is still an unfinished research programme. We identify various promising research directions, as well as policy implications.", "title": "" }, { "docid": "ffb87dc7922fd1a3d2a132c923eff57d", "text": "It has been suggested that pulmonary artery pressure at the end of ejection is close to mean pulmonary artery pressure, thus contributing to the optimization of external power from the right ventricle. We tested the hypothesis that dicrotic notch and mean pulmonary artery pressures could be of similar magnitude in 15 men (50 +/- 12 yr) referred to our laboratory for diagnostic right and left heart catheterization. 
Beat-to-beat relationships between dicrotic notch and mean pulmonary artery pressures were studied 1) at rest over 10 consecutive beats and 2) in 5 patients during the Valsalva maneuver (178 beats studied). At rest, there was no difference between dicrotic notch and mean pulmonary artery pressures (21.8 +/- 12.0 vs. 21.9 +/- 11.1 mmHg). There was a strong linear relationship between dicrotic notch and mean pressures 1) over the 10 consecutive beats studied in each patient (mean r = 0.93), 2) over the 150 resting beats (r = 0.99), and 3) during the Valsalva maneuver in each patient (r = 0.98-0.99) and in the overall beats (r = 0.99). The difference between dicrotic notch and mean pressures was -0.1 +/- 1.7 mmHg at rest and -1.5 +/- 2.3 mmHg during the Valsalva maneuver. Substitution of the mean pulmonary artery pressure by the dicrotic notch pressure in the standard formula of the pulmonary vascular resistance (PVR) resulted in an equation relating linearly end-systolic pressure and stroke volume. The slope of this relation had the dimension of a volume elastance (in mmHg/ml), a simple estimate of volume elastance being obtained as 1.06(PVR/T), where T is duration of the cardiac cycle. In conclusion, dicrotic notch pressure was of similar magnitude as mean pulmonary artery pressure. These results confirmed our primary hypothesis and indicated that human pulmonary artery can be treated as if it is an elastic chamber with a volume elastance of 1.06(PVR/T).", "title": "" }, { "docid": "6e73ea43f02dc41b96e5d46bafe3541d", "text": "Learning discriminative representations for unseen person images is critical for person re-identification (ReID). Most of the current approaches learn deep representations in classification tasks, which essentially minimize the empirical classification risk on the training set. As shown in our experiments, such representations easily get over-fitted on a discriminative human body part on the training set. To gain the discriminative power on unseen person images, we propose a deep representation learning procedure named part loss network, to minimize both the empirical classification risk on training person images and the representation learning risk on unseen person images. The representation learning risk is evaluated by the proposed part loss, which automatically detects human body parts and computes the person classification loss on each part separately. Compared with traditional global classification loss, simultaneously considering part loss enforces the deep network to learn representations for different body parts and gain the discriminative power on unseen persons. Experimental results on three person ReID datasets, i.e., Market1501, CUHK03, and VIPeR, show that our representation outperforms existing deep representations.", "title": "" }, { "docid": "ffbab4b090448de06ff5237d43c5e293", "text": "Motivated by a project to create a system for people who are deaf or hard-of-hearing that would use automatic speech recognition (ASR) to produce real-time text captions of spoken English during in-person meetings with hearing individuals, we have augmented a transcript of the Switchboard conversational dialogue corpus with an overlay of word-importance annotations, with a numeric score for each word, to indicate its importance to the meaning of each dialogue turn. 
Further, we demonstrate the utility of this corpus by training an automatic word importance labeling model; our best performing model has an F-score of 0.60 in an ordinal 6-class word-importance classification task with an agreement (concordance correlation coefficient) of 0.839 with the human annotators (agreement score between annotators is 0.89). Finally, we discuss our intended future applications of this resource, particularly for the task of evaluating ASR performance, i.e. creating metrics that predict ASR-output caption text usability for DHH users better than Word Error Rate (WER).", "title": "" }, { "docid": "47afea1e95f86bb44a1cf11e020828fc", "text": "Document clustering is generally the first step for topic identification. Since many clustering methods operate on the similarities between documents, it is important to build representations of these documents which keep their semantics as much as possible and are also suitable for efficient similarity calculation. As we describe in Koopman et al. (Proceedings of ISSI 2015 Istanbul: 15th International Society of Scientometrics and Informetrics Conference, Istanbul, Turkey, 29 June to 3 July, 2015. Bogaziçi University Printhouse. http://www.issi2015.org/files/downloads/all-papers/1042.pdf , 2015), the metadata of articles in the Astro dataset contribute to a semantic matrix, which uses a vector space to capture the semantics of entities derived from these articles and consequently supports the contextual exploration of these entities in LittleAriadne. However, this semantic matrix does not allow to calculate similarities between articles directly. In this paper, we will describe in detail how we build a semantic representation for an article from the entities that are associated with it. Base on such semantic representations of articles, we apply two standard clustering methods, K-Means and the Louvain community detection algorithm, which leads to our two clustering solutions labelled as OCLC-31 (standing for K-Means) and OCLC-Louvain (standing for Louvain). In this paper, we will give the implementation details and a basic comparison with other clustering solutions that are reported in this special issue.", "title": "" }, { "docid": "e6cc803406516eaec8b9cf66201cad45", "text": "This paper draws together theories from organisational and neo-institutional literatures to address the evolution of supply chain contracts. Using a longitudinal case study of the Norwegian State Railways, we examine how firms move through the stages in an inter-organisational process of supply chain contract evolution and how they can cooperate to ensure efficiency and equity in their contractual relationship. The findings suggest that inefficient and inequitable initial contracts can occur in part, because of the cognitive shortcomings in human decision-making processes that reveal themselves early in the arrangement before learning and trust building can accumulate. We then reveal how parties can renegotiate towards a more equitable and efficient supply chain contract.", "title": "" }, { "docid": "6140255e69aa292bf8c97c9ef200def7", "text": "Food production requires application of fertilizers containing phosphorus, nitrogen and potassium on agricultural fields in order to sustain crop yields. However modern agriculture is dependent on phosphorus derived from phosphate rock, which is a non-renewable resource and current global reserves may be depleted in 50–100 years. 
While phosphorus demand is projected to increase, the expected global peak in phosphorus production is predicted to occur around 2030. The exact timing of peak phosphorus production might be disputed, however it is widely acknowledged within the fertilizer industry that the quality of remaining phosphate rock is decreasing and production costs are increasing. Yet future access to phosphorus receives little or no international attention. This paper puts forward the case for including long-term phosphorus scarcity on the priority agenda for global food security. Opportunities for recovering phosphorus and reducing demand are also addressed together with institutional challenges. 2009 Published by Elsevier Ltd.", "title": "" }, { "docid": "12a5fb7867cddaca43c3508b0c1a1ed2", "text": "The class scheduling problem can be modeled by a graph where the vertices and edges represent the courses and the common students, respectively. The problem is to assign the courses a given number of time slots (colors), where each time slot can be used for a given number of class rooms. The Vertex Coloring (VC) algorithm is a polynomial time algorithm which produces a conflict free solution using the least number of colors [9]. However, the VC solution may not be implementable because it uses a number of time slots that exceed the available ones with unbalanced use of class rooms. We propose a heuristic approach VC* to (1) promote uniform distribution of courses over the colors and to (2) balance course load for each time slot over the available class rooms. The performance function represents the percentage of students in all courses that could not be mapped to time slots or to class rooms. A randomized simulation of registration of four departments with up to 1200 students is used to evaluate the performance of proposed heuristic.", "title": "" }, { "docid": "9f76ca13fd4e61905f82a1009982adb9", "text": "Image segmentation is an important processing step in many image, video and computer vision applications. Extensive research has been done in creating many different approaches and algorithms for image segmentation, but it is still difficult to assess whether one algorithm produces more accurate segmentations than another, whether it be for a particular image or set of images, or more generally, for a whole class of images. To date, the most common method for evaluating the effectiveness of a segmentation method is subjective evaluation, in which a human visually compares the image segmentation results for separate segmentation algorithms, which is a tedious process and inherently limits the depth of evaluation to a relatively small number of segmentation comparisons over a predetermined set of images. Another common evaluation alternative is supervised evaluation, in which a segmented image is compared against a manuallysegmented or pre-processed reference image. Evaluation methods that require user assistance, such as subjective evaluation and supervised evaluation, are infeasible in many vision applications, so unsupervised methods are necessary. Unsupervised evaluation enables the objective comparison of both different segmentation methods and different parameterizations of a single method, without requiring human visual comparisons or comparison with a manually-segmented or pre-processed reference image. Additionally, unsupervised methods generate results for individual images and images whose characteristics may not be known until evaluation time. 
Unsupervised methods are crucial to real-time segmentation evaluation, and can furthermore enable self-tuning of algorithm parameters based on evaluation results. In this paper, we examine the unsupervised objective evaluation methods that have been proposed in the literature. An extensive evaluation of these methods are presented. The advantages and shortcomings of the underlying design mechanisms in these methods are discussed and analyzed through analytical evaluation and empirical evaluation. Finally, possible future directions for research in unsupervised evaluation are proposed. 2007 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "a7708e8af4ece273666478ebfdecc6bd", "text": "Event summarization based on crowdsourced microblog data is a promising research area, and several researchers have recently focused on this field. However, these previous works fail to characterize the fine-grained evolution of an event and the rich correlations among posts. The semantic associations among the multi-modal data in posts are also not investigated as a means to enhance the summarization performance. To address these issues, this study presents CrowdStory, which aims to characterize an event as a fine-grained, evolutionary, and correlation-rich storyline. A crowd-powered event model and a generic event storyline generation framework are first proposed, based on which a multi-clue--based approach to fine-grained event summarization is presented. The implicit human intelligence (HI) extracted from visual contents and community interactions is then used to identify inter-clue associations. Finally, a cross-media mining approach to selective visual story presentation is proposed. The experiment results indicate that, compared with the state-of-the-art methods, CrowdStory enables fine-grained event summarization (e.g., dynamic evolution) and correctly identifies up to 60% strong correlations (e.g., causality) of clues. The cross-media approach shows diversity and relevancy in visual data selection.", "title": "" }, { "docid": "77af12d87cd5827f35d92968d1888162", "text": "Many image-to-image translation problems are ambiguous, as a single input image may correspond to multiple possible outputs. In this work, we aim to model a distribution of possible outputs in a conditional generative modeling setting. The ambiguity of the mapping is distilled in a low-dimensional latent vector, which can be randomly sampled at test time. A generator learns to map the given input, combined with this latent code, to the output. We explicitly encourage the connection between output and the latent code to be invertible. This helps prevent a many-to-one mapping from the latent code to the output during training, also known as the problem of mode collapse, and produces more diverse results. We explore several variants of this approach by employing different training objectives, network architectures, and methods of injecting the latent code. Our proposed method encourages bijective consistency between the latent encoding and output modes. We present a systematic comparison of our method and other variants on both perceptual realism and diversity.", "title": "" }, { "docid": "502cae1daa2459ed0f826ed3e20c44e4", "text": "Recurrent neural networks (RNNs) have drawn interest from machine learning researchers because of their effectiveness at preserving past inputs for time-varying data processing tasks. 
To understand the success and limitations of RNNs, it is critical that we advance our analysis of their fundamental memory properties. We focus on echo state networks (ESNs), which are RNNs with simple memoryless nodes and random connectivity. In most existing analyses, the short-term memory (STM) capacity results conclude that the ESN network size must scale linearly with the input size for unstructured inputs. The main contribution of this paper is to provide general results characterizing the STM capacity for linear ESNs with multidimensional input streams when the inputs have common low-dimensional structure: sparsity in a basis or significant statistical dependence between inputs. In both cases, we show that the number of nodes in the network must scale linearly with the information rate and poly-logarithmically with the input dimension. The analysis relies on advanced applications of random matrix theory and results in explicit non-asymptotic bounds on the recovery error. Taken together, this analysis provides a significant step forward in our understanding of the STM properties in RNNs.", "title": "" }, { "docid": "3e97e8be1ab2f2a056fdccbcd350f522", "text": "Backchannel responses like “uh-huh”, “yeah”, “right” are used by the listener in a social dialog as a way to provide feedback to the speaker. In the context of human-computer interaction, these responses can be used by an artificial agent to build rapport in conversations with users. In the past, multiple approaches have been proposed to detect backchannel cues and to predict the most natural timing to place those backchannel utterances. Most of these are based on manually optimized fixed rules, which may fail to generalize. Many systems rely on the location and duration of pauses and pitch slopes of specific lengths. In the past, we proposed an approach by training artificial neural networks on acoustic features such as pitch and power and also attempted to add word embeddings via word2vec. In this work, we refined this approach by evaluating different methods to add timed word embeddings via word2vec. Comparing the performance using various feature combinations, we could show that adding linguistic features improves the performance over a prediction system that only uses acoustic features.", "title": "" } ]
scidocsrr
cca8e1229bd785d28d72ca397d8091dd
3D Object Detection with a Deformable 3D Cuboid Model
[ { "docid": "cc4c58f1bd6e5eb49044353b2ecfb317", "text": "Today, visual recognition systems are still rarely employed in robotics applications. Perhaps one of the main reasons for this is the lack of demanding benchmarks that mimic such scenarios. In this paper, we take advantage of our autonomous driving platform to develop novel challenging benchmarks for the tasks of stereo, optical flow, visual odometry/SLAM and 3D object detection. Our recording platform is equipped with four high resolution video cameras, a Velodyne laser scanner and a state-of-the-art localization system. Our benchmarks comprise 389 stereo and optical flow image pairs, stereo visual odometry sequences of 39.2 km length, and more than 200k 3D object annotations captured in cluttered scenarios (up to 15 cars and 30 pedestrians are visible per image). Results from state-of-the-art algorithms reveal that methods ranking high on established datasets such as Middlebury perform below average when being moved outside the laboratory to the real world. Our goal is to reduce this bias by providing challenging benchmarks with novel difficulties to the computer vision community. Our benchmarks are available online at: www.cvlibs.net/datasets/kitti.", "title": "" }, { "docid": "89297a4aef0d3251e8d947ccc2acacc7", "text": "We propose a novel probabilistic framework for learning visual models of 3D object categories by combining appearance information and geometric constraints. Objects are represented as a coherent ensemble of parts that are consistent under 3D viewpoint transformations. Each part is a collection of salient image features. A generative framework is used for learning a model that captures the relative position of parts within each of the discretized viewpoints. Contrary to most of the existing mixture of viewpoints models, our model establishes explicit correspondences of parts across different viewpoints of the object class. Given a new image, detection and classification are achieved by determining the position and viewpoint of the model that maximize recognition scores of the candidate objects. Our approach is among the first to propose a generative probabilistic framework for 3D object categorization. We test our algorithm on the detection task and the viewpoint classification task by using “car” category from both the Savarese et al. 2007 and PASCAL VOC 2006 datasets. We show promising results in both the detection and viewpoint classification tasks on these two challenging datasets.", "title": "" } ]
[ { "docid": "ebfdbe749b267f09edc11d60c2b0772a", "text": "Power System State Estimation—Theory and Implementations by Ali Abur and Antonio Gómez Expósito, Marcel Dekker, Inc., 2004. ISBN: 0-8247-5570-7. This is a comprehensive book on state estimation intended for students and practitioners in power engineering. It is also of high interest for applied mathematicians working in statistics and operations research. Easy-to-read and enjoyable in most chapters, the book is well structured and most theoretical developments are illustrated with insightful examples. State-of-the-art developments as well as historical references are provided. Certainly, this book constitutes a fundamental addition to the state estimation literature by two well-known experts on the subject. No such comprehensive, detailed, and readable book was previously available. The book assumes basic knowledge of power system analysis, calculus, algebra, and statistics. Chapter 1 provides an insightful overview of the security framework where the state estimation tool is a key requirement. The energy management system of a modern electric energy system is briefly described and the need for state estimation naturally established. Chapter 2 is devoted to the formulation and the basic solution technique for the state estimation problem, i.e., the problem of finding the most likely values for voltage magnitudes and angles (system state) throughout a power network, given an appropriate set of measurements. This set of measurements typically includes voltage magnitudes and active and reactive power injections in nodes and flows in lines. The state is normally estimated by minimizing measurement errors in a least squares sense and weighting measurements according to quality; hence, the denomination weighted least squares algorithm. First, component modeling is reviewed and measurement assumptions are established. Then, technicalities pertaining to the efficient solution of the resulting system of nonlinear equations, denominated normal equations, are orderly considered and include: characterization of the measurement Jacobian, the gain matrix, Cholesky factorization, and the efficient solution of sparse linear systems of equation. The chapter concludes with the decoupled formulation of the weighted least squares algorithm and the dc formulation. Chapter 3 addresses alternative weighted least squares formulations to the state estimation problem. Those alternative formulations are justified for the potential problems encountered when solving the normal equations. These alternative methods either consider a different factorization procedure to solve the normal equations or propose a modified formulation. Alternative factorizations include: orthogonal factorization, hybrid Choleskyorthogonal factorization, and others. Particularly relevant among the modified formulations is the equality constrained weighted least squares model, which implicitly includes equality constraints for all or a subset of measurements. Variations of this method are discussed. The chapter closes with a discussion on the advantages and disadvantages of the different formulations. Chapter 4 is devoted to observability; i.e., the problem of identifying if the set of available measurements is enough to estimate the state of the system. Note that observability is related to not only the number of measurements but also to their types and locations. Two numerical techniques are provided, one based on branch variables and the other on nodal variables. 
Procedures to identify observable islands are provided if the system under analysis is not observable.", "title": "" }, { "docid": "87b969368332c8f1ad4ddeb4c98c1867", "text": "A comprehensive understanding of individual customer value is crucial to any successful customer relationship management strategy. It is also the key to building products for long-term value returns. Modeling customer lifetime value (CLTV) can be fraught with technical difficulties, however, due to both the noisy nature of user-level behavior and the potentially large customer base. Here we describe a new CLTV system that solves these problems. This was built at Groupon, a large global e-commerce company, where confronting the unique challenges of local commerce means quickly iterating on new products and the optimal inventory to appeal to a wide and diverse audience. Given current purchaser frequency we need a faster way to determine the health of individual customers, and given finite resources we need to know where to focus our energy. Our CLTV system predicts future value on an individual user basis with a random forest model which includes features that account for nearly all aspects of each customer's relationship with our platform. This feature set includes those quantifying engagement via email and our mobile app, which give us the ability to predict changes in value far more quickly than models based solely on purchase behavior. We further model different customer types, such as one-time buyers and power users, separately so as to allow for different feature weights and to enhance the interpretability of our results. Additionally, we developed an economical scoring framework wherein we re-score a user when any trigger events occur and apply a decay function otherwise, to enable frequent scoring of a large customer base with a complex model. This system is deployed, predicting the value of hundreds of millions of users on a daily cadence, and is actively being used across our products and business initiatives.", "title": "" }, { "docid": "d80e354533a2ee6e472904eae4fc8ffe", "text": "BACKGROUND\nKeloid and hypertrophic scars represent an aberrant response to the wound healing process. These scars are characterized by dysregulated growth with excessive collagen formation, and can be cosmetically and functionally disruptive to patients.\n\n\nOBJECTIVE\nObjectives are to describe the pathophysiology of keloid and hypertrophic scar, and to compare differences with the normal wound healing process. The classification of keloids and hypertrophic scars is then discussed. Finally, various treatment options including prevention, conventional therapies, surgical therapies, and adjuvant therapies are described in detail.\n\n\nMATERIALS AND METHODS\nLiterature review was performed identifying relevant publications pertaining to the pathophysiology, classification, and treatment of keloid and hypertrophic scars.\n\n\nRESULTS\nThough the pathophysiology of keloid and hypertrophic scars is not completely known, various cytokines have been implicated, including interleukin (IL)-6, IL-8, and IL-10, as well as various growth factors including transforming growth factor-beta and platelet-derived growth factor. 
Numerous treatments have been studied for keloid and hypertrophic scars,which include conventional therapies such as occlusive dressings, compression therapy, and steroids; surgical therapies such as excision and cryosurgery; and adjuvant and emerging therapies including radiation therapy, interferon, 5-fluorouracil, imiquimod, tacrolimus, sirolimus, bleomycin, doxorubicin, transforming growth factor-beta, epidermal growth factor, verapamil, retinoic acid, tamoxifen, botulinum toxin A, onion extract, silicone-based camouflage, hydrogel scaffold, and skin tension offloading device.\n\n\nCONCLUSION\nKeloid and hypertrophic scars remain a challenging condition, with potential cosmetic and functional consequences to patients. Several therapies exist which function through different mechanisms. Better understanding into the pathogenesis will allow for development of newer and more targeted therapies in the future.", "title": "" }, { "docid": "b986dfc42547b64dd2ed0f86cd4e203d", "text": "A deep learning approach to reinforcement learning led to a general learner able to train on visual input to play a variety of arcade games at the human and superhuman levels. Its creators at the Google DeepMind’s team called the approach: Deep Q-Network (DQN). We present an extension of DQN by “soft” and “hard” attention mechanisms. Tests of the proposed Deep Attention Recurrent Q-Network (DARQN) algorithm on multiple Atari 2600 games show level of performance superior to that of DQN. Moreover, built-in attention mechanisms allow a direct online monitoring of the training process by highlighting the regions of the game screen the agent is focusing on when making decisions.", "title": "" }, { "docid": "9f37aaf96b8c56f0397b63a7b53776ec", "text": "The Histogram of Oriented Gradient (HOG) descriptor has led to many advances in computer vision over the last decade and is still part of many state of the art approaches. We realize that the associated feature computation is piecewise differentiable and therefore many pipelines which build on HOG can be made differentiable. This lends to advanced introspection as well as opportunities for end-to-end optimization. We present our implementation of ΔHOG based on the auto-differentiation toolbox Chumpy [18] and show applications to pre-image visualization and pose estimation which extends the existing differentiable renderer OpenDR [19] pipeline. Both applications improve on the respective state-of-the-art HOG approaches.", "title": "" }, { "docid": "fe194d00c129e05f17e7926d15f37c37", "text": "Synthesis, simulation and experiment of unequally spaced resonant slotted-waveguide antenna arrays based on the infinite wavelength propagation property of composite right/left-handed (CRLH) waveguide has been demonstrated in this paper. Both the slot element spacing and excitation amplitude of the antenna array can be adjusted to tailor the radiation pattern. A specially designed shorted CRLH waveguide, as the feed structure of the antenna array, is to work at the infinite wavelength propagation frequency. This ensures that all unequally spaced slot elements along the shorted CRLH waveguide wall can be excited either inphase or antiphase. Four different unequally spaced resonant slotted-waveguide antenna arrays are designed to form pencil, flat-topped and difference beam patterns. 
Through the synthesis, simulation and experiment, it proves that the proposed arrays are able to exhibit better radiation performances than conventional resonant slotted-waveguide antenna arrays.", "title": "" }, { "docid": "30596d0edee0553117c5109eb948e1b6", "text": "Spatial relationships between objects provide important information for text-based image retrieval. As users are more likely to describe a scene from a real world perspective, using 3D spatial relationships rather than 2D relationships that assume a particular viewing direction, one of the main challenges is to infer the 3D structure that bridges images with users text descriptions. However, direct inference of 3D structure from images requires learning from large scale annotated data. Since interactions between objects can be reduced to a limited set of atomic spatial relations in 3D, we study the possibility of inferring 3D structure from a text description rather than an image, applying physical relation models to synthesize holistic 3D abstract object layouts satisfying the spatial constraints present in a textual description. We present a generic framework for retrieving images from a textual description of a scene by matching images with these generated abstract object layouts. Images are ranked by matching object detection outputs (bounding boxes) to 2D layout candidates (also represented by bounding boxes) which are obtained by projecting the 3D scenes with sampled camera directions. We validate our approach using public indoor scene datasets and show that our method outperforms baselines built upon object occurrence histograms and learned 2D pairwise relations.", "title": "" }, { "docid": "5531b3b4269018903e74680aaf9a5d39", "text": "Voting is a fundamental part of democratic systems; it gives individuals in a community the faculty to voice their opinion. In recent years, voter turnout has diminished while concerns regarding integrity, security, and accessibility of current voting systems have escalated. E-voting was introduced to address those concerns; however, it is not cost-effective and still requires full supervision by a central authority. The blockchain is an emerging, decentralized, and distributed technology that promises to enhance different aspects of many industries. Expanding e-voting into blockchain technology could be the solution to alleviate the present concerns in e-voting. In this paper, we propose a blockchain-based voting system, named BroncoVote, that preserves voter privacy and increases accessibility, while keeping the voting system transparent, secure, and cost-effective. BroncoVote implements a university-scaled voting framework that utilizes Ethereum’s blockchain and smart contracts to achieve voter administration and auditable voting records. In addition, BroncoVote utilizes a few cryptographic techniques, including homomorphic encryption, to promote voter privacy. Our implementation was deployed on Ethereum’s Testnet to demonstrate usability, scalability, and efficiency.", "title": "" }, { "docid": "53ae229e708297bf73cf3a33b32e42da", "text": "Signal-dependent phase variation, AM/PM, along with amplitude variation, AM/AM, are known to determine nonlinear distortion characteristics of current-mode PAs. However, these distortion effects have been treated separately, putting more weight on the amplitude distortion, while the AM/PM generation mechanisms are yet to be fully understood. 
Hence, the aim of this work is to present a large-signal physical model that can describe both the AM/AM and AM/PM PA nonlinear distortion characteristics and their internal relationship.", "title": "" }, { "docid": "59d49392710d5fe66b9bc4e4ec311025", "text": "A low profile slot antenna array is designed to operate over a wide bandwidth with low sidelobe levels. Taylor synthesis is used to taper the power distribution among array aperture. An efficient approach to design an equal-phase but unequal-power symmetric waveguide divider is proposed for constructing an amplitude-tapering feed-network for the array. Phase differences are balanced by adjusting the waveguide phase velocities. The basic radiator is a 2 × 2 slot subarray, uniformly fed by such a waveguide divider. A 16 × 16 array is then constructed by 8 × 8 of such subarrays. A 1-to-64 way corporate-feed waveguide network is designed to excite all subarrays instead of individual slots, and the required power distribution is obtained by adopting a 30 dB Taylor N̅ = 4 synthesis. Measured results indicate that the array can achieve a 13.8% bandwidth and a gain of more than 29.5 dBi. The first sidelobe level is -26.5 dB in the E-plane and -30.4 dB in the H-plane. The holistic sidelobe levels in both planes are better than -25 dB with a better than -40 dB cross-polarization.", "title": "" }, { "docid": "6bae81e837f4a498ae4c814608aac313", "text": "person’s ability to focus on his or her primary task. Distractions occur especially in mobile environments, because walking, driving, or other real-world interactions often preoccupy the user. A pervasivecomputing environment that minimizes distraction must be context aware, and a pervasive-computing system must know the user’s state to accommodate his or her needs. Context-aware applications provide at least two fundamental services: spatial awareness and temporal awareness. Spatially aware applications consider a user’s relative and absolute position and orientation. Temporally aware applications consider the time schedules of public and private events. With an interdisciplinary class of Carnegie Mellon University (CMU) students, we developed and implemented a context-aware, pervasive-computing environment that minimizes distraction and facilitates collaborative design.", "title": "" }, { "docid": "6db790d4d765b682fab6270c5930bead", "text": "Geophysical applications of radar interferometry to measure changes in the Earth's surface have exploded in the early 1990s. This new geodetic technique calculates the interference pattern caused by the difference in phase between two images acquired by a spaceborne synthetic aperture radar at two distinct times. The resulting interferogram is a contour map of the change in distance between the ground and the radar instrument. These maps provide an unsurpassed spatial sampling density (---100 pixels km-2), a competitive precision (---1 cm), and a useful observation cadence (1 pass month-•). They record movements in the crust, perturbations in the atmosphere, dielectric modifications in the soil, and relief in the topography. They are also sensitive to technical effects, such as relative variations in the radar's trajectory or variations in its frequency standard. We describe how all these phenomena contribute to an interferogram. Then a practical summary explains the techniques for calculating and manipulating interferograms from various radar instruments, including the four satellites currently in orbit: ERS-1, ERS-2, JERS-1, and RADARSAT. 
The next chapter suggests some guidelines for interpreting an interferogram as a geophysical measurement: respecting the limits of the technique, assessing its uncertainty, recognizing artifacts, and discriminating different types of signal. We then review the geophysical applications published to date, most of which study deformation related to earthquakes, volcanoes, and glaciers using ERS-1 data. We also show examples of monitoring natural hazards and environmental alterations related to landslides, subsidence, and agriculture. In addition, we consider subtler geophysical signals such as postseismic relaxation, tidal loading of coastal areas, and interseismic strain accumulation. We conclude with our perspectives on the future of radar interferometry. The objective of the review is for the reader to develop the physical understanding necessary to calculate an interferogram and the geophysical intuition necessary to interpret it.", "title": "" }, { "docid": "1de3364e104a85af05f4a910ede83109", "text": "Activity theory holds that the human mind is the product of our interaction with people and artifacts in the context of everyday activity. Acting with Technology makes the case for activity theory as a basis for...", "title": "" }, { "docid": "63c3e74f2d26dde9a0cdbd7161348197", "text": "We assessed brain activation of nine normal right-handed volunteers in a positron emission tomography study designed to differentiate the functional anatomy of the two major components of auditory comprehension of language, namely phonological versus lexico-semantic processing. The activation paradigm included three tasks. In the reference task, subjects were asked to detect rising pitch within a series of pure tones. In the phonological task, they had to monitor the sequential phonemic organization of non-words. In the lexico-semantic task, they monitored concrete nouns according to semantic criteria. We found highly significant and different patterns of activation. Phonological processing was associated with activation in the left superior temporal gyrus (mainly Wernicke's area) and, to a lesser extent, in Broca's area and in the right superior temporal regions. Lexico-semantic processing was associated with activity in the left middle and inferior temporal gyri, the left inferior parietal region and the left superior prefrontal region, in addition to the superior temporal regions. A comparison of the pattern of activation obtained with the lexico-semantic task to that obtained with the phonological task was made in order to account for the contribution of lower stage components to semantic processing. No difference in activation was found in Broca's area and superior temporal areas which suggests that these areas are activated by the phonological component of both tasks, but activation was noted in the temporal, parietal and frontal multi-modal association areas. These constitute parts of a large network that represent the specific anatomic substrate of the lexico-semantic processing of language.", "title": "" }, { "docid": "140fd854c8564b75609f692229ac616e", "text": "Modern search systems are based on dozens or even hundreds of ranking features. The dueling bandit gradient descent (DBGD) algorithm has been shown to effectively learn combinations of these features solely from user interactions. DBGD explores the search space by comparing a possibly improved ranker to the current production ranker. 
To this end, it uses interleaved comparison methods, which can infer with high sensitivity a preference between two rankings based only on interaction data. A limiting factor is that it can compare only to a single exploratory ranker. We propose an online learning to rank algorithm called multileave gradient descent (MGD) that extends DBGD to learn from so-called multileaved comparison methods that can compare a set of rankings instead of merely a pair. We show experimentally that MGD allows for better selection of candidates than DBGD without the need for more comparisons involving users. An important implication of our results is that orders of magnitude less user interaction data is required to find good rankers when multileaved comparisons are used within online learning to rank. Hence, fewer users need to be exposed to possibly inferior rankers and our method allows search engines to adapt more quickly to changes in user preferences.", "title": "" }, { "docid": "48ad56eb4b866806bc99e941fbde49b9", "text": "Mosaic trisomy 8 is a relatively common chromosomal abnormality, which shows a great variability in clinical expression, however cases with phenotypic abnormalities tend to present with a distinct, recognizable clinical syndrome with a characteristic facial appearance, a long, slender trunk, limitation of movement in multiple joints, and mild-to-moderate mental retardation; the deep plantar furrows are a typical finding, the agenesis of the corpus callosum occurs frequently. We report a case, which in addition to certain characteristic features of mosaic trisomy 8, presented with craniofacial midline defects, including notched nasal tip, cleft maxillary alveolar ridge, bifid tip of tongue, grooved uvula and left choanal atresia, previously not described in this chromosomal disorder and a severe delay in psychomotor development, uncommon in trisomy 8 mosaicism.", "title": "" }, { "docid": "a7fa5171308a566a19da39ee6d7b74f6", "text": "Machine learning approaches to coreference resolution vary greatly in the modeling of the problem: while early approaches operated on the mention pair level, current research focuses on ranking architectures and antecedent trees. We propose a unified representation of different approaches to coreference resolution in terms of the structure they operate on. We represent several coreference resolution approaches proposed in the literature in our framework and evaluate their performance. Finally, we conduct a systematic analysis of the output of these approaches, highlighting differences and similarities.", "title": "" }, { "docid": "cf94d312bb426e64e364dfa33b09efeb", "text": "The attractiveness of a face is a highly salient social signal, influencing mate choice and other social judgements. In this study, we used event-related functional magnetic resonance imaging (fMRI) to investigate brain regions that respond to attractive faces which manifested either a neutral or mildly happy face expression. Attractive faces produced activation of medial orbitofrontal cortex (OFC), a region involved in representing stimulus-reward value. Responses in this region were further enhanced by a smiling facial expression, suggesting that the reward value of an attractive face as indexed by medial OFC activity is modulated by a perceiver directed smile.", "title": "" }, { "docid": "43f9e6edee92ddd0b9dfff885b69f64d", "text": "In this paper, we present a scalable and exact solution for probabilistic linear discriminant analysis (PLDA). 
PLDA is a probabilistic model that has been shown to provide state-of-the-art performance for both face and speaker recognition. However, it has one major drawback: At training time estimating the latent variables requires the inversion and storage of a matrix whose size grows quadratically with the number of samples for the identity (class). To date, two approaches have been taken to deal with this problem, to 1) use an exact solution that calculates this large matrix and is obviously not scalable with the number of samples or 2) derive a variational approximation to the problem. We present a scalable derivation which is theoretically equivalent to the previous nonscalable solution and thus obviates the need for a variational approximation. Experimentally, we demonstrate the efficacy of our approach in two ways. First, on labeled faces in the wild, we illustrate the equivalence of our scalable implementation with previously published work. Second, on the large Multi-PIE database, we illustrate the gain in performance when using more training samples per identity (class), which is made possible by the proposed scalable formulation of PLDA.", "title": "" }, { "docid": "fe2bc36e704b663c8b9a72e7834e6c7e", "text": "Driven by deep learning, there has been a surge of specialized processors for matrix multiplication, referred to as Tensor Core Units (TCUs). These TCUs are capable of performing matrix multiplications on small matrices (usually 4× 4 or 16×16) to accelerate the convolutional and recurrent neural networks in deep learning workloads. In this paper we leverage NVIDIA’s TCU to express both reduction and scan with matrix multiplication and show the benefits — in terms of program simplicity, efficiency, and performance. Our algorithm exercises the NVIDIA TCUs which would otherwise be idle, achieves 89%− 98% of peak memory copy bandwidth, and is orders of magnitude faster (up to 100× for reduction and 3× for scan) than state-of-the-art methods for small segment sizes — common in machine learning and scientific applications. Our algorithm achieves this while decreasing the power consumption by up to 22% for reduction and 16% for scan.", "title": "" } ]
scidocsrr
c2e6eb3a978630ab916731fb2cfa0e8c
Combining strengths, emotions and polarities for boosting Twitter sentiment analysis
[ { "docid": "ebc107147884d89da4ef04eba2d53a73", "text": "Twitter sentiment analysis (TSA) has become a hot research topic in recent years. The goal of this task is to discover the attitude or opinion of the tweets, which is typically formulated as a machine learning based text classification problem. Some methods use manually labeled data to train fully supervised models, while others use some noisy labels, such as emoticons and hashtags, for model training. In general, we can only get a limited number of training data for the fully supervised models because it is very labor-intensive and time-consuming to manually label the tweets. As for the models with noisy labels, it is hard for them to achieve satisfactory performance due to the noise in the labels although it is easy to get a large amount of data for training. Hence, the best strategy is to utilize both manually labeled data and noisy labeled data for training. However, how to seamlessly integrate these two different kinds of data into the same learning framework is still a challenge. In this paper, we present a novel model, called emoticon smoothed language model (ESLAM), to handle this challenge. The basic idea is to train a language model based on the manually labeled data, and then use the noisy emoticon data for smoothing. Experiments on real data sets demonstrate that ESLAM can effectively integrate both kinds of data to outperform those methods using only one of them.", "title": "" }, { "docid": "57666e9d9b7e69c38d7530633d556589", "text": "In this paper, we investigate the utility of linguistic features for detecting the sentiment of Twitter messages. We evaluate the usefulness of existing lexical resources as well as features that capture information about the informal and creative language used in microblogging. We take a supervised approach to the problem, but leverage existing hashtags in the Twitter data for building training data.", "title": "" } ]
[ { "docid": "6ac231de51b69685fcb45d4ef2b32051", "text": "This paper deals with a design and motion planning algorithm of a caterpillar-based pipeline robot that can be used for inspection of 80-100-mm pipelines in an indoor pipeline environment. The robot system uses a differential drive to steer the robot and spring loaded four-bar mechanisms to assure that the robot expands to grip the pipe walls. Unique features of this robot are the caterpillar wheels, the analysis of the four-bar mechanism supporting the treads, a closed-form kinematic approach, and an intuitive user interface. In addition, a new motion planning approach is proposed, which uses springs to interconnect two robot modules and allows the modules to cooperatively navigate through difficult segments of the pipes. Furthermore, an analysis method of selecting optimal compliance to assure functionality and cooperation is suggested. Simulation and experimental results are used throughout the paper to highlight algorithms and approaches.", "title": "" }, { "docid": "91f20c48f5a4329260aadb87a0d8024c", "text": "In this paper, we survey key design for manufacturing issues for extreme scaling with emerging nanolithography technologies, including double/multiple patterning lithography, extreme ultraviolet lithography, and electron-beam lithography. These nanolithography and nanopatterning technologies have different manufacturing processes and their unique challenges to very large scale integration (VLSI) physical design, mask synthesis, and so on. It is essential to have close VLSI design and underlying process technology co-optimization to achieve high product quality (power/performance, etc.) and yield while making future scaling cost-effective and worthwhile. Recent results and examples will be discussed to show the enablement and effectiveness of such design and process integration, including lithography model/analysis, mask synthesis, and lithography friendly physical design.", "title": "" }, { "docid": "14e8006ae1fc0d97e737ff2a5a4d98dd", "text": "Building dialogue systems that can converse naturally with humans is a challenging yet intriguing problem of artificial intelligence. In open-domain human-computer conversation, where the conversational agent is expected to respond to human utterances in an interesting and engaging way, commonsense knowledge has to be integrated into the model effectively. In this paper, we investigate the impact of providing commonsense knowledge about the concepts covered in the dialogue. Our model represents the first attempt to integrating a large commonsense knowledge base into end-toend conversational models. In the retrieval-based scenario, we propose a model to jointly take into account message content and related commonsense for selecting an appropriate response. Our experiments suggest that the knowledgeaugmented models are superior to their knowledge-free counterparts.", "title": "" }, { "docid": "e81a3846c69634d2c06f09575f6a7081", "text": "Much research has been conducted on both face identification and face verification, with greater focus on the latter. Research on face identification has mostly focused on using closed-set protocols, which assume that all probe images used in evaluation contain identities of subjects that are enrolled in the gallery. Real systems, however, where only a fraction of probe sample identities are enrolled in the gallery, cannot make this closed-set assumption. 
Instead, they must assume an open set of probe samples and be able to reject/ignore those that correspond to unknown identities. In this paper, we address the widespread misconception that thresholding verification-like scores is a good way to solve the open-set face identification problem, by formulating an open-set face identification protocol and evaluating different strategies for assessing similarity. Our open-set identification protocol is based on the canonical labeled faces in the wild (LFW) dataset. Additionally to the known identities, we introduce the concepts of known unknowns (known, but uninteresting persons) and unknown unknowns (people never seen before) to the biometric community. We compare three algorithms for assessing similarity in a deep feature space under an open-set protocol: thresholded verification-like scores, linear discriminant analysis (LDA) scores, and an extreme value machine (EVM) probabilities. Our findings suggest that thresholding EVM probabilities, which are open-set by design, outperforms thresholding verification-like scores.", "title": "" }, { "docid": "722bd824e55d6b5221470e3dd35d96e6", "text": "The increasing popularity of JavaScript has led to a variety of JavaScript frameworks that aim to help developers to address programming tasks. However, the number of JavaScript frameworks has risen rapidly to thousands of versions. It is challenging for practitioners to identify the frameworks that best fit their needs and to develop new ones which fit such needs. Furthermore, there is a lack of knowledge regarding what drives developers toward the choice. This paper explores the factors and actors that lead to the choice of a JavaScript framework. We conducted a qualitative interpretive study of semi-structured interviews. We interviewed 18 decision makers regarding the JavaScript framework selection, up to reaching theoretical saturation. Through coding of the interview responses, we offer a model of desirable JavaScript framework adoption factors. The factors are grouped into categories that are derived via the Unified Theory of Acceptance and Use of Technology. The factors are performance expectancy (performance, size), effort expectancy (automatization, learnability, complexity, understandability), social influence (competitor analysis, collegial advice, community size, community responsiveness), facilitating conditions (suitability, updates, modularity, isolation, extensibility), and price value. A combination of four actors, which are customer, developer, team, and team leader, leads to the choice. Our model contributes to the body of knowledge related to the adoption of technology by software engineers. As a practical implication, our model is useful for decision makers when evaluating JavaScript frameworks, as well as for developers for producing desirable frameworks.", "title": "" }, { "docid": "94a73ee25927c9acef132b71604bf5f9", "text": "Existing object proposal approaches use primarily bottom-up cues to rank proposals, while we believe that \"objectness\" is in fact a high level construct. We argue for a data-driven, semantic approach for ranking object proposals. Our framework, which we call DeepBox, uses convolutional neural networks (CNNs) to rerank proposals from a bottom-up method. We use a novel four-layer CNN architecture that is as good as much larger networks on the task of evaluating objectness while being much faster. 
We show that DeepBox significantly improves over the bottom-up ranking, achieving the same recall with 500 proposals as achieved by bottom-up methods with 2000. This improvement generalizes to categories the CNN has never seen before and leads to a 4.5-point gain in detection mAP. Our implementation achieves this performance while running at 260 ms per image.", "title": "" }, { "docid": "2e0d4680cf5953d81f7e8bf8e932e64d", "text": "Ontological Semantics is an approach to automatically extracting the meaning of natural language texts. The OntoSem text analysis system, developed according to this approach, generates ontologically grounded, disambiguated text meaning representations that can serve as input to intelligent agent reasoning. This article focuses on two core subtasks of overall semantic analysis: lexical disambiguation and the establishment of the semantic dependency structure. In addition to describing the knowledge bases and processors used to carry out these tasks, we introduce a novel evaluation suite suited specifically to knowledge-based systems. To situate this contribution in the field, we critically compare the goals, methods and tasks of Ontological Semantics with those of the currently dominant paradigm of natural language processing, which relies on machine learning.", "title": "" }, { "docid": "23c2ea4422ec6057beb8fa0be12e57b3", "text": "This study applied logistic regression to model urban growth in the Atlanta Metropolitan Area of Georgia in a GIS environment and to discover the relationship between urban growth and the driving forces. Historical land use/cover data of Atlanta were extracted from the 1987 and 1997 Landsat TM images. Multi-resolution calibration of a series of logistic regression models was conducted from 50 m to 300 m at intervals of 25 m. A fractal analysis pointed to 225 m as the optimal resolution of modeling. The following two groups of factors were found to affect urban growth to different degrees, as indicated by odds ratios: (1) population density, distances to nearest urban clusters, activity centers and roads, and high/low density urban uses (all with odds ratios < 1); and (2) distance to the CBD, number of urban cells within a 7 × 7 cell window, bare land, crop/grass land, forest, and UTM northing coordinate (all with odds ratios > 1). A map of urban growth probability was calculated and used to predict future urban patterns. A relative operating characteristic (ROC) value of 0.85 indicates that the probability map is valid. It was concluded that despite logistic regression’s lack of temporal dynamics, it was spatially explicit and suitable for multi-scale analysis, and most importantly, allowed much deeper understanding of the forces driving the growth and the formation of the urban spatial pattern. 2006 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "51383980e6b5af33e5ebc7ba7123d4d9", "text": "BACKGROUND\nWhereas organized trauma care systems have decreased trauma mortality in the United States, trauma system design has not been well addressed in developing nations.
We sought to determine areas in greatest need of improvement in the trauma systems of developing nations.\n\n\nMETHODS\nWe compared outcome of all seriously injured (Injury Severity Score > or = 9 or dead), nontransferred, adults managed over 1 year in three cities in nations at different economic levels: (1) Kumasi, Ghana: low income, gross national product (GNP) per capita of $310, no emergency medical service (EMS); (2) Monterrey, Mexico: middle income, GNP $3,900, basic EMS; and (3) Seattle, Washington: high income, GNP $25,000, advanced EMS. Each city had one main trauma hospital, from which hospital data were obtained. Annual budgets (in US$) per bed for these hospitals were as follows: Kumasi, $4,100; Monterrey, $68,000; and Seattle, $606,000. Data on prehospital deaths were obtained from vital statistics registries in Monterrey and Seattle, and by an epidemiologic survey in Kumasi.\n\n\nRESULTS\nMean age (34 years) and injury mechanisms (79% blunt) were similar in all locations. Mortality declined with increased economic level: Kumasi (63% of all seriously injured persons died), Monterrey (55%), and Seattle (35%). This decline was primarily due to decreases in prehospital deaths. In Kumasi, 51% of all seriously injured persons died in the field; in Monterrey, 40%; and in Seattle, 21%. Mean prehospital time declined progressively: Kumasi (102 +/- 126 minutes) > Monterrey (73 +/- 38 minutes) > Seattle (31 +/- 10 minutes). Percent of trauma patients dying in the emergency room was higher for Monterrey (11%) than for either Kumasi (3%) or Seattle (6%).\n\n\nCONCLUSIONS\nThe majority of deaths occur in the prehospital setting, indicating the importance of injury prevention in nations at all economic levels. Additional efforts for trauma care improvement in both low-income and middle-income developing nations should focus on prehospital and emergency room care. Improved emergency room care is especially important in middle-income nations which have already established a basic EMS.", "title": "" }, { "docid": "b5c65533fd768b9370d8dc3aba967105", "text": "Agent-based complex systems are dynamic networks of many interacting agents; examples include ecosystems, financial markets, and cities. The search for general principles underlying the internal organization of such systems often uses bottom-up simulation models such as cellular automata and agent-based models. No general framework for designing, testing, and analyzing bottom-up models has yet been established, but recent advances in ecological modeling have come together in a general strategy we call pattern-oriented modeling. This strategy provides a unifying framework for decoding the internal organization of agent-based complex systems and may lead toward unifying algorithmic theories of the relation between adaptive behavior and system complexity.", "title": "" }, { "docid": "dad0c9ce47334ca6133392322068dd68", "text": "A monolithic 64Gb MLC NAND flash based on 21nm process technology has been developed for the first time. The device consists of 4-plane arrays and provides page size of up to 32KB. It also features a newly developed DDR interface that can support up to the maximum bandwidth of 400MB/s. 
To address performance and reliability, on-chip randomizer, soft data readout, and incremental bit line precharge scheme have been developed.", "title": "" }, { "docid": "cdee55e977d5809b87f3e8be98acaaa3", "text": "Proximity effects caused by uneven distribution of current among the insulated wire strands of stator multi-strand windings can contribute significant bundle-level proximity losses in permanent magnet (PM) machines operating at high speeds. Three-dimensional finite element analysis is used to investigate the effects of transposition of the insulated strands in stator winding bundles on the copper losses in high-speed machines. The investigation confirms that the bundle proximity losses must be considered in the design of stator windings for high-speed machines, and the amplitude of these losses decreases monotonically as the level of transposition is increased from untransposed to fully-transposed (360°) wire bundles. Analytical models are introduced to estimate the currents in strands in a slot for a high-speed machine.", "title": "" }, { "docid": "98502b68821d19cc170355545ae82749", "text": "Many prediction problems, such as those that arise in the context of robotics, have a simplifying underlying structure that, if known, could accelerate learning. In this paper, we present a strategy for learning a set of neural network modules that can be combined in different ways. We train different modular structures on a set of related tasks and generalize to new tasks by composing the learned modules in new ways. By reusing modules to generalize we achieve combinatorial generalization, akin to the ”infinite use of finite means” displayed in language. Finally, we show this improves performance in two robotics-related problems.", "title": "" }, { "docid": "c02ca1d9717f225f372e43c639c8403a", "text": "A new wave of collaborative robots designed to work alongside humans is bringing the automation historically seen in large-scale industrial settings to new, diverse contexts. However, the ability to program these machines often requires years of training, making them inaccessible or impractical for many. This project rethinks what robot programming interfaces could be in order to make them accessible and intuitive for adult novice programmers. We created a block-based interface for programming a one-armed industrial robot and conducted a study with 67 adult novices comparing it to two programming approaches in widespread use in industry. The results show participants using the block-based interface successfully implemented robot programs faster with no loss in accuracy while reporting higher scores for usability, learnability, and overall satisfaction. The contribution of this work is showing the potential for using block-based programming to make powerful technologies accessible to a wider audience.", "title": "" }, { "docid": "358e4c55233f3837cf95b8c269447cd2", "text": "In this correspondence, the construction of low-density parity-check (LDPC) codes from circulant permutation matrices is investigated. It is shown that such codes cannot have a Tanner graph representation with girth larger than 12, and a relatively mild necessary and sufficient condition for the code to have a girth of 6, 8,10, or 12 is derived. These results suggest that families of LDPC codes with such girth values are relatively easy to obtain and, consequently, additional parameters such as the minimum distance or the number of redundant check sums should be considered. 
To this end, a necessary condition for the codes investigated to reach their maximum possible minimum Hamming distance is proposed.", "title": "" }, { "docid": "3cb0232cd4b75a8691f9aa4f1d663e9a", "text": "We introduce an approach for realtime segmentation of a scene into foreground objects, background, and object shadows superimposed on the background. To segment foreground objects, we use an adaptive thresholding method, which is able to deal with rapid changes of the overall brightness. The segmented image usually includes shadows cast by the objects onto the background. Our approach is able to robustly remove the shadow from the background while preserving the silhouette of the foreground object. We discuss a similarity measure for comparing color pixels, which improves the quality of shadow removal significantly. As the image segmentation is part of a real-time interaction environment, real-time processing is needed. Our implementation allows foreground segmentation and robust shadow removal at 15 Hz.", "title": "" }, { "docid": "5ed9fde132f44ff2f2354b5d9f5b14ab", "text": "An issue in microfabrication of the fluidic channels in glass/poly (dimethyl siloxane) (PDMS) is the absence of a well-defined study of the bonding strength between the surfaces making up these channels. Although most research papers mention the use of oxygen plasma for developing chemical (siloxane) bonds between the participating surfaces, they only define a certain set of parameters, tailored to a specific setup. An important requirement of the microfluidics/biosensors industry is the development of a general regime, which defines a systematic method of gauging the bond strength between the participating surfaces in advance by correlation to a common parameter. This enhances the reliability of the devices and also gives a structured approach to its future large-scale manufacturing. In this paper, we explore the possibility of the existence of a common scale, which can be used to gauge bond strength between various surfaces. We find that the changes in wettability of surfaces owing to various levels of plasma exposure can be a useful parameter to gauge the bond strength. We obtained a good correlation between contact angle of deionized water (a direct measure of wettability) on the PDMS and glass surfaces based on various dosages of oxygen plasma treatment. The exposure was done first in an inductively coupled high-density (ICP) plasma system and then in a plasma enhanced chemical vapor deposition (PECVD) system. This was followed by the measurement of bond strength by use of the standardized blister test.", "title": "" }, { "docid": "824fbd2fe175b4b179226d249792b87a", "text": "While historically software validation focused on the functional requirements, recent approaches also encompass the validation of quality requirements; for example, system reliability, performance or usability. Application development for mobile platforms opens an additional area of quality: power consumption. In PDAs or mobile phones, power consumption varies depending on the hardware resources used, making it possible to specify and validate correct or incorrect executions. Consider an application that downloads a video stream from the network and displays it on the mobile device's display. In the test scenario the viewing of the video is paused at a certain point. If the specification does not allow video prefetching, the user expects the network card activity to stop when the video is paused.
How can a test engineer check this expectation? Simply running a test suite or even tracing the software execution does not detect the network activity. However, the extraneous network activity can be detected by power measurements and power model application (Figure 1). Tools to find the power inconsistencies and to validate software from the energy point of view are needed.", "title": "" }, { "docid": "85693811a951a191d573adfe434e9b18", "text": "Diagnosing problems in data centers has always been a challenging problem due to their complexity and heterogeneity. Among recent proposals for addressing this challenge, one promising approach leverages provenance, which provides the fundamental functionality that is needed for performing fault diagnosis and debugging—a way to track direct and indirect causal relationships between system states and their changes. This information is valuable, since it permits system operators to tie observed symptoms of a faults to their potential root causes. However, capturing provenance in a data center is challenging because, at high data rates, it would impose a substantial cost. In this paper, we introduce techniques that can help with this: We show how to reduce the cost of maintaining provenance by leveraging structural similarities for compression, and by offloading expensive but highly parallel operations to hardware. We also discuss our progress towards transforming provenance into compact actionable diagnostic decisions to repair problems caused by misconfigurations and program bugs.", "title": "" }, { "docid": "16709c54458167634803100605a4f4a5", "text": "Automatic Web page segmentation is the basis to adaptive Web browsing on mobile devices. It breaks a large page into smaller blocks, in which contents with coherent semantics are keeping together. Then, various adaptations like single column and thumbnail view can be developed. However, page segmentation remains a challenging task, and its poor result directly yields a frustrating user experience. As human usually understand the Web page well, in this paper, we start from Gestalt theory, a psychological theory that can explain human's visual perceptive processes. Four basic laws, proximity, similarity, closure, and simplicity, are drawn from Gestalt theory and then implemented in a program to simulate how human understand the layout of Web pages. The experiments show that this method outperforms existing methods.", "title": "" } ]
scidocsrr
d297c561c9f538630d8d930e53bb6fc2
Introduction: digital literacies: concepts, policies and practices
[ { "docid": "39cc52cd5ba588e9d4799c3b68620f18", "text": "Using data from a popular online social network site, this paper explores the relationship between profile structure (namely, which fields are completed) and number of friends, giving designers insight into the importance of the profile and how it works to encourage connections and articulated relationships between users. We describe a theoretical framework that draws on aspects of signaling theory, common ground theory, and transaction costs theory to generate an understanding of why certain profile fields may be more predictive of friendship articulation on the site. Using a dataset consisting of 30,773 Facebook profiles, we determine which profile elements are most likely to predict friendship links and discuss the theoretical and design implications of our findings.", "title": "" } ]
[ { "docid": "23a77ef19b59649b50f168b1cb6cb1c5", "text": "A novel interleaved high step-up converter with voltage multiplier cell is proposed in this paper to avoid the extremely narrow turn-off period and to reduce the current ripple, which flows through the power devices compared with the conventional interleaved boost converter in high step-up applications. Interleaved structure is employed in the input side to distribute the input current, and the voltage multiplier cell is adopted in the output side to achieve a high step-up gain. The voltage multiplier cell is composed of the secondary windings of the coupled inductors, a series capacitor, and two diodes. Furthermore, the switch voltage stress is reduced due to the transformer function of the coupled inductors, which makes low-voltage-rated MOSFETs available to reduce the conduction losses. Moreover, zero-current-switching turn- on soft-switching performance is realized to reduce the switching losses. In addition, the output diode turn-off current falling rate is controlled by the leakage inductance of the coupled inductors, which alleviates the diode reverse recovery problem. Additional active device is not required in the proposed converter, which makes the presented circuit easy to design and control. Finally, a 1-kW 40-V-input 380-V-output prototype operating at 100 kHz switching frequency is built and tested to verify the effectiveness of the presented converter.", "title": "" }, { "docid": "23cc8b190e9de5177cccf2f918c1ad45", "text": "NFC is a standardised technology providing short-range RFID communication channels for mobile devices. Peer-to-peer applications for mobile devices are receiving increased interest and in some cases these services are relying on NFC communication. It has been suggested that NFC systems are particularly vulnerable to relay attacks, and that the attacker’s proxy devices could even be implemented using off-the-shelf NFC-enabled devices. This paper describes how a relay attack can be implemented against systems using legitimate peer-to-peer NFC communication by developing and installing suitable MIDlets on the attacker’s own NFC-enabled mobile phones. The attack does not need to access secure program memory nor use any code signing, and can use publicly available APIs. We go on to discuss how relay attack countermeasures using device location could be used in the mobile environment. These countermeasures could also be applied to prevent relay attacks on contactless applications using ‘passive’ NFC on mobile phones.", "title": "" }, { "docid": "450aee5811484932e8542eb4f0eefa4d", "text": "Natural Language Generation systems in interactive settings often face a multitude of choices, given that the communicative effect of each utterance they generate depends crucially on the interplay between its physical circumstances, addressee and interaction history. This is particularly true in interactive and situated settings. In this paper we present a novel approach for situated Natural Language Generation in dialogue that is based on hierarchical reinforcement learning and learns the best utterance for a context by optimisation through trial and error. The model is trained from human–human corpus data and learns particularly to balance the trade-off between efficiency and detail in giving instructions: the user needs to be given sufficient information to execute their task, but without exceeding their cognitive load. 
We present results from simulation and a task-based human evaluation study comparing two different versions of hierarchical reinforcement learning: One operates using a hierarchy of policies with a large state space and local knowledge, and the other additionally shares knowledge across generation subtasks to enhance performance. Results show that sharing knowledge across subtasks achieves better performance than learning in isolation, leading to smoother and more successful interactions that are better perceived by human users.", "title": "" }, { "docid": "a007343ab01487e2aa0356534545946b", "text": "Large Internet companies like Amazon, Netflix, and LinkedIn are using the microservice architecture pattern to deploy large applications in the cloud as a set of small services that can be developed, tested, deployed, scaled, operated and upgraded independently. However, aside from gaining agility, independent development, and scalability, infrastructure costs are a major concern for companies adopting this pattern. This paper presents a cost comparison of a web application developed and deployed using the same scalable scenarios with three different approaches: 1) a monolithic architecture, 2) a microservice architecture operated by the cloud customer, and 3) a microservice architecture operated by the cloud provider. Test results show that microservices can help reduce infrastructure costs in comparison to standard monolithic architectures. Moreover, the use of services specifically designed to deploy and scale microservices reduces infrastructure costs by 70% or more. Lastly, we also describe the challenges we faced while implementing and deploying microservice applications.", "title": "" }, { "docid": "37845c0912d9f1b355746f41c7880c3a", "text": "Deep neural networks are commonly trained using stochastic non-convex optimization procedures, which are driven by gradient information estimated on fractions (batches) of the dataset. While it is commonly accepted that batch size is an important parameter for offline tuning, the benefits of online selection of batches remain poorly understood. We investigate online batch selection strategies for two state-of-the-art methods of stochastic gradient-based optimization, AdaDelta and Adam. As the loss function to be minimized for the whole dataset is an aggregation of loss functions of individual datapoints, intuitively, datapoints with the greatest loss should be considered (selected in a batch) more frequently. However, the limitations of this intuition and the proper control of the selection pressure over time are open questions. We propose a simple strategy where all datapoints are ranked w.r.t. their latest known loss value and the probability to be selected decays exponentially as a function of rank. Our experimental results on the MNIST dataset suggest that selecting batches speeds up both AdaDelta and Adam by a factor of about 5.", "title": "" }, { "docid": "be68f44aca9f8c88c2757a6910d7e5a5", "text": "Creative computational systems have often been largescale endeavors, based on elaborate models of creativity and sometimes featuring an accumulation of heuristics and numerous subsystems. An argument is presented for facilitating the exploration of creativity through small-scale systems, which can be more transparent, reusable, focused, and easily generalized across domains and languages. These systems retain the ability, however, to model important aspects of aesthetic and creative processes. 
Examples of extremely simple story generators are presented along with their implications for larger-scale systems. A case study focuses on a system that implements the simplest possible model of ellipsis.", "title": "" }, { "docid": "d0f9bf7511bcaced02838aa1c2d8785b", "text": "A folksonomy consists of three basic entities, namely users, tags and resources. This kind of social tagging system is a good way to index information, facilitate searches and navigate resources. The main objective of this paper is to present a novel method to improve the quality of tag recommendation. According to the statistical analysis, we find that the total number of tags used by a user changes over time in a social tagging system. Thus, this paper introduces the concept of user tagging status, namely the growing status, the mature status and the dormant status. Then, the determining user tagging status algorithm is presented considering a user’s current tagging status to be one of the three tagging statuses at a given point. Finally, three corresponding strategies are developed to compute the tag probability distribution based on the statistical language model in order to recommend tags most likely to be used by users. Experimental results show that the proposed method is better than the compared methods at the accuracy of tag recommendation.", "title": "" }, { "docid": "86feba94dcc3e89097af2e50e5b7e908", "text": "Concerned about the Turing test’s ability to correctly evaluate if a system exhibits human-like intelligence, the Winograd Schema Challenge (WSC) has been proposed as an alternative. A Winograd Schema consists of a sentence and a question. The answers to the questions are intuitive for humans but are designed to be difficult for machines, as they require various forms of commonsense knowledge about the sentence. In this paper we demonstrate our progress towards addressing the WSC. We present an approach that identifies the knowledge needed to answer a challenge question, hunts down that knowledge from text repositories, and then reasons with it to come up with the answer. In the process we develop a semantic parser (www.kparser.org). We show that our approach works well with respect to a subset of Winograd schemas.", "title": "" }, { "docid": "91fbf465741c6a033a00a4aa982630b4", "text": "This paper presents an integrated functional link interval type-2 fuzzy neural system (FLIT2FNS) for predicting the stock market indices. The hybrid model uses a TSK (Takagi–Sugeno–Kang) type fuzzy rule base that employs type-2 fuzzy sets in the antecedent parts and the outputs from the Functional Link Artificial Neural Network (FLANN) in the consequent parts. Two other approaches, namely the integrated FLANN and type-1 fuzzy logic system and Local Linear Wavelet Neural Network (LLWNN) are also presented for a comparative study. Backpropagation and particle swarm optimization (PSO) learning algorithms have been used independently to optimize the parameters of all the forecasting models. To test the model performance, three well-known stock market indices, namely the Standard & Poor’s 500 (S&P 500), the Bombay Stock Exchange (BSE), and the Dow Jones Industrial Average (DJIA), are used. The mean absolute percentage error (MAPE) and root mean square error (RMSE) are used to find out the performance of all the three models. Finally, it is observed that out of the three methods, FLIT2FNS performs the best irrespective of the time horizons spanning from 1 day to 1 month. © 2011 Elsevier B.V.
All rights reserved.", "title": "" }, { "docid": "1e645b6134fb5ef80f89e6d10b1cb734", "text": "This paper analyzes the effect of replay attacks on a control system. We assume an attacker wishes to disrupt the operation of a control system in steady state. In order to inject an exogenous control input without being detected the attacker will hijack the sensors, observe and record their readings for a certain amount of time and repeat them afterwards while carrying out his attack. This is a very common and natural attack (we have seen numerous times intruders recording and replaying security videos while performing their attack undisturbed) for an attacker who does not know the dynamics of the system but is aware of the fact that the system itself is expected to be in steady state for the duration of the attack. We assume the control system to be a discrete time linear time invariant gaussian system applying an infinite horizon Linear Quadratic Gaussian (LQG) controller. We also assume that the system is equipped with a χ2 failure detector. The main contributions of the paper, beyond the novelty of the problem formulation, consist in 1) providing conditions on the feasibility of the replay attack on the aforementioned system and 2) proposing a countermeasure that guarantees a desired probability of detection (with a fixed false alarm rate) by trading off either detection delay or LQG performance, either by decreasing control accuracy or increasing control effort.", "title": "" }, { "docid": "0781a718ebf950eb0196885c9a75549c", "text": "Context: Knowledge management technologies have been employed across software engineering activities for more than two decades. Knowledge-based approaches can be used to facilitate software architecting activities (e.g., architectural evaluation). However, there is no comprehensive understanding on how various knowledge-based approaches (e.g., knowledge reuse) are employed in software architecture. Objective: This work aims to collect studies on the application of knowledge-based approaches in software architecture and make a classification and thematic analysis on these studies, in order to identify the gaps in the existing application of knowledge-based approaches to various architecting activities, and promising research directions. Method: A systematic mapping study is conducted for identifying and analyzing the application of knowledge-based approaches in software architecture, covering the papers from major databases, journals, conferences, and workshops, published between January 2000 and March 2011. Results: Fifty-five studies were selected and classified according to the architecting activities they contribute to and the knowledge-based approaches employed. Knowledge capture and representation (e.g., using an ontology to describe architectural elements and their relationships) is the most popular approach employed in architecting activities. Knowledge recovery (e.g., documenting past architectural design decisions) is an ignored approach that is seldom used in software architecture. Knowledge-based approaches are mostly used in architectural evaluation, while receive the least attention in architecture impact analysis and architectural implementation. Conclusions: The study results show an increased interest in the application of knowledge-based approaches in software architecture in recent years. 
A number of knowledge-based approaches, including knowledge capture and representation, reuse, sharing, recovery, and reasoning, have been employed in a spectrum of architecting activities. Knowledge-based approaches have been applied to a wide range of application domains, among which ‘‘Embedded software’’ has received the most attention. 2012 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "d47fe2f028b03b9b10a81d1a71c466ab", "text": "This paper investigates the system-level performance of downlink non-orthogonal multiple access (NOMA) with power-domain user multiplexing at the transmitter side and successive interference canceller (SIC) on the receiver side. The goal is to clarify the performance gains of NOMA for future LTE (Long-Term Evolution) enhancements, taking into account design aspects related to the LTE radio interface such as, frequency-domain scheduling with adaptive modulation and coding (AMC), and NOMA specific functionalities such as error propagation of SIC receiver, multi-user pairing and transmit power allocation. In particular, a pre-defined user grouping and fixed per-group power allocation are proposed to reduce the overhead associated with power allocation signalling. Based on computer simulations, we show that for both wideband and subband scheduling and both low and high mobility scenarios, NOMA can still provide a hefty portion of its expected gains even with error propagation, and also when the proposed simplified user grouping and power allocation are used.", "title": "" }, { "docid": "3291f56f3052fe50a3064ad25f47f08a", "text": "Tricaine methane-sulfonate (MS-222) application in fish anaesthesia By N. Topic Popovic, I. Strunjak-Perovic, R. Coz-Rakovac, J. Barisic, M. Jadan, A. Persin Berakovic and R. Sauerborn Klobucar Laboratory of Ichthyopathology – Biological Materials, Division for Materials Chemistry, Rudjer Boskovic Institute, Zagreb, Croatia; Department of Anaesthesiology, University Hospital Clinic, Zagreb, Croatia", "title": "" }, { "docid": "fc65af24f5c53715a39ecba0a62d3b78", "text": "Visual Domain Adaptation is a problem of immense importance in computer vision. Previous approaches showcase the inability of even deep neural networks to learn informative representations across domain shift. This problem is more severe for tasks where acquiring hand labeled data is extremely hard and tedious. In this work, we focus on adapting the representations learned by segmentation networks across synthetic and real domains. Contrary to previous approaches that use a simple adversarial objective or superpixel information to aid the process, we propose an approach based on Generative Adversarial Networks (GANs) that brings the embeddings closer in the learned feature space. To showcase the generality and scalability of our approach, we show that we can achieve state of the art results on two challenging scenarios of synthetic to real domain adaptation. Additional exploratory experiments show that our approach: (1) generalizes to unseen domains and (2) results in improved alignment of source and target dis-", "title": "" }, { "docid": "c04dd7ccb0426ef5d44f0420d321904d", "text": "In this paper, we introduce a new convolutional layer named the Temporal Gaussian Mixture (TGM) layer and present how it can be used to efficiently capture temporal structure in continuous activity videos. Our layer is designed to allow the model to learn a latent hierarchy of sub-event intervals. 
Our approach is fully differentiable while relying on a significantly smaller number of parameters, enabling its end-to-end training with standard backpropagation. We present our convolutional video models with multiple TGM layers for activity detection. Our experiments on multiple datasets including Charades and MultiTHUMOS confirm the benefit of our TGM layers, illustrating that they outperform other models and temporal convolutions.", "title": "" }, { "docid": "cdee55e977d5809b87f3e8be98acaaa3", "text": "Instabilities in MOS-based devices with various substrates ranging from Si, SiGe, and III-V to 2D channel materials can be explained by defect levels in the dielectrics and non-radiative multi-phonon (NMP) barriers. However, recent results obtained on single defects have demonstrated that they can show a highly complex behaviour since they can transform between various states. As a consequence, detailed physical models are complicated and computationally expensive. As will be shown here, as long as only lifetime predictions for an ensemble of defects is needed, considerable simplifications are possible. We present and validate an oxide defect model that captures the essence of full physical models while reducing the complexity substantially. We apply this model to investigate the improvement in positive bias temperature instabilities due to a reliability anneal. Furthermore, we corroborate the simulated defect bands with prior defect-centric studies and perform lifetime projections.", "title": "" }, { "docid": "f4bf4be69ea3f3afceca056e2b5b8102", "text": "In this paper we present a conversational dialogue system, Ch2R (Chinese Chatter Robot) for online shopping guide, which allows users to inquire about mobile phone information in Chinese. The purpose of this paper is to describe our development effort in terms of the underlying human language technologies (HLTs) as well as other system issues. We focus on a mixed-initiative conversation mechanism for interactive shopping guide combining initiative guiding and question understanding. We also present some evaluation on the system in the mobile phone shopping guide domain. Evaluation results demonstrate the efficiency of our approach.", "title": "" }, { "docid": "8620c228a0a686788b53d9c766b5b6bf", "text": "Projects combining agile methods with CMMI combine adaptability with predictability to better serve large customer needs. The introduction of Scrum at Systematic, a CMMI Level 5 company, doubled productivity and cut defects by 40% compared to waterfall projects in 2006 by focusing on early testing and time to fix builds. Systematic institutionalized Scrum across all projects and used data-driven tools like story process efficiency to surface Product Backlog impediments. This allowed them to systematically develop a strategy for a second doubling in productivity. Two teams have achieved a sustainable quadrupling of productivity compared to waterfall projects. We discuss here the strategy to bring the entire company to that level. Our experience shows that Scrum and CMMI together bring a more powerful combination of adaptability and predictability than either one alone and suggest how other companies can combine them to achieve Toyota-level performance – 4 times the productivity and 12 times the quality of waterfall teams.", "title": "" }, { "docid": "a16b9bbb9675a14952527fb4de583d00", "text": "Adaptations in resistance training are focused on the development and maintenance of the neuromuscular unit needed for force production [97, 136].
The effects of training, when using this system, affect many other physiological systems of the body (e.g., the connective tissue, cardiovascular, and endocrine systems) [16, 18, 37, 77, 83]. Training programs are highly specific to the types of adaptation that occur. Activation of specific patterns of motor units in training dictate what tissue and how other physiological systems will be affected by the exercise training. The time course of the development of the neuromuscular system appears to be dominated in the early phase by neural factors with associated changes in the types of contractile proteins. In the later adaptation phase, muscle protein increases, and the contractile unit begins to contribute the most to the changes in performance capabilities. A host of other factors can affect the adaptations, such as functional capabilities of the individual, age, nutritional status, and behavioral factors (e.g., sleep and health habits). Optimal adaptation appears to be related to the use of specific resistance training programs to meet individual training objectives.", "title": "" }, { "docid": "7b2ef4e81c8827389eeb025ae686210e", "text": "This paper presents a novel framework for generating texture mosaics with convolutional neural networks. Our method is called GANosaic and performs optimization in the latent noise space of a generative texture model, which allows the transformation of a content image into a mosaic exhibiting the visual properties of the underlying texture manifold. To represent that manifold, we use a state-of-the-art generative adversarial method for texture synthesis [1], which can learn expressive texture representations from data and produce mosaic images with very high resolution. This fully convolutional model generates smooth (without any visible borders) mosaic images which morph and blend different textures locally. In addition, we develop a new type of differentiable statistical regularization appropriate for optimization over the prior noise space of the PSGAN model.", "title": "" } ]
scidocsrr
7f8815e74dd80a82d73676e29c5799de
The relationship between organization strategy, total quality management (TQM), and organization performance--the mediating role of TQM
[ { "docid": "5971934855f9d4dde2a7fc91e757606c", "text": "The use of total quality management (TQM), which creates a system of management procedures that focuses on customer satisfaction and transforms the corporate culture so as to guarantee continual improvement, is discussed. The team approach essential to its implementation is described. Two case studies of applying TQM at AT&T are presented.<<ETX>>", "title": "" }, { "docid": "8eeb8fba948b37b4e9489c472cb1506a", "text": "Total Quality Management (TQM) has become, according to one source, 'as pervasive a part of business thinking as quarterly financial results,' and yet TQM's role as a strategic resource remains virtually unexamined in strategic management research. Drawing on the resource approach and other theoretical perspectives, this article examines TQM as a potential source of sustainable competitive advantage, reviews existing empirical evidence, and reports findings from a new empirical study of TQM's performance consequences. The findings suggest that most features generally associated with TQM—such as quality training, process improvement, and benchmarking—do not generally produce advantage, but that certain tacit, behavioral, imperfectly imitable features—such as open culture, employee empowerment, and executive commitment—can produce advantage. The author concludes that these tacit resources, and not TQM tools and techniques, drive TQM success, and that organizations that acquire them can outperform competitors with or without the accompanying TQM ideology.", "title": "" } ]
[ { "docid": "0e57945ae40e8c0f08e92396c2592a78", "text": "Frequent or contextually predictable words are often phonetically reduced, i.e. shortened and produced with articulatory undershoot. Explanations for phonetic reduction of predictable forms tend to take one of two approaches: Intelligibility-based accounts hold that talkers maximize intelligibility of words that might otherwise be difficult to recognize; production-based accounts hold that variation reflects the speed of lexical access and retrieval in the language production system. Here we examine phonetic variation as a function of phonological neighborhood density, capitalizing on the fact that words from dense phonological neighborhoods tend to be relatively difficult to recognize, yet easy to produce. We show that words with many phonological neighbors tend to be phonetically reduced (shortened in duration and produced with more centralized vowels) in connected speech, when other predictors of phonetic variation are brought under statistical control. We argue that our findings are consistent with the predictions of production-based accounts of pronunciation variation. 2011 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "932934a4362bd671427954d0afb61459", "text": "On the basis of the similarity between spinel and rocksalt structures, it is shown that some spinel oxides (e.g., MgCo2O4, etc) can be cathode materials for Mg rechargeable batteries around 150 °C. The Mg insertion into spinel lattices occurs via \"intercalation and push-out\" process to form a rocksalt phase in the spinel mother phase. For example, by utilizing the valence change from Co(III) to Co(II) in MgCo2O4, Mg insertion occurs at a considerably high potential of about 2.9 V vs. Mg2+/Mg, and similarly it occurs around 2.3 V vs. Mg2+/Mg with the valence change from Mn(III) to Mn(II) in MgMn2O4, being comparable to the ab initio calculation. The feasibility of Mg insertion would depend on the phase stability of the counterpart rocksalt XO of MgO in Mg2X2O4 or MgX3O4 (X = Co, Fe, Mn, and Cr). In addition, the normal spinel MgMn2O4 and MgCr2O4 can be demagnesiated to some extent owing to the robust host structure of Mg1-xX2O4, where the Mg extraction/insertion potentials for MgMn2O4 and MgCr2O4 are both about 3.4 V vs. Mg2+/Mg. Especially, the former \"intercalation and push-out\" process would provide a safe and stable design of cathode materials for polyvalent cations.", "title": "" }, { "docid": "493b22055a1b9bda564c2c1ae6727cba", "text": "Earlier studies have introduced a list of high-level evaluation criteria to assess how well a language supports generic programming. Since each language that meets all criteria is considered generic, those criteria are not fine-grained enough to differentiate between languages for generic programming. We refine these criteria into a taxonomy that captures differences between type classes in Haskell and concepts in C++, and discuss which differences are incidental and which ones are due to other language features. The taxonomy allows for an improved understanding of language support for generic programming, and the comparison is useful for the ongoing discussions among language designers and users of both languages.", "title": "" }, { "docid": "93ca1c371b2ecf8fa8c7962e704ce953", "text": "We tested the applicability and signal quality of a 16 channel dry electroencephalography (EEG) system in a laboratory environment and in a car under controlled, realistic conditions. 
The aim of our investigation was an estimation how well a passive Brain-Computer Interface (pBCI) can work in an autonomous driving scenario. The evaluation considered speed and accuracy of self-applicability by an untrained person, quality of recorded EEG data, shifts of electrode positions on the head after driving-related movements, usability, and complexity of the system as such and wearing comfort over time. An experiment was conducted inside and outside of a stationary vehicle with running engine, air-conditioning, and muted radio. Signal quality was sufficient for standard EEG analysis in the time and frequency domain as well as for the use in pBCIs. While the influence of vehicle-induced interferences to data quality was insignificant, driving-related movements led to strong shifts in electrode positions. In general, the EEG system used allowed for a fast self-applicability of cap and electrodes. The assessed usability of the system was still acceptable while the wearing comfort decreased strongly over time due to friction and pressure to the head. From these results we conclude that the evaluated system should provide the essential requirements for an application in an autonomous driving context. Nevertheless, further refinement is suggested to reduce shifts of the system due to body movements and increase the headset's usability and wearing comfort.", "title": "" }, { "docid": "62b2daec701f43a3282076639d01e475", "text": "Several hundred plant and herb species that have potential as novel antiviral agents have been studied, with surprisingly little overlap. A wide variety of active phytochemicals, including the flavonoids, terpenoids, lignans, sulphides, polyphenolics, coumarins, saponins, furyl compounds, alkaloids, polyines, thiophenes, proteins and peptides have been identified. Some volatile essential oils of commonly used culinary herbs, spices and herbal teas have also exhibited a high level of antiviral activity. However, given the few classes of compounds investigated, most of the pharmacopoeia of compounds in medicinal plants with antiviral activity is still not known. Several of these phytochemicals have complementary and overlapping mechanisms of action, including antiviral effects by either inhibiting the formation of viral DNA or RNA or inhibiting the activity of viral reproduction. Assay methods to determine antiviral activity include multiple-arm trials, randomized crossover studies, and more compromised designs such as nonrandomized crossovers and pre- and post-treatment analyses. Methods are needed to link antiviral efficacy/potency- and laboratory-based research. Nevertheless, the relative success achieved recently using medicinal plant/herb extracts of various species that are capable of acting therapeutically in various viral infections has raised optimism about the future of phyto-antiviral agents. As this review illustrates, there are innumerable potentially useful medicinal plants and herbs waiting to be evaluated and exploited for therapeutic applications against genetically and functionally diverse viruses families such as Retroviridae, Hepadnaviridae and Herpesviridae", "title": "" }, { "docid": "cc3dd8ad390d0e7ce157d24d1aeed997", "text": "Algorithms are presented for detecting errors and anomalies in programs which use synchronization constructs to implement concurrency. The algorithms employ data flow analysis techniques. 
First used in compiler object code optimization, the techniques have more recently been used in the detection of variable usage errors in single process programs. By adapting these existing algorithms, the same classes of variable usage errors can be detected in concurrent process programs. Important classes of errors unique to concurrent process programs are also described, and algorithms for their detection are presented.", "title": "" }, { "docid": "bdefafd4277c1f71e9f4c8d7769e0645", "text": "In many applications, one has to actively select among a set of expensive observations before making an informed decision. For example, in environmental monitoring, we want to select locations to measure in order to most effectively predict spatial phenomena. Often, we want to select observations which are robust against a number of possible objective functions. Examples include minimizing the maximum posterior variance in Gaussian Process regression, robust experimental design, and sensor placement for outbreak detection. In this paper, we present the Submodular Saturation algorithm, a simple and efficient algorithm with strong theoretical approximation guarantees for cases where the possible objective functions exhibit submodularity, an intuitive diminishing returns property. Moreover, we prove that better approximation algorithms do not exist unless NP-complete problems admit efficient algorithms. We show how our algorithm can be extended to handle complex cost functions (incorporating non-unit observation cost or communication and path costs). We also show how the algorithm can be used to near-optimally trade off expected-case (e.g., the Mean Square Prediction Error in Gaussian Process regression) and worst-case (e.g., maximum predictive variance) performance. We show that many important machine learning problems fit our robust submodular observation selection formalism, and provide extensive empirical evaluation on several real-world problems. For Gaussian Process regression, our algorithm compares favorably with state-of-the-art heuristics described in the geostatistics literature, while being simpler, faster and providing theoretical guarantees. For robust experimental design, our algorithm performs favorably compared to SDP-based algorithms.", "title": "" }, { "docid": "6f734301a698a54177265815189a2bb9", "text": "Online image sharing in social media sites such as Facebook, Flickr, and Instagram can lead to unwanted disclosure and privacy violations, when privacy settings are used inappropriately. With the exponential increase in the number of images that are shared online every day, the development of effective and efficient prediction methods for image privacy settings are highly needed. The performance of models critically depends on the choice of the feature representation. In this paper, we present an approach to image privacy prediction that uses deep features and deep image tags as feature representations. Specifically, we explore deep features at various neural network layers and use the top layer (probability) as an auto-annotation mechanism. 
The results of our experiments show that models trained on the proposed deep features and deep image tags substantially outperform baselines such as those based on SIFT and GIST as well as those that use “bag of tags” as features.", "title": "" }, { "docid": "14b6b544144d6c14cb283fd0ac8308d8", "text": "Disrupted daily or circadian rhythms of lung function and inflammatory responses are common features of chronic airway diseases. At the molecular level these circadian rhythms depend on the activity of an autoregulatory feedback loop oscillator of clock gene transcription factors, including the BMAL1:CLOCK activator complex and the repressors PERIOD and CRYPTOCHROME. The key nuclear receptors and transcription factors REV-ERBα and RORα regulate Bmal1 expression and provide stability to the oscillator. Circadian clock dysfunction is implicated in both immune and inflammatory responses to environmental, inflammatory, and infectious agents. Molecular clock function is altered by exposomes, tobacco smoke, lipopolysaccharide, hyperoxia, allergens, bleomycin, as well as bacterial and viral infections. The deacetylase Sirtuin 1 (SIRT1) regulates the timing of the clock through acetylation of BMAL1 and PER2 and controls the clock-dependent functions, which can also be affected by environmental stressors. Environmental agents and redox modulation may alter the levels of REV-ERBα and RORα in lung tissue in association with a heightened DNA damage response, cellular senescence, and inflammation. A reciprocal relationship exists between the molecular clock and immune/inflammatory responses in the lungs. Molecular clock function in lung cells may be used as a biomarker of disease severity and exacerbations or for assessing the efficacy of chronotherapy for disease management. Here, we provide a comprehensive overview of clock-controlled cellular and molecular functions in the lungs and highlight the repercussions of clock disruption on the pathophysiology of chronic airway diseases and their exacerbations. Furthermore, we highlight the potential for the molecular clock as a novel chronopharmacological target for the management of lung pathophysiology.", "title": "" }, { "docid": "7d9f003bcce3f99b096e3dcd5d849f6d", "text": "Anti-Money Laundering (AML) can be seen as a central problem for financial institutions because of the need to detect compliance violations in various customer contexts. Changing regulations and the strict supervision of financial authorities create an even higher pressure to establish an effective working compliance program. To support financial institutions in building a simple but efficient compliance program we develop a reference model that describes the process and data view for one key process of AML based on literature analysis and expert interviews. Therefore, this paper describes the customer identification process (CIP) as a part of an AML program using reference modeling techniques. The contribution of this work is (i) the application of multi-perspective reference modeling resulting in (ii) a reference model for AML customer identification. Overall, the results help to understand the complexity of AML processes and to establish a sustainable compliance program.", "title": "" }, { "docid": "cdff0e2d4c0d91ed360569bd28422a1a", "text": "An antipodal Vivaldi antenna (AVA) with novel symmetric two-layer double-slot structure is proposed. 
When excited with equiamplitude and opposite phase, the two slots will have the sum vector of their E-field vectors parallel to the antenna’s plane, which is uniform to the E-field vector in the slot of a balanced AVA with three-layer structure. Compared with a typical AVA with the same size, the proposed antenna has better impedance characteristics because of the amelioration introduced by the coupling between the two slots, as well as the more symmetric radiation patterns and the remarkably lowered cross-polarization level at the endfire direction. For validating the analysis, an UWB balun based on the double-sided parallel stripline is designed for realizing the excitation, and a sample of the proposed antenna is fabricated. The measured results reveal that the proposed has an operating frequency range from 2.8 to 15 GHz, in which the cross-polarization level is less than −24.8 dB. Besides, the group delay of two face-to-face samples has a variation less than 0.62 ns, which exhibits the ability of the novel structure for transferring pulse signal with high fidelity. The simple two-layer structure, together with the improvement both in impedance and radiation characteristics, makes the proposed antenna much desirable for the UWB applications.", "title": "" }, { "docid": "f1bd28aba519845b3a6ea8ef92695e79", "text": "Web 2.0 communities are a quite recent phenomenon which involve large numbers of users and where communication between members is carried out in real time. Despite of those good characteristics, there is still a necessity of developing tools to help users to reach decisions with a high level of consensus in those new virtual environments. In this contribution a new consensus reaching model is presented which uses linguistic preferences and is designed to minimize the main problems that this kind of organization", "title": "" }, { "docid": "13bdee231c9361f5359c50adc688cfd4", "text": "We propose a novel deep network called Mancs that solves the person re-identification problem from the following aspects: fully utilizing the attention mechanism for the person misalignment problem and properly sampling for the ranking loss to obtain more stable person representation. Technically, we contribute a novel fully attentional block which is deeply supervised and can be plugged into any CNN, and a novel curriculum sampling method which is effective for training ranking losses. The learning tasks are integrated into a unified framework and jointly optimized. Experiments have been carried out on Market1501, CUHK03 and DukeMTMC. All the results show that Mancs can significantly outperform the previous state-of-the-arts. In addition, the effectiveness of the newly proposed ideas has been confirmed by extensive ablation studies.", "title": "" }, { "docid": "babac76166921edd1f29a2818380cc5c", "text": "Content-Centric Networking (CCN) is an emerging (inter-)networking architecture with the goal of becoming an alternative to the IP-based Internet. To be considered a viable candidate, CCN must at least have parity with existing solutions for confidential and anonymous communication, e.g., TLS, tcpcrypt, and Tor. ANDa̅NA (Anonymous Named Data Networking Application) was the first proposed solution that addressed the lack of anonymous communication in Named Data Networking (NDN)-a variant of CCN. However, its design and implementation led to performance issues that hinder practical use. In this paper we introduce AC3N: Anonymous Communication for Content-Centric Networking. 
AC3N is an evolution of the ANDa̅NA system that supports high-throughput and low-latency anonymous content retrieval. We discuss the design and initial performance results of this new system.", "title": "" }, { "docid": "21393a1c52b74517336ef3e08dc4d730", "text": "The technical part of these Guidelines and Recommendations, produced under the auspices of EFSUMB, provides an introduction to the physical principles and technology on which all forms of current commercially available ultrasound elastography are based. A difference in shear modulus is the common underlying physical mechanism that provides tissue contrast in all elastograms. The relationship between the alternative technologies is considered in terms of the method used to take advantage of this. The practical advantages and disadvantages associated with each of the techniques are described, and guidance is provided on optimisation of scanning technique, image display, image interpretation and some of the known image artefacts.", "title": "" }, { "docid": "eae289c213d5b67d91bb0f461edae7af", "text": "China has made remarkable progress in its war against poverty since the launching of economic reform in the late 1970s. This paper examines some of the major driving forces of poverty reduction in China. Based on time series and cross-sectional provincial data, the determinants of rural poverty incidence are estimated. The results show that economic growth is an essential and necessary condition for nationwide poverty reduction. It is not, however, a sufficient condition. While economic growth played a dominant role in reducing poverty through the mid-1990s, its impacts has diminished since that time. Beyond general economic growth, growth in specific sectors of the economy is also found to reduce poverty. For example, the growth the agricultural sector and other pro-rural (vs urban-biased) development efforts can also have significant impacts on rural poverty. Notwithstanding the record of the past, our paper is consistent with the idea that poverty reduction in the future will need to rely on more than broad-based growth and instead be dependent on pro-poor policy interventions (such as national poverty alleviation programs) that can be targeted at the poor, trying to directly help the poor to increase their human capital and incomes. Determinants of Rural Poverty Reduction and Pro-poor Economic Growth in China", "title": "" }, { "docid": "0d83203e0002c0342c2378d3e32502d4", "text": "In a crisis ridden business environment, customers have become very averse to surprises. Business windows have become smaller; there is a heightened need for shorter development cycles and higher visibility. All this is translating into more and more customers specifically asking for agile. Service organizations such as Wipro Technologies need to adopt lean and agile methodologies to support the transition. As agile coaches, the biggest challenge we face is in transitioning the mindset of the team from that of a waterfall model to an agile thought pattern. Our experience in converting a waterfall team to agile is shared in this report.", "title": "" }, { "docid": "c7d17145605864aa28106c14954dcae5", "text": "Person re-identification (ReID) is to identify pedestrians observed from different camera views based on visual appearance. It is a challenging task due to large pose variations, complex background clutters and severe occlusions. Recently, human pose estimation by predicting joint locations was largely improved in accuracy. 
It is reasonable to use pose estimation results for handling pose variations and background clutters, and such attempts have obtained great improvement in ReID performance. However, we argue that the pose information was not well utilized and hasn't yet been fully exploited for person ReID. In this work, we introduce a novel framework called Attention-Aware Compositional Network (AACN) for person ReID. AACN consists of two main components: Pose-guided Part Attention (PPA) and Attention-aware Feature Composition (AFC). PPA is learned and applied to mask out undesirable background features in pedestrian feature maps. Furthermore, pose-guided visibility scores are estimated for body parts to deal with part occlusion in the proposed AFC module. Extensive experiments with ablation analysis show the effectiveness of our method, and state-of-the-art results are achieved on several public datasets, including Market-1501, CUHK03, CUHK01, SenseReID, CUHK03-NP and DukeMTMC-reID.", "title": "" }, { "docid": "587f58f291732bfb8954e34564ba76fd", "text": "Blood pressure oscillometric waveforms behave as amplitude modulated nonlinear signals with frequency fluctuations. Their oscillating nature can be better analyzed by the digital Taylor-Fourier transform (DTFT), recently proposed for phasor estimation in oscillating power systems. Based on a relaxed signal model that includes Taylor components greater than zero, the DTFT is able to estimate not only the oscillation itself, as does the digital Fourier transform (DFT), but also its derivatives included in the signal model. In this paper, an oscillometric waveform is analyzed with the DTFT, and its zeroth and first oscillating harmonics are illustrated. The results show that the breathing activity can be separated from the cardiac one through the critical points of the first component, determined by the zero crossings of the amplitude derivatives estimated from the third Taylor order model. On the other hand, phase derivative estimates provide the fluctuations of the cardiac frequency and its derivative, new parameters that could improve the precision of the systolic and diastolic blood pressure assignment. The DTFT envelope estimates uniformly converge from K=3, substantially improving the harmonic separation of the DFT.", "title": "" }, { "docid": "84b018fa45e06755746309014854bb9a", "text": "For years, ontologies have been known in computer science as consensual models of domains of discourse, usually implemented as formal definitions of the relevant conceptual entities. Researchers have written much about the potential benefits of using them, and most of us regard ontologies as central building blocks of the semantic Web and other semantic systems. Unfortunately, the number and quality of actual, \"non-toy\" ontologies available on the Web today is remarkably low. This implies that the semantic Web community has yet to build practically useful ontologies for a lot of relevant domains in order to make the semantic Web a reality. Theoretically minded advocates often assume that the lack of ontologies is because the \"stupid business people haven't realized ontologies' enormous benefits.\" As a liberal market economist, the author assumes that humans can generally figure out what's best for their well-being, at least in the long run, and that they act accordingly. 
In other words, the fact that people haven't yet created as many useful ontologies as the ontology research community would like might indicate either unresolved technical limitations or the existence of sound rationales for why individuals refrain from building them - or both. Indeed, several social and technical difficulties exist that put a brake on developing and eventually constrain the space of possible ontologies", "title": "" } ]
scidocsrr
19ece8fe163e71372c8aec67167a7689
Progressive Reasoning by Module Composition
[ { "docid": "a1ef2bce061c11a2d29536d7685a56db", "text": "This paper presents stacked attention networks (SANs) that learn to answer natural language questions from images. SANs use semantic representation of a question as query to search for the regions in an image that are related to the answer. We argue that image question answering (QA) often requires multiple steps of reasoning. Thus, we develop a multiple-layer SAN in which we query an image multiple times to infer the answer progressively. Experiments conducted on four image QA data sets demonstrate that the proposed SANs significantly outperform previous state-of-the-art approaches. The visualization of the attention layers illustrates the progress that the SAN locates the relevant visual clues that lead to the answer of the question layer-by-layer.", "title": "" } ]
[ { "docid": "de0c44ece780b8037f4476391a07a654", "text": "One of the big challenges in Linked Data consumption is to create visual and natural language interfaces to the data usable for nontechnical users. Ontodia provides support for diagrammatic data exploration, showcased in this publication in combination with the Wikidata dataset. We present improvements to the natural language interface regarding exploring and querying Linked Data entities. The method uses models of distributional semantics to find and rank entity properties related to user input in Ontodia. Various word embedding types and model settings are evaluated, and the results show that user experience in visual data exploration benefits from the proposed approach.", "title": "" }, { "docid": "c39836282acc36e77c95e732f4f1c1bc", "text": "In this paper, a new dataset, HazeRD, is proposed for benchmarking dehazing algorithms under more realistic haze conditions. HazeRD contains fifteen real outdoor scenes, for each of which five different weather conditions are simulated. As opposed to prior datasets that made use of synthetically generated images or indoor images with unrealistic parameters for haze simulation, our outdoor dataset allows for more realistic simulation of haze with parameters that are physically realistic and justified by scattering theory. All images are of high resolution, typically six to eight megapixels. We test the performance of several state-of-the-art dehazing techniques on HazeRD. The results exhibit a significant difference among algorithms across the different datasets, reiterating the need for more realistic datasets such as ours and for more careful benchmarking of the methods.", "title": "" }, { "docid": "55063694f2b4582d423c0764e5758fe2", "text": "The mean-variance principle of Markowitz (1952) for portfolio selection gives disappointing results once the mean and variance are replaced by their sample counterparts. The problem is ampli…ed when the number of assets is large and the sample covariance is singular or nearly singular. In this paper, we investigate four regularization techniques to stabilize the inverse of the covariance matrix: the ridge, spectral cut-o¤, Landweber-Fridman and LARS Lasso. These four methods involve a tuning parameter that needs to be selected. The main contribution is to derive a data-driven method for selecting the tuning parameter in an optimal way, i.e. in order to minimize a quadratic loss function measuring the distance between the estimated allocation and the optimal one. The cross-validation type criterion takes a similar form for the four regularization methods. Preliminary simulations show that regularizing yields a higher out-of-sample performance than the sample based Markowitz portfolio and often outperforms the 1 over N equal weights portfolio. We thank Raymond Kan, Bruce Hansen, and Marc Henry for their helpful comments.", "title": "" }, { "docid": "c7f0a749e38b3b7eba871fca80df9464", "text": "This paper presents QurAna: a large corpus created from the original Quranic text, where personal pronouns are tagged with their antecedence. These antecedents are maintained as an ontological list of concepts, which has proved helpful for information retrieval tasks. QurAna is characterized by: (a) comparatively large number of pronouns tagged with antecedent information (over 24,500 pronouns), and (b) maintenance of an ontological concept list out of these antecedents. We have shown useful applications of this corpus. 
This corpus is the first of its kind covering Classical Arabic text, and could be used for interesting applications for Modern Standard Arabic as well. This corpus will enable researchers to obtain empirical patterns and rules to build new anaphora resolution approaches. Also, this corpus can be used to train, optimize and evaluate existing approaches.", "title": "" }, { "docid": "617ec3be557749e0646ad7092a1afcb6", "text": "The difficulty of directly measuring gene flow has lead to the common use of indirect measures extrapolated from genetic frequency data. These measures are variants of FST, a standardized measure of the genetic variance among populations, and are used to solve for Nm, the number of migrants successfully entering a population per generation. Unfortunately, the mathematical model underlying this translation makes many biologically unrealistic assumptions; real populations are very likely to violate these assumptions, such that there is often limited quantitative information to be gained about dispersal from using gene frequency data. While studies of genetic structure per se are often worthwhile, and FST is an excellent measure of the extent of this population structure, it is rare that FST can be translated into an accurate estimate of Nm.", "title": "" }, { "docid": "57290d8e0a236205c4f0ce887ffed3ab", "text": "We propose a novel, projection based way to incorporate the conditional information into the discriminator of GANs that respects the role of the conditional information in the underlining probabilistic model. This approach is in contrast with most frameworks of conditional GANs used in application today, which use the conditional information by concatenating the (embedded) conditional vector to the feature vectors. With this modification, we were able to significantly improve the quality of the class conditional image generation on ILSVRC2012 (ImageNet) 1000-class image dataset from the current state-of-the-art result, and we achieved this with a single pair of a discriminator and a generator. We were also able to extend the application to super-resolution and succeeded in producing highly discriminative super-resolution images. This new structure also enabled high quality category transformation based on parametric functional transformation of conditional batch normalization layers in the generator. The code with Chainer (Tokui et al., 2015), generated images and pretrained models are available at https://github.com/pfnet-research/sngan_projection.", "title": "" }, { "docid": "d805dc116db48b644b18e409dda3976e", "text": "Based on previous cross-sectional findings, we hypothesized that weight loss could improve several hemostatic factors associated with cardiovascular disease. In a randomized controlled trial, moderately overweight men and women were assigned to one of four weight loss treatment groups or to a control group. Measurements of plasminogen activator inhibitor-1 (PAI-1) antigen, tissue-type plasminogen activator (t-PA) antigen, D-dimer antigen, factor VII activity, fibrinogen, and protein C antigens were made at baseline and after 6 months in 90 men and 88 women. Net treatment weight loss was 9.4 kg in men and 7.4 kg in women. There was no net change (p > 0.05) in D-dimer, fibrinogen, or protein C with weight loss. Significant (p < 0.05) decreases were observed in the combined treatment groups compared with the control group for mean PAI-1 (31% decline), t-PA antigen (24% decline), and factor VII (11% decline). 
Decreases in these hemostatic variables were correlated with the amount of weight lost and the degree that plasma triglycerides declined; these correlations were stronger in men than women. These findings suggest that weight loss can improve abnormalities in hemostatic factors associated with obesity.", "title": "" }, { "docid": "2b3929da96949056bc473e8da947cebe", "text": "This paper presents “Value-Difference Based Exploration” (VDBE), a method for balancing the exploration/exploitation dilemma inherent to reinforcement learning. The proposed method adapts the exploration parameter of ε-greedy in dependence of the temporal-difference error observed from value-function backups, which is considered as a measure of the agent’s uncertainty about the environment. VDBE is evaluated on a multi-armed bandit task, which allows for insight into the behavior of the method. Preliminary results indicate that VDBE seems to be more parameter robust than commonly used ad hoc approaches such as ε-greedy or softmax.", "title": "" }, { "docid": "f25bf9cdbe3330dcb450a66ae25d19bd", "text": "The hypoplastic, weak lateral crus of the nose may cause concave alar rim deformity, and in severe cases, even alar rim collapse. These deformities may lead to both aesthetic disfigurement and functional impairment of the nose. The cephalic part of the lateral crus was folded and fixed to reinforce the lateral crus. The study included 17 women and 15 men with a median age of 24 years. The average follow-up period was 12 months. For 23 patients, the described technique was used to treat concave alar rim deformity, whereas for 5 patients, who had thick and sebaceous skin, it was used to prevent weakness of the alar rim. The remaining 4 patients underwent surgery for correction of a collapsed alar valve. Satisfactory results were achieved without any complications. Turn-in folding of the cephalic portion of lateral crus not only functionally supports the lateral crus, but also provides aesthetic improvement of the nasal tip as successfully as cephalic excision of the lateral crura.", "title": "" }, { "docid": "1d1cec012f9f78b40a0931ae5dea53d0", "text": "Recursive subdivision using interval arithmetic allows us to render CSG combinations of implicit function surfaces with or without anti -aliasing, Related algorithms will solve the collision detection problem for dynamic simulation, and allow us to compute mass. center of gravity, angular moments and other integral properties required for Newtonian dynamics. Our hidden surface algorithms run in ‘constant time.’ Their running times are nearly independent of the number of primitives in a scene, for scenes in which the visible details are not much smaller than the pixels. The collision detection and integration algorithms are utterly robust — collisions are never missed due 10 numerical error and we can provide guaranteed bounds on the values of integrals. CR", "title": "" }, { "docid": "3ff83589bb0a3c93a263be1a3743e8ff", "text": "Recent interest in managing uncertainty in data integration has led to the introduction of probabilistic schema mappings and the use of probabilistic methods to answer queries across multiple databases using two semantics: by-table and by-tuple. In this paper, we develop three possible semantics for aggregate queries: the range, distribution, and expected value semantics, and show that these three semantics combine with the by-table and by-tuple semantics in six ways. 
We present algorithms to process COUNT, AVG, SUM, MIN, and MAX queries under all six semantics and develop results on the complexity of processing such queries under all six semantics. We show that computing COUNT is in PTIME for all six semantics and computing SUM is in PTIME for all but the by-tuple/distribution semantics. Finally, we show that AVG, MIN, and MAX are PTIME computable for all by-table semantics and for the by-tuple/range semantics.We developed a prototype implementation and experimented with both real-world traces and simulated data. We show that, as expected, naive processing of aggregates does not scale beyond small databases with a small number of mappings. The results also show that the polynomial time algorithms are scalable up to several million tuples as well as with a large number of mappings.", "title": "" }, { "docid": "bba813ba24b8bc3a71e1afd31cf0454d", "text": "Betweenness-Centrality measure is often used in social and computer communication networks to estimate the potential monitoring and control capabilities a vertex may have on data flowing in the network. In this article, we define the Routing Betweenness Centrality (RBC) measure that generalizes previously well known Betweenness measures such as the Shortest Path Betweenness, Flow Betweenness, and Traffic Load Centrality by considering network flows created by arbitrary loop-free routing strategies.\n We present algorithms for computing RBC of all the individual vertices in the network and algorithms for computing the RBC of a given group of vertices, where the RBC of a group of vertices represents their potential to collaboratively monitor and control data flows in the network. Two types of collaborations are considered: (i) conjunctive—the group is a sequences of vertices controlling traffic where all members of the sequence process the traffic in the order defined by the sequence and (ii) disjunctive—the group is a set of vertices controlling traffic where at least one member of the set processes the traffic. The algorithms presented in this paper also take into consideration different sampling rates of network monitors, accommodate arbitrary communication patterns between the vertices (traffic matrices), and can be applied to groups consisting of vertices and/or edges.\n For the cases of routing strategies that depend on both the source and the target of the message, we present algorithms with time complexity of O(n2m) where n is the number of vertices in the network and m is the number of edges in the routing tree (or the routing directed acyclic graph (DAG) for the cases of multi-path routing strategies). The time complexity can be reduced by an order of n if we assume that the routing decisions depend solely on the target of the messages.\n Finally, we show that a preprocessing of O(n2m) time, supports computations of RBC of sequences in O(kn) time and computations of RBC of sets in O(n3n) time, where k in the number of vertices in the sequence or the set.", "title": "" }, { "docid": "d2fe01fea2c21492f7db0a0ee51f51e6", "text": "New opportunities and challenges arise with the growing availability of online Arabic reviews. Sentiment analysis of these reviews can help the beneficiary by summarizing the opinions of others about entities or events. Also, for opinions to be comprehensive, analysis should be provided for each aspect or feature of the entity. In this paper, we propose a generic approach that extracts the entity aspects and their attitudes for reviews written in modern standard Arabic. 
The proposed approach does not exploit predefined sets of features, nor domain ontology hierarchy. Instead we add sentiment tags on the patterns and roots of an Arabic lexicon and used these tags to extract the opinion bearing words and their polarities. The proposed system is evaluated on the entity-level using two datasets of 500 movie reviews with accuracy 96% and 1000 restaurant reviews with accuracy 86.7%. Then the system is evaluated on the aspect-level using 500 Arabic reviews in different domains (Novels, Products, Movies, Football game events and Hotels). It extracted aspects, at 80.8% recall and 77.5% precision with respect to the aspects defined by domain experts.", "title": "" }, { "docid": "f0f88be4a2b7619f6fb5cdcca1741d1f", "text": "BACKGROUND\nThere is no evidence from randomized trials to support a strategy of lowering systolic blood pressure below 135 to 140 mm Hg in persons with type 2 diabetes mellitus. We investigated whether therapy targeting normal systolic pressure (i.e., <120 mm Hg) reduces major cardiovascular events in participants with type 2 diabetes at high risk for cardiovascular events.\n\n\nMETHODS\nA total of 4733 participants with type 2 diabetes were randomly assigned to intensive therapy, targeting a systolic pressure of less than 120 mm Hg, or standard therapy, targeting a systolic pressure of less than 140 mm Hg. The primary composite outcome was nonfatal myocardial infarction, nonfatal stroke, or death from cardiovascular causes. The mean follow-up was 4.7 years.\n\n\nRESULTS\nAfter 1 year, the mean systolic blood pressure was 119.3 mm Hg in the intensive-therapy group and 133.5 mm Hg in the standard-therapy group. The annual rate of the primary outcome was 1.87% in the intensive-therapy group and 2.09% in the standard-therapy group (hazard ratio with intensive therapy, 0.88; 95% confidence interval [CI], 0.73 to 1.06; P=0.20). The annual rates of death from any cause were 1.28% and 1.19% in the two groups, respectively (hazard ratio, 1.07; 95% CI, 0.85 to 1.35; P=0.55). The annual rates of stroke, a prespecified secondary outcome, were 0.32% and 0.53% in the two groups, respectively (hazard ratio, 0.59; 95% CI, 0.39 to 0.89; P=0.01). Serious adverse events attributed to antihypertensive treatment occurred in 77 of the 2362 participants in the intensive-therapy group (3.3%) and 30 of the 2371 participants in the standard-therapy group (1.3%) (P<0.001).\n\n\nCONCLUSIONS\nIn patients with type 2 diabetes at high risk for cardiovascular events, targeting a systolic blood pressure of less than 120 mm Hg, as compared with less than 140 mm Hg, did not reduce the rate of a composite outcome of fatal and nonfatal major cardiovascular events. (ClinicalTrials.gov number, NCT00000620.)", "title": "" }, { "docid": "1cb39c8a2dd05a8b2241c9c795ca265f", "text": "An ever growing interest and wide adoption of Internet of Things (IoT) and Web technologies are unleashing a true potential of designing a broad range of high-quality consumer applications. Smart cities, smart buildings, and e-health are among various application domains which are currently benefiting and will continue to benefit from IoT and Web technologies in a foreseeable future. 
Similarly, semantic technologies have proven their effectiveness in various domains and a few among multiple challenges which semantic Web technologies are addressing are to (i) mitigate heterogeneity by providing semantic inter-operability, (ii) facilitate easy integration of data application, (iii) deduce and extract new knowledge to build applications providing smart solutions, and (iv) facilitate inter-operability among various data processes including representation, management and storage of data. In this tutorial, our focus will be on the combination of Web technologies, Semantic Web, and IoT technologies and we will present to our audience that how a merger of these technologies is leading towards an evolution from IoT to Web of Things (WoT) to Semantic Web of Things. This tutorial will introduce the basics of Internet of Things, Web of Things and Semantic Web and will demonstrate tools and techniques designed to enable the rapid development of semantics-based Web of Things applications. One key aspect of this tutorial is to familiarize its audience with the open source tools designed by different semantic Web, IoT and WoT based projects and provide the audience a rich hands-on experience to use these tools and build smart applications with minimal efforts. Thus, reducing the learning curve to its maximum. We will showcase real-world use case scenarios which are designed using semantically-enabled WoT frameworks (e.g. CityPulse, FIESTA-IoT and M3).", "title": "" }, { "docid": "50cc2033252216368c3bf19ea32b8a2c", "text": "Sometimes you just have to clench your teeth and go for the differential matrix algebra. And the central limit theorems. Together with the maximum likelihood techniques. And the static mean variance portfolio theory. Not forgetting the dynamic asset pricing models. And these are just the tools you need before you can start making empirical inferences in financial economics.” So wrote Ruben Lee, playfully, in a review of The Econometrics of Financial Markets, winner of TIAA-CREF’s  Paul A. Samuelson Award. In  economist Harry M. Markowitz, who in  won the Nobel Prize in Economics, published his landmark thesis “Portfolio Selection” as an article in the Journal of Finance, and financial economics was born. Over the subsequent decades, this young and burgeoning field saw many advances in theory but few in econometric technique or empirical results. Then, nearly four decades later, Campbell, Lo, and MacKinlay’s The Econometrics of Financial Markets made a bold leap forward by integrating theory and empirical work. The three economists combined their own pathbreaking research with a generation of foundational work in modern financial theory and research. The book includes treatment of topics from the predictability of asset returns to the capital asset pricing model and arbitrage pricing theory, from statistical fractals to chaos theory. Read widely in academe as well as in the business world, The Econometrics of Financial Markets has become a new landmark in financial economics, extending and enhancing the Nobel Prize– winning work established by the early trailblazers in this important field.", "title": "" }, { "docid": "e5a18d6df921ab96da8e106cdb4eeac7", "text": "This article extends psychological methods and concepts into a domain that is as profoundly consequential as it is poorly understood: intelligence analysis. 
We report findings from a geopolitical forecasting tournament that assessed the accuracy of more than 150,000 forecasts of 743 participants on 199 events occurring over 2 years. Participants were above average in intelligence and political knowledge relative to the general population. Individual differences in performance emerged, and forecasting skills were surprisingly consistent over time. Key predictors were (a) dispositional variables of cognitive ability, political knowledge, and open-mindedness; (b) situational variables of training in probabilistic reasoning and participation in collaborative teams that shared information and discussed rationales (Mellers, Ungar, et al., 2014); and (c) behavioral variables of deliberation time and frequency of belief updating. We developed a profile of the best forecasters; they were better at inductive reasoning, pattern detection, cognitive flexibility, and open-mindedness. They had greater understanding of geopolitics, training in probabilistic reasoning, and opportunities to succeed in cognitively enriched team environments. Last but not least, they viewed forecasting as a skill that required deliberate practice, sustained effort, and constant monitoring of current affairs.", "title": "" }, { "docid": "67925645b590cba622dd101ed52cf9e2", "text": "This study is the first to demonstrate that features of psychopathy can be reliably and validly detected by lay raters from \"thin slices\" (i.e., small samples) of behavior. Brief excerpts (5 s, 10 s, and 20 s) from interviews with 96 maximum-security inmates were presented in video or audio form or in both modalities combined. Forty raters used these excerpts to complete assessments of overall psychopathy and its Factor 1 and Factor 2 components, various personality disorders, violence proneness, and attractiveness. Thin-slice ratings of psychopathy correlated moderately and significantly with psychopathy criterion measures, especially those related to interpersonal features of psychopathy, particularly in the 5- and 10-s excerpt conditions and in the video and combined channel conditions. These findings demonstrate that first impressions of psychopathy and related constructs, particularly those pertaining to interpersonal functioning, can be reasonably reliable and valid. They also raise intriguing questions regarding how individuals form first impressions and about the extent to which first impressions may influence the assessment of personality disorders. (PsycINFO Database Record (c) 2009 APA, all rights reserved).", "title": "" }, { "docid": "2b7465ad660dadd040bd04839d3860f3", "text": "Simulation of a pen-and-ink illustration style in a realtime rendering system is a challenging computer graphics problem. Tonal art maps (TAMs) were recently suggested as a solution to this problem. Unfortunately, only the hatching aspect of pen-and-ink media was addressed thus far. We extend the TAM approach and enable representation of arbitrary textures. We generate TAM images by distributing stroke primitives according to a probability density function. This function is derived from the input image and varies depending on the TAM’s scale and tone levels. The resulting depiction of textures approximates various styles of pen-and-ink illustrations such as outlining, stippling, and hatching.", "title": "" } ]
scidocsrr
aee69768a3b925dff146ec7f6681e070
A unifying framework for detecting outliers and change points from time series
[ { "docid": "49f1d3ebaf3bb3e575ac3e40101494d9", "text": "This paper discusses the current status of research on fraud detection undertaken a.s part of the European Commissionfunded ACTS ASPECT (Advanced Security for Personal Communications Technologies) project, by Royal Holloway University of London. Using a recurrent neural network technique, we uniformly distribute prototypes over Toll Tickets. sampled from the U.K. network operator, Vodafone. The prototypes, which continue to adapt to cater for seasonal or long term trends, are used to classify incoming Toll Tickets to form statistical behaviour proFdes covering both the short and long-term past. These behaviour profiles, maintained as probability distributions, comprise the input to a differential analysis utilising a measure known as the HeUinger distance[5] between them as an alarm criteria. Fine tuning the system to minimise the number of false alarms poses a significant ask due to the low fraudulent/non fraudulent activity ratio. We benefit from using unsupervised learning in that no fraudulent examples ate requited for training. This is very relevant considering the currently secure nature of GSM where fraud scenarios, other than Subscription Fraud, have yet to manifest themselves. It is the aim of ASPECT to be prepared for the would-be fraudster for both GSM and UMTS, Introduction When a mobile originated phone call is made or various inter-call criteria are met he cells or switches that a mobile phone is communicating with produce information pertaining to the call attempt. These data records, for billing purposes, are referred to as Toll Tickets. Toll Tickets contain a wealth of information about the call so that charges can be made to the subscriber. By considering well studied fraud indicators these records can also be used to detect fraudulent activity. By this we mean i terrogating a series of recent Toll Tickets and comparing a function of the various fields with fixed criteria, known as triggers. A trigger, if activated, raises an alert status which cumulatively would lead to investigation by the network operator. Some xample fraud indicators are that of a new subscriber making long back-to-back international calls being indicative of direct call selling or short back-to-back calls to a single land number indicating an attack on a PABX system. Sometimes geographical information deduced from the cell sites visited in a call can indicate cloning. This can be detected through setting a velocity trap. Fixed trigger criteria can be set to catch such extremes of activity, but these absolute usage criteria cannot trap all types of fraud. An alternative approach to the problem is to perform a differential analysis. Here we develop behaviour profiles relating to the mobile phone’s activity and compare its most recent activities with a longer history of its usage. Techniques can then be derived to determine when the mobile phone’s behaviour changes ignificantly. One of the most common indicators of fraud is a significant change in behaviour. The performance expectations of such a system must be of prime concern when developing any fraud detection strategy. To implement a real time fraud detection tool on the Vodafone network in the U.K, it was estimated that, on average, the system would need to be able to process around 38 Toll Tickets per second. This figure varied with peak and off-peak usage and also had seasonal trends. The distribution of the times that calls are made and the duration of each call is highly skewed. 
Considering all calls that are made in the U.K., including the use of supplementary services, we found the average call duration to be less than eight seconds, hardly time to order a pizza. In this paper we present one of the methods developed under ASPECT that tackles the problem of skewed distributions and seasonal trends using a recurrent neural network technique that is based around unsupervised learning. We envisage this technique would form part of a larger fraud detection suite that also comprises a rule based fraud detection tool and a neural network fraud detection tool that uses supervised learning on a multi-layer perceptron. Each of the systems has its strengths and weaknesses but we anticipate that the hybrid system will combine their strengths.", "title": "" } ]
[ { "docid": "48544ec3225799c82732db7b3215833b", "text": "Christian M Jones Laura Scholes Daniel Johnson Mary Katsikitis Michelle C. Carras University of the Sunshine Coast University of the Sunshine Coast Queensland University of Technology University of the Sunshine Coast Johns Hopkins University Queensland, Australia Queensland, Australia Queensland, Australia Queensland, Australia Baltimore, MD, USA [email protected] [email protected] [email protected] [email protected] [email protected]", "title": "" }, { "docid": "c1a4c276865d830b66a794a55cabe813", "text": "In recent years, fraud is increasing rapidly with the development of modern technology and global communication. Although many literatures have addressed the fraud detection problem, these existing works focus only on formulating the fraud detection problem as a binary classification problem. Due to limitation of information provided by telecommunication records, such classifier-based approaches for fraudulent phone call detection normally do not work well. In this paper, we develop a graph-mining-based fraudulent phone call detection framework for a mobile application to automatically annotate fraudulent phone numbers with a \"fraud\" tag, which is a crucial prerequisite for distinguishing fraudulent phone calls from normal phone calls. Our detection approach performs a weighted HITS algorithm to learn the trust value of a remote phone number. Based on telecommunication records, we build two kinds of directed bipartite graph: i) CPG and ii) UPG to represent telecommunication behavior of users. To weight the edges of CPG and UPG, we extract features for each pair of user and remote phone number in two different yet complementary aspects: 1) duration relatedness (DR) between user and phone number; and 2) frequency relatedness (FR) between user and phone number. Upon weighted CPG and UPG, we determine a trust value for each remote phone number. Finally, we conduct a comprehensive experimental study based on a dataset collected through an anti-fraud mobile application, Whoscall. The results demonstrate the effectiveness of our weighted HITS-based approach and show the strength of taking both DR and FR into account in feature extraction.", "title": "" }, { "docid": "5fb732fd3210a5c9bba42426b1b4ce49", "text": "While there are optimal TSP solvers, as well as recent learning-based approaches, the generalization of the TSP to the Multiple Traveling Salesmen Problem is much less studied. Here, we design a neural network solution that treats the salesmen, cities and depot as three different sets of varying cardinalities. We apply a novel technique that combines elements from recent architectures that were developed for sets, as well as elements from graph networks. Coupled with new constraint enforcing output layers, a dedicated loss, and a search method, our solution is shown to outperform all the meta-heuristics of the leading solver in the field.", "title": "" }, { "docid": "64acb2d16c23f2f26140c0bce1785c9b", "text": "Physical forces of gravity, hemodynamic stresses, and movement play a critical role in tissue development. Yet, little is known about how cells convert these mechanical signals into a chemical response. This review attempts to place the potential molecular mediators of mechanotransduction (e.g. stretch-sensitive ion channels, signaling molecules, cytoskeleton, integrins) within the context of the structural complexity of living cells. 
The model presented relies on recent experimental findings, which suggests that cells use tensegrity architecture for their organization. Tensegrity predicts that cells are hard-wired to respond immediately to mechanical stresses transmitted over cell surface receptors that physically couple the cytoskeleton to extracellular matrix (e.g. integrins) or to other cells (cadherins, selectins, CAMs). Many signal transducing molecules that are activated by cell binding to growth factors and extracellular matrix associate with cytoskeletal scaffolds within focal adhesion complexes. Mechanical signals, therefore, may be integrated with other environmental signals and transduced into a biochemical response through force-dependent changes in scaffold geometry or molecular mechanics. Tensegrity also provides a mechanism to focus mechanical energy on molecular transducers and to orchestrate and tune the cellular response.", "title": "" }, { "docid": "87835d75704f493639744abbf0119bdb", "text": "Developers of cloud-scale applications face a difficult decision of which kind of storage to use, summarised by the CAP theorem. Currently the choice is between classical CP databases, which provide strong guarantees but are slow, expensive, and unavailable under partition, and NoSQL-style AP databases, which are fast and available, but too hard to program against. We present an alternative: Cure provides the highest level of guarantees that remains compatible with availability. These guarantees include: causal consistency (no ordering anomalies), atomicity (consistent multi-key updates), and support for high-level data types (developer friendly API) with safe resolution of concurrent updates (guaranteeing convergence). These guarantees minimise the anomalies caused by parallelism and distribution, thus facilitating the development of applications. This paper presents the protocols for highly available transactions, and an experimental evaluation showing that Cure is able to achieve scalability similar to eventually-consistent NoSQL databases, while providing stronger guarantees.", "title": "" }, { "docid": "1ad24b9cb8815ee5ee8ef723f2cddc65", "text": "Scene text recognition is a useful but very challenging task due to uncontrolled condition of text in natural scenes. This paper presents a novel approach to recognize text in scene images. In the proposed technique, a word image is first converted into a sequential column vectors based on Histogram of Oriented Gradient (HOG). The Recurrent Neural Network (RNN) is then adapted to classify the sequential feature vectors into the corresponding word. Compared with most of the existing methods that follow a bottom-up approach to form words by grouping the recognized characters, our proposed method is able to recognize the whole word images without character-level segmentation and recognition. Experiments on a number of publicly available datasets show that the proposed method outperforms the state-of-the-art techniques significantly. In addition, the recognition results on publicly available datasets provide a good benchmark for the future research in this area.", "title": "" }, { "docid": "5481f319296c007412e62129d2ec5943", "text": "We propose a new family of optimization criteria for variational auto-encoding models, generalizing the standard evidence lower bound. 
We provide conditions under which they recover the data distribution and learn latent features, and formally show that common issues such as blurry samples and uninformative latent features arise when these conditions are not met. Based on these new insights, we propose a new sequential VAE model that can generate sharp samples on the LSUN image dataset based on pixel-wise reconstruction loss, and propose an optimization criterion that encourages unsupervised learning of informative latent features.", "title": "" }, { "docid": "851069abb48c8941ec7a2b82230c846d", "text": "Fiducial markers are artificial landmarks added to a scene to facilitate locating point correspondences between images, or between images and a known model. Reliable fiducials solve the interest point detection and matching problems when adding markers is convenient. The proper design of fiducials and the associated computer vision algorithms to detect them can enable accurate pose detection for applications ranging from augmented reality, input devices for HCI, to robot navigation. Marker systems typically have two stages, hypothesis generation from unique image features and verification/identification. A set of criteria for high robustness and practical use are identified and then optimized to produce the ARTag fiducial marker system. An edge-based method robust to lighting and partial occlusion is used for the hypothesis stage, and a reliable digital coding system is used for the identification and verification stage. Using these design criteria large gains in performance are achieved by ARTag over conventional ad hoc designs.", "title": "" }, { "docid": "90ea7dc1052ffc5e06a09cf4199be320", "text": "This paper presents the ringing suppressing method in class D resonant inverter operating at 13.56MHz for wireless power transfer systems. The ringing loop in half-bridge topology inverter at 13.56MHz strongly effect to the stable, performance and efficiency of the inverter. Typically, this ringing can be damped by using a snubber circuit which is placed parallel to the MOSFETs or using a damping circuit which is place inside the ringing loop. But in the resonant inverter with high power and high frequency, the snubber circuit or general damping circuit may reduce performance and efficiency of inverter. A new damping circuit combining with drive circuit design solution is proposed in this paper. The simulation and experiment results showed that the proposed design significantly suppresses the ringing current and ringing voltage in the circuit. The power loss on the MOSFETs is reduced while the efficiency of inverter increases 2% to obtain 93.1% at 1.2kW output power. The inverter becomes more stable and compact.", "title": "" }, { "docid": "15f46090f74282257979c38c5f151469", "text": "Integrating data from multiple sources has been a longstanding challenge in the database community. Techniques such as privacy-preserving data mining promises privacy, but assume data has integration has been accomplished. Data integration methods are seriously hampered by inability to share the data to be integrated. This paper lays out a privacy framework for data integration. Challenges for data integration in the context of this framework are discussed, in the context of existing accomplishments in data integration. 
Many of these challenges are opportunities for the data mining community.", "title": "" }, { "docid": "3982c66e695fdefe36d8d143247add88", "text": "A recognition scheme that scales efficiently to a large number of objects is presented. The efficiency and quality is exhibited in a live demonstration that recognizes CD-covers from a database of 40000 images of popular music CD’s. The scheme builds upon popular techniques of indexing descriptors extracted from local regions, and is robust to background clutter and occlusion. The local region descriptors are hierarchically quantized in a vocabulary tree. The vocabulary tree allows a larger and more discriminatory vocabulary to be used efficiently, which we show experimentally leads to a dramatic improvement in retrieval quality. The most significant property of the scheme is that the tree directly defines the quantization. The quantization and the indexing are therefore fully integrated, essentially being one and the same. The recognition quality is evaluated through retrieval on a database with ground truth, showing the power of the vocabulary tree approach, going as high as 1 million images.", "title": "" }, { "docid": "c0df2d0b9d0cda0bc2ae8689857887c1", "text": "Collective operations and non-blocking point-to-point operations have always been part of MPI. Although non-blocking collective operations are an obvious extension to MPI, there have been no comprehensive studies of this functionality. In this paper we present LibNBC, a portable high-performance library for implementing non-blocking collective MPI communication operations. LibNBC provides non-blocking versions of all MPI collective operations, is layered on top of MPI-1, and is portable to nearly all parallel architectures. To measure the performance characteristics of our implementation, we also present a microbenchmark for measuring both latency and overlap of computation and communication. Experimental results demonstrate that the blocking performance of the collective operations in our library is comparable to that of collective operations in other high-performance MPI implementations. Our library introduces a very low overhead between the application and the underlying MPI and thus, in conjunction with the potential to overlap communication with computation, offers the potential for optimizing real-world applications.", "title": "" }, { "docid": "ce8729f088aaf9f656c9206fc67ff4bd", "text": "Traditional passive radar detectors compute cross correlation of the raw data in the reference and surveillance channels. However, there is no optimality guarantee for this detector in the presence of a noisy reference. Here, we develop a new detector that utilizes a test statistic based on the cross correlation of the principal left singular vectors of the reference and surveillance signal-plus-noise matrices. This detector offers better performance by exploiting the inherent low-rank structure when the transmitted signals are a weighted periodic summation of several identical waveforms (amplitude and phase modulation), as is the case with commercial digital illuminators as well as noncooperative radar. We consider a scintillating target. We provide analytical detection performance guarantees establishing signal-to-noise ratio thresholds above which the proposed detection statistic reliably discriminates, in an asymptotic sense, the signal versus no-signal hypothesis. We validate these results using extensive numerical simulations. 
We demonstrate the “near constant false alarm rate (CFAR)” behavior of the proposed detector with respect to a fixed, SNR-independent threshold and contrast that with the need to adjust the detection threshold in an SNR-dependent manner to maintain CFAR for other detectors found in the literature. Extensions of the proposed detector for settings applicable to orthogonal frequency division multiplexing (OFDM), adaptive radar are discussed.", "title": "" }, { "docid": "34a21bf5241d8cc3a7a83e78f8e37c96", "text": "A current-biased voltage-programmed (CBVP) pixel circuit for active-matrix organic light-emitting diode (AMOLED) displays is proposed. The pixel circuit can not only ensure an accurate and fast compensation for the threshold voltage variation and degeneration of the driving TFT and the OLED, but also provide the OLED with a negative bias during the programming period. The negative bias prevents the OLED from a possible light emitting during the programming period and potentially suppresses the degradation of the OLED.", "title": "" }, { "docid": "82857fedec78e8317498e3c66268d965", "text": "In this paper, we provide an improved evolutionary algorithm for bilevel optimization. It is an extension of a recently proposed Bilevel Evolutionary Algorithm based on Quadratic Approximations (BLEAQ). Bilevel optimization problems are known to be difficult and computationally demanding. The recently proposed BLEAQ approach has been able to bring down the computational expense significantly as compared to the contemporary approaches. The strategy proposed in this paper further improves the algorithm by incorporating archiving and local search. Archiving is used to store the feasible members produced during the course of the algorithm that provide a larger pool of members for better quadratic approximations of optimal lower level solutions. Frequent local searches at upper level supported by the quadratic approximations help in faster convergence of the algorithm. The improved results have been demonstrated on two different sets of test problems, and comparison results against the contemporary approaches are also provided.", "title": "" }, { "docid": "8d7e8ee0f6305d50276d25ce28bcdf9c", "text": "The advancement of visual sensing has introduced better capturing of the discrete information from a complex, crowded scene for assisting in the analysis. However, after reviewing existing system, we find that majority of the work carried out till date is associated with significant problems in modeling event detection as well as reviewing abnormality of the given scene. Therefore, the proposed system introduces a model that is capable of identifying the degree of abnormality for an event captured on the crowded scene using unsupervised training methodology. The proposed system contributes to developing a novel region-wise repository to extract the contextual information about the discrete-event for a given scene. The study outcome shows highly improved the balance between the computational time and overall accuracy as compared to the majority of the standard research work emphasizing on event", "title": "" }, { "docid": "283d3f1ff0ca4f9c0a2a6f4beb1f7771", "text": "As a proof-of-concept for the vision “SSD as SQL Engine” (SaS in short), we demonstrate that SQLite [4], a popular mobile database engine, in its entirety can run inside a real SSD development platform. 
By turning the storage device into a database engine, SaS allows applications to directly interact with a full SQL database server running inside the storage device. In SaS, the SQL language itself, not the traditional dummy block interface, will be provided as the new interface between applications and storage device. In addition, since SaS plays the role of the unified platform of database computing node and storage node, the host and the storage need not be segregated any more as separate physical computing components.", "title": "" }, { "docid": "53a49412d75190357df5d159b11843f0", "text": "Perception and reasoning are basic human abilities that are seamlessly connected as part of human intelligence. However, in current machine learning systems, the perception and reasoning modules are incompatible. Tasks requiring joint perception and reasoning ability are difficult to accomplish autonomously and still demand human intervention. Inspired by the way language experts decoded Mayan scripts by joining two abilities in an abductive manner, this paper proposes the abductive learning framework. The framework learns perception and reasoning simultaneously with the help of a trial-and-error abductive process. We present the Neural-Logical Machine as an implementation of this novel learning framework. We demonstrate that, using human-like abductive learning, the machine learns from a small set of simple hand-written equations and then generalizes well to complex equations, a feat that is beyond the capability of state-of-the-art neural network models. The abductive learning framework explores a new direction for approaching human-level learning ability.", "title": "" }, { "docid": "256ab1145fe2fb9de4c0b89b7ce1321c", "text": "Public elections are one of the bases upon which representative democracy is built. Thus, it is of the utmost importance that governments and organizations are able to successfully hold non-fraudulent representative elections. Several methods are employed in order to allow citizens to cast their votes, such as Ballot-based voting, purely electronic methods, and Electronic Voting Machines, among others. However, we argue that current methods, especially those based on electronic platforms, provide unsatisfactory levels of transparency to voters, thus harming the trust voters have that the vote they cast was the same one counted by election officials, a problem known as voter confidence. Instead of stepping back to traditional and inefficient offline strategies, we suggest the modernization of State structures by the use of emerging technologies. In this research, we explore the possibility of using Blockchain technology to help in solving those transparency and confidence problems. First, we give an overview of Blockchain itself and other uses focused on societal problems and their respective analysis. We then analyze how the adoption of Blockchain into a digital government repertoire can contribute to common e-voting issues and also promote elections transparency, increase auditability, enhance voter confidence and strengthen democracy. By attending to this poster presentation, visitors will have a clear understanding of what Blockchain is and its basic concepts, why the market and researchers are so excited about it, how it can help to solve common voting systems issues, who is already using it, and the benefits and potential risks of its adoption.", "title": "" }, { "docid": "7895810c92a80b6d5fd8b902241d66c9", "text": "This paper discusses a high-voltage pulse generator for producing corona plasma.
The generator consists of three resonant charging circuits, a transmission line transformer, and a triggered spark-gap switch. Voltage pulses in the order of 30–100 kV with a rise time of 10–20 ns, a pulse duration of 100–200 ns, a pulse repetition rate of 1–900 pps, an energy per pulse of 0.5–12 J, and the average power of up to 10 kW have been achieved with total energy conversion efficiency of 80%–90%. Moreover, the system has been used in four industrial demonstrations on volatile organic compounds removal, odor emission control, and biogas conditioning.", "title": "" } ]
scidocsrr
954e8971e0f2006a8d29603cd176e861
Applications of the balanced scorecard for strategic management and performance measurement in the health sector.
[ { "docid": "4e1414ce6a8fde64b0e7a89a2ced1a7e", "text": "Several innovative healthcare executives have recently introduced a new business strategy implementation tool: the Balanced Scorecard. The scorecard's measurement and management system provides the following potential benefits to healthcare organizations: It aligns the organization around a more market-oriented, customer-focused strategy It facilitates, monitors, and assesses the implementation of the strategy It provides a communication and collaboration mechanism It assigns accountability for performance at all levels of the organization It provides continual feedback on the strategy and promotes adjustments to marketplace and regulatory changes. We surveyed executives in nine provider organizations that were implementing the Balanced Scorecard. We asked about the following issues relating to its implementation and effect: 1. The role of the Balanced Scorecard in relation to a well-defined vision, mission, and strategy 2. The motivation for adopting the Balanced Scorecard 3. The difference between the Balanced Scorecard and other measurement systems 4. The process followed to develop and implement the Balanced Scorecard 5. The challenges and barriers during the development and implementation process 6. The benefits gained by the organization from adoption and use. The executives reported that the Balanced Scorecard strategy implementation and performance management tool could be successfully applied in the healthcare sector, enabling organizations to improve their competitive market positioning, financial results, and customer satisfaction. This article concludes with guidelines for other healthcare provider organizations to capture the benefits of the Balanced Scorecard performance management system.", "title": "" }, { "docid": "d7569e715a355060d30ff91f8327771c", "text": "The Ministry of Public Health (MOPH) in Afghanistan has developed a balanced scorecard (BSC) to regularly monitor the progress of its strategy to deliver a basic package of health services. Although frequently used in other health-care settings, this represents the first time that the BSC has been employed in a developing country. The BSC was designed via a collaborative process focusing on translating the vision and mission of the MOPH into 29 core indicators and benchmarks representing six different domains of health services, together with two composite measures of performance. In the absence of a routine health information system, the 2004 BSC for Afghanistan was derived from a stratified random sample of 617 health facilities, 5719 observations of patient-provider interactions, and interviews with 5597 patients, 1553 health workers, and 13,843 households. Nationally, health services were found to be reaching more of the poor than the less-poor population, and providing for more women than men, both key concerns of the government. However, serious deficiencies were found in five domains, and particularly in counselling patients, providing delivery care during childbirth, monitoring tuberculosis treatment, placing staff and equipment, and establishing functional village health councils. The BSC also identified wide variations in performance across provinces; no province performed better than the others across all domains. 
The innovative adaptation of the BSC in Afghanistan has provided a useful tool to summarize the multidimensional nature of health-services performance, and is enabling managers to benchmark performance and identify strengths and weaknesses in the Afghan context.", "title": "" } ]
[ { "docid": "dbda28573269e3f87c520fa34395e533", "text": "The requirements for dielectric measurements on polar liquids lie largely in two areas. First there is scientific interest in revealing the structure of and interactions between the molecules - this can be studied through dielectric spectroscopy. Secondly, polar liquids are widely used as dielectric reference and tissue equivalent materials for biomedical studies and for mobile telecommunications, health and safety related measurements. This review discusses these roles for polar liquids and surveys the techniques available for the measurement of their complex permittivity at RF and Microwave frequencies. One aim of the review is to guide researchers and metrologists in the choice of measurement methods and in their optimization. Particular emphasis is placed on the importance of traceability in these measurements to international standards", "title": "" }, { "docid": "0f42ee3de2d64956fc8620a2afc20f48", "text": "In 4 experiments, the authors addressed the mechanisms by which grammatical gender (in Italian and German) may come to affect meaning. In Experiments 1 (similarity judgments) and 2 (semantic substitution errors), the authors found Italian gender effects for animals but not for artifacts; Experiment 3 revealed no comparable effects in German. These results suggest that gender effects arise as a generalization from an established association between gender of nouns and sex of human referents, extending to nouns referring to sexuated entities. Across languages, such effects are found when the language allows for easy mapping between gender of nouns and sex of human referents (Italian) but not when the mapping is less transparent (German). A final experiment provided further constraints: These effects during processing arise at a lexical-semantic level rather than at a conceptual level.", "title": "" }, { "docid": "d1afaada6bf5927d9676cee61d3a1d49", "text": "t-Closeness is a privacy model recently defined for data anonymization. A data set is said to satisfy t-closeness if, for each group of records sharing a combination of key attributes, the distance between the distribution of a confidential attribute in the group and the distribution of the attribute in the entire data set is no more than a threshold t. Here, we define a privacy measure in terms of information theory, similar to t-closeness. Then, we use the tools of that theory to show that our privacy measure can be achieved by the postrandomization method (PRAM) for masking in the discrete case, and by a form of noise addition in the general case.", "title": "" }, { "docid": "3ec3285a2babcd3a00b453956dda95aa", "text": "Microblog normalisation methods often utilise complex models and struggle to differentiate between correctly-spelled unknown words and lexical variants of known words. In this paper, we propose a method for constructing a dictionary of lexical variants of known words that facilitates lexical normalisation via simple string substitution (e.g. tomorrow for tmrw). We use context information to generate possible variant and normalisation pairs and then rank these by string similarity. Highlyranked pairs are selected to populate the dictionary. We show that a dictionary-based approach achieves state-of-the-art performance for both F-score and word error rate on a standard dataset. Compared with other methods, this approach offers a fast, lightweight and easy-to-use solution, and is thus suitable for high-volume microblog pre-processing. 
1 Lexical Normalisation A staggering number of short text “microblog” messages are produced every day through social media such as Twitter (Twitter, 2011). The immense volume of real-time, user-generated microblogs that flows through sites has been shown to have utility in applications such as disaster detection (Sakaki et al., 2010), sentiment analysis (Jiang et al., 2011; González-Ibáñez et al., 2011), and event discovery (Weng and Lee, 2011; Benson et al., 2011). However, due to the spontaneous nature of the posts, microblogs are notoriously noisy, containing many non-standard forms — e.g., tmrw “tomorrow” and 2day “today” — which degrade the performance of natural language processing (NLP) tools (Ritter et al., 2010; Han and Baldwin, 2011). To reduce this effect, attempts have been made to adapt NLP tools to microblog data (Gimpel et al., 2011; Foster et al., 2011; Liu et al., 2011b; Ritter et al., 2011). An alternative approach is to pre-normalise non-standard lexical variants to their standard orthography (Liu et al., 2011a; Han and Baldwin, 2011; Xue et al., 2011; Gouws et al., 2011). For example, se u 2morw!!! would be normalised to see you tomorrow! The normalisation approach is especially attractive as a preprocessing step for applications which rely on keyword match or word frequency statistics. For example, earthqu, eathquake, and earthquakeee — all attested in a Twitter corpus — have the standard form earthquake; by normalising these types to their standard form, better coverage can be achieved for keyword-based methods, and better word frequency estimates can be obtained. In this paper, we focus on the task of lexical normalisation of English Twitter messages, in which out-of-vocabulary (OOV) tokens are normalised to their in-vocabulary (IV) standard form, i.e., a standard form that is in a dictionary. Following other recent work on lexical normalisation (Liu et al., 2011a; Han and Baldwin, 2011; Gouws et al., 2011; Liu et al., 2012), we specifically focus on one-to-one normalisation in which one OOV token is normalised to one IV word. Naturally, not all OOV words in microblogs are lexical variants of IV words: named entities, e.g., are prevalent in microblogs, but not all named entities are included in our dictionary. One challenge for lexical normalisation is therefore to dis-", "title": "" }, { "docid": "38ea50d7e6e5e1816005b3197828dbae", "text": "Life sciences research is based on individuals, often with diverse skills, assembled into research groups. These groups use their specialist expertise to address scientific problems. The in silico experiments undertaken by these research groups can be represented as workflows involving the co-ordinated use of analysis programs and information repositories that may be globally distributed. With regards to Grid computing, the requirements relate to the sharing of analysis and information resources rather than sharing computational power. The Grid project has developed the Taverna workbench for the composition and execution of workflows for the life sciences community. This experience paper describes lessons learnt during the development of Taverna. A common theme is the importance of understanding how workflows fit into the scientists’ experimental context. 
The lessons reflect an evolving understanding of life scientists’ requirements on a workflow environment, which is relevant to other areas of data intensive and exploratory science.", "title": "" }, { "docid": "1e5bf278ee006aa5bd0721bba051ad6d", "text": "The dynamic features of the JavaScript language not only promote various means for users to interact with websites through Web browsers, but also pose serious security threats to both users and websites. On top of this, obfuscation has become a popular technique among malicious JavaScript code that tries to hide its malicious purpose and to evade the detection of anti-virus software. To defend against obfuscated malicious JavaScript code, in this paper we propose a mostly static approach called JStill. JStill captures some essential characteristics of obfuscated malicious code by function invocation based analysis. It also leverages the combination of static analysis and lightweight runtime inspection so that it can not only detect, but also prevent the execution of the obfuscated malicious JavaScript code in browsers. Our evaluation based on real-world malicious JavaScript samples as well as Alexa top 50,000 websites demonstrates high detection accuracy (all in our experiment) and low false positives of JStill. Meanwhile, JStill only incurs negligible performance overhead, making it a practical solution to preventing obfuscated malicious JavaScript code.", "title": "" }, { "docid": "e8bd4676b8ee39c1da853553bfc8fabc", "text": "BACKGROUND\nThe upper face and periocular region is a complex and dynamic part of the face. Successful rejuvenation requires a combination of minimally invasive modalities to fill dents and hollows, resurface rhytides, improve pigmentation, and smooth the mimetic muscles of the face without masking facial expression.\n\n\nMETHODS\nUsing review of the literature and clinical experience, the authors discuss our strategy for combining botulinum toxin, facial filler, ablative laser, intense pulsed light, microfocused ultrasound, and microneedle fractional radiofrequency to treat aesthetic problems of the upper face including brow ptosis, temple volume loss, A-frame deformity of the superior sulcus, and superficial and deep rhytides.\n\n\nRESULTS\nWith attention to safety recommendations, injectable, light, laser, and energy-based treatments can be safely combined in experienced hands to provide enhanced outcomes in the rejuvenation of the upper face.\n\n\nCONCLUSION\nProviding multiple treatments in 1 session improves patient satisfaction by producing greater improvements in a shorter amount of time and with less overall downtime than would be necessary with multiple office visits.", "title": "" }, { "docid": "3d895fa9057d76ed0488f530a18f15c4", "text": "Nowadays, computer interaction is mostly done using dedicated devices. But gestures are an easy mean of expression between humans that could be used to communicate with computers in a more natural manner. Most of the current research on hand gesture recognition for HumanComputer Interaction rely on either the Neural Networks or Hidden Markov Models (HMMs). In this paper, we compare different approaches for gesture recognition and highlight the major advantages of each. 
We show that gestures recognition based on the Bio-mechanical characteristic of the hand provides an intuitive approach which provides more accuracy and less complexity.", "title": "" }, { "docid": "7c7801d472e3a03986ec4000d9d86ca8", "text": "The purpose of this study is to examine structural relationships among the capabilities, processes, and performance of knowledge management, and suggest strategic directions for the successful implementation of knowledge management. To serve this purpose, the authors conducted an extensive survey of 68 knowledge management-adopting Korean firms in diverse industries and collected 215 questionnaires. Analyzing hypothesized structural relationships with the data collected, they found that there exists statistically significant relationships among knowledge management capabilities, processes, and performance. The empirical results of this study also support the wellknown strategic hypothesis of the balanced scorecard (BSC). © 2007 Wiley Periodicals, Inc.", "title": "" }, { "docid": "6fd84345b0399a0d59d80fb40829eee2", "text": "This paper describes a method based on a sequenceto-sequence learning (Seq2Seq) with attention and context preservation mechanism for voice conversion (VC) tasks. Seq2Seq has been outstanding at numerous tasks involving sequence modeling such as speech synthesis and recognition, machine translation, and image captioning. In contrast to current VC techniques, our method 1) stabilizes and accelerates the training procedure by considering guided attention and proposed context preservation losses, 2) allows not only spectral envelopes but also fundamental frequency contours and durations of speech to be converted, 3) requires no context information such as phoneme labels, and 4) requires no time-aligned source and target speech data in advance. In our experiment, the proposed VC framework can be trained in only one day, using only one GPU of an NVIDIA Tesla K80, while the quality of the synthesized speech is higher than that of speech converted by Gaussian mixture model-based VC and is comparable to that of speech generated by recurrent neural network-based text-to-speech synthesis, which can be regarded as an upper limit on VC performance.", "title": "" }, { "docid": "a46b1219945ddf41022073fa29729a10", "text": "The economic relevance of Web applications increases the importance of controlling and improving their quality. Moreover, the new available technologies for their development allow the insertion of sophisticated functions, but often leave the developers responsible for their organization and evolution. As a consequence, a high demand is emerging for methodologies and tools for quality assurance of Web based systems.\nIn this paper, a UML model of Web applications is proposed for their high level representation. Such a model is the starting point for several analyses, which can help in the assessment of the static site structure. Moreover, it drives Web application testing, in that it can be exploited to define white box testing criteria and to semi-automatically generate the associated test cases.\nThe proposed techniques were applied to several real world Web applications. Results suggest that an automatic support to the verification and validation activities can be extremely beneficial. In fact, it guarantees that all paths in the site which satisfy a selected criterion are properly exercised before delivery. 
The high level of automation that is achieved in test case generation and execution increases the number of tests that are conducted and simplifies the regression checks.", "title": "" }, { "docid": "35f268124bd881f8257c2e1f576a023b", "text": "We develop a new randomized iterative algorithm—stochastic dual ascent (SDA)—for finding the projection of a given vector onto the solution space of a linear system. The method is dual in nature: with the dual being a non-strongly concave quadratic maximization problem without constraints. In each iteration of SDA, a dual variable is updated by a carefully chosen point in a subspace spanned by the columns of a random matrix drawn independently from a fixed distribution. The distribution plays the role of a parameter of the method. Our complexity results hold for a wide family of distributions of random matrices, which opens the possibility to fine-tune the stochasticity of the method to particular applications. We prove that primal iterates associated with the dual process converge to the projection exponentially fast in expectation, and give a formula and an insightful lower bound for the convergence rate. We also prove that the same rate applies to dual function values, primal function values and the duality gap. Unlike traditional iterative methods, SDA converges under no additional assumptions on the system (e.g., rank, diagonal dominance) beyond consistency. In fact, our lower bound improves as the rank of the system matrix drops. Many existing randomized methods for linear systems arise as special cases of SDA, including randomized Kaczmarz, randomized Newton, randomized coordinate descent, Gaussian descent, and their variants. In special cases where our method specializes to a known algorithm, we either recover the best known rates, or improve upon them. Finally, we show that the framework can be applied to the distributed average consensus problem to obtain an array of new algorithms. The randomized gossip algorithm arises as a special case.", "title": "" }, { "docid": "b5feea2a9ef2ed18182964acd83cdaee", "text": "We consider the problem of learning general-purpose, paraphrastic sentence embeddings, revisiting the setting of Wieting et al. (2016b). While they found LSTM recurrent networks to underperform word averaging, we present several developments that together produce the opposite conclusion. These include training on sentence pairs rather than phrase pairs, averaging states to represent sequences, and regularizing aggressively. These improve LSTMs in both transfer learning and supervised settings. We also introduce a new recurrent architecture, the GATED RECURRENT AVERAGING NETWORK, that is inspired by averaging and LSTMs while outperforming them both. We analyze our learned models, finding evidence of preferences for particular parts of speech and dependency relations. 1", "title": "" }, { "docid": "d3b03d65b61b98db03445bda899b44ba", "text": "Positioning is basis for providing location information to mobile users, however, with the growth of wireless and mobile communications technologies. Mobile phones are equipped with several radio frequency technologies for driving the positioning information like GSM, Wi-Fi or Bluetooth etc. In this way, the objective of this thesis was to implement an indoor positioning system relying on Bluetooth Received Signal Strength (RSS) technology and it integrates into the Global Positioning Module (GPM) to provide precise information inside the building. 
In this project, we propose indoor positioning system based on RSS fingerprint and footprint architecture that smart phone users can get their position through the assistance collections of Bluetooth signals, confining RSSs by directions, and filtering burst noises that can overcome the server signal fluctuation problem inside the building. Meanwhile, this scheme can raise more accuracy in finding the position inside the building.", "title": "" }, { "docid": "113a8777ba40002c252d9fcea7e238f4", "text": "An algorithm is presented that scales the pixel intensities of a computer generated greyscale image so that they are all displayable on a standard CRT. This scaling is spatially nonuniform over the image in that different pixels with the same intensity in the original image may have different intensities in the resulting image. The goal of this scaling transformation is to produce an image on the CRT that perceptually mimics the calculated image, while staying within the physical limitations of the CRT.", "title": "" }, { "docid": "2efb10a430e001acd201a0b16ab74836", "text": "As the cost of human full genome sequencing continues to fall, we will soon witness a prodigious amount of human genomic data in the public cloud. To protect the confidentiality of the genetic information of individuals, the data has to be encrypted at rest. On the other hand, encryption severely hinders the use of this valuable information, such as Genome-wide Range Query (GRQ), in medical/genomic research. While the problem of secure range query on outsourced encrypted data has been extensively studied, the current schemes are far from practical deployment in terms of efficiency and scalability due to the data volume in human genome sequencing. In this paper, we investigate the problem of secure GRQ over human raw aligned genomic data in a third-party outsourcing model. Our solution contains a novel secure range query scheme based on multi-keyword symmetric searchable encryption (MSSE). The proposed scheme incurs minimal ciphertext expansion and computation overhead. We also present a hierarchical GRQ-oriented secure index structure tailored for efficient and large-scale genomic data lookup in the cloud while preserving the query privacy. Our experiment on real human genomic data shows that a secure GRQ request with range size 100,000 over more than 300 million encrypted short reads takes less than 3 minutes, which is orders of magnitude faster than existing solutions.", "title": "" }, { "docid": "826612712b3a44da30e6fb7e2dba95bc", "text": "Flyback converters show the characteristics of current source when operating in discontinuous conduction mode (DCM) and boundary conduction mode (BCM), which makes it widely used in photovoltaic grid-connected micro-inverter. In this paper, an active clamp interleaved flyback converter operating with combination of DCM and BCM is proposed in micro-inverter to achieve zero voltage switching (ZVS) for both of primary switches and fully recycle the energy in the leakage inductance. The proposed control method makes active-clamping part include only one clamp capacitor. In DCM area, only one flyback converter operates and turn-off of its auxiliary switch is suggested here to reduce resonant conduction losses, which improve the efficiency at light loads. 
Performance of the proposed circuit is validated by simulation and experimental results.", "title": "" }, { "docid": "4b012d1dc18f18118a73488e934eff4d", "text": "Current drought information is based on indices that do not capture the joint behaviors of hydrologic variables. To address this limitation, the potential of copulas in characterizing droughts from multiple variables is explored in this study. Starting from the standardized index (SI) algorithm, a modified index accounting for seasonality is proposed for precipitation and streamflow marginals. Utilizing Indiana stations with long-term observations (a minimum of 80 years for precipitation and 50 years for streamflow), the dependence structures of precipitation and streamflow marginals with various window sizes from 1 to 12 months are constructed from empirical copulas. A joint deficit index (JDI) is defined by using the distribution function of copulas. This index provides a probability-based description of the overall drought status. Not only is the proposed JDI able to reflect both emerging and prolonged droughts in a timely manner, it also allows a month-by-month drought assessment such that the required amount of precipitation for achieving normal conditions in future can be computed. The use of JDI is generalizable to other hydrologic variables as evidenced by similar drought severities gleaned from JDIs constructed separately from precipitation and streamflow data. JDI further allows the construction of an inter-variable drought index, where the entire dependence structure of precipitation and streamflow marginals is preserved. Drought, as a prolonged status of water deficit, has been a challenging topic in water resources management. It is perceived as one of the most expensive and least understood natural disasters. In monetary terms, a typical drought costs American farmers and businesses $6–8 billion each year (WGA, 2004), more than damages incurred from floods and hurricanes. The consequences tend to be more severe in areas such as the mid-western part of the United States, where agriculture is the major economic driver. Unfortunately, though there is a strong need to develop an algorithm for characterizing and predicting droughts, it cannot be achieved easily either through physical or statistical analyses. The main obstacles are identification of complex drought-causing mechanisms, and lack of a precise (universal) scientific definition for droughts. When a drought event occurs, moisture deficits are observed in many hydrologic variables, such as precipitation, …", "title": "" }, { "docid": "a53f26ef068d11ea21b9ba8609db6ddf", "text": "This paper presents a novel approach based on enhanced local directional patterns (ELDP) to face recognition, which adopts local edge gradient information to represent face images. Specifically, each pixel of every facial image sub-block gains eight edge response values by convolving the local 3×3 neighborhood with eight Kirsch masks, respectively. ELDP utilizes the directions of the most prominent edge responses, encoded into a double-digit octal number, to produce the ELDP codes.
The ELDP dominant patterns (ELDP) are generated by statistical analysis according to the occurrence rates of the ELDP codes in a mass of facial images. Finally, the face descriptor is represented by using the global concatenated histogram based on ELDP or ELDP extracted from the face image which is divided into several sub-regions. The performances of several single face descriptors, not integrated schemes, are evaluated in face recognition under different challenges via several experiments. The experimental results demonstrate that the proposed method is more robust to non-monotonic illumination changes and slight noise without any filter.", "title": "" }, { "docid": "27316b23e7a7cd163abd40f804caf61b", "text": "Attention based recurrent neural networks (RNN) have shown great success for question answering (QA) in recent years. Although significant improvements have been achieved over the non-attentive models, the position information is not well studied within the attention-based framework. Motivated by the effectiveness of using the word positional context to enhance information retrieval, we assume that if a word in the question (i.e., question word) occurs in an answer sentence, the neighboring words should be given more attention since they intuitively contain more valuable information for question answering than those far away. Based on this assumption, we propose a positional attention based RNN model, which incorporates the positional context of the question words into the answers' attentive representations. Experiments on two benchmark datasets show the great advantages of our proposed model. Specifically, we achieve a maximum improvement of 8.83% over the classical attention based RNN model in terms of mean average precision. Furthermore, our model is comparable to if not better than the state-of-the-art approaches for question answering.", "title": "" } ]
scidocsrr
34bceb1308bf34f89151706575dd41d6
IRRIGATION WITH MAGNETIZED WATER, A NOVEL TOOL FOR IMPROVING CROP PRODUCTION IN EGYPT
[ { "docid": "cbfdea54abb1e4c1234ca44ca6913220", "text": "Seeds of chickpea (Cicer arietinum L.) were exposed in batches to static magnetic fields of strength from 0 to 250 mT in steps of 50 mT for 1-4 h in steps of 1 h for all fields. Results showed that magnetic field application enhanced seed performance in terms of laboratory germination, speed of germination, seedling length and seedling dry weight significantly compared to unexposed control. However, the response varied with field strength and duration of exposure without any particular trend. Among the various combinations of field strength and duration, 50 mT for 2 h, 100 mT for 1 h and 150 mT for 2 h exposures gave best results. Exposure of seeds to these three magnetic fields improved seed coat membrane integrity as it reduced the electrical conductivity of seed leachate. In soil, seeds exposed to these three treatments produced significantly increased seedling dry weights of 1-month-old plants. The root characteristics of the plants showed dramatic increase in root length, root surface area and root volume. The improved functional root parameters suggest that magnetically treated chickpea seeds may perform better under rainfed (un-irrigated) conditions where there is a restrictive soil moisture regime.", "title": "" } ]
[ { "docid": "df7a68ebb9bc03d8a73a54ab3474373f", "text": "We report on the implementation of a color-capable sub-pixel resolving optofluidic microscope based on the pixel super-resolution algorithm and sequential RGB illumination, for low-cost on-chip color imaging of biological samples with sub-cellular resolution.", "title": "" }, { "docid": "749f79007256f570b73983b8d3f36302", "text": "This paper addresses some of the potential benefits of using fuzzy logic controllers to control an inverted pendulum system. The stages of the development of a fuzzy logic controller using a four input Takagi-Sugeno fuzzy model were presented. The main idea of this paper is to implement and optimize fuzzy logic control algorithms in order to balance the inverted pendulum and at the same time reducing the computational time of the controller. In this work, the inverted pendulum system was modeled and constructed using Simulink and the performance of the proposed fuzzy logic controller is compared to the more commonly used PID controller through simulations using Matlab. Simulation results show that the fuzzy logic controllers are far more superior compared to PID controllers in terms of overshoot, settling time and response to parameter changes.", "title": "" }, { "docid": "6d8e78d8c48aab17aef0b9e608f13b99", "text": "Optimal real-time distributed V2G and G2V management of electric vehicles Sonja Stüdli, Emanuele Crisostomi, Richard Middleton & Robert Shorten a Centre for Complex Dynamic Systems and Control, The University of Newcastle, New South Wales, Australia b Department of Energy, Systems, Territory and Constructions, University of Pisa, Pisa, Italy c IBM Research, Dublin, Ireland Accepted author version posted online: 10 Dec 2013.Published online: 05 Feb 2014.", "title": "" }, { "docid": "1d4f89bb3e289ed138f45af0f1e3fc39", "text": "The “covariance” of complex random variables and processes, when defined consistently with the corresponding notion for real random variables, is shown to be determined by the usual (complex) covariance together with a quantity called the pseudo-covariance. A characterization of uncorrelatedness and wide-sense stationarity in terms of covariance and pseudocovariance is given. Complex random variables and processes with a vanishing pseudo-covariance are called proper. It is shown that properness is preserved under affine transformations and that the complex-multivariate Gaussian density assumes a natural form only for proper random variables. The maximum-entropy theorem is generalized to the complex-multivariate case. The differential entropy of a complex random vector with a fixed correlation matrix is shown to be maximum, if and only if the random vector is proper, Gaussian and zero-mean. The notion of circular stutionarity is introduced. For the class of proper complex random processes, a discrete Fourier transform correspondence is derived relating circular stationarity in the time domain to uncorrelatedness in the frequency domain. As an application of the theory, the capacity of a discrete-time channel with complex inputs, proper complex additive white Gaussian noise, and a finite complex unit-sample response is determined. This derivation is considerably simpler than an earlier derivation for the real discrete-time Gaussian channel with intersymbol interference, whose capacity is obtained as a by-product of the results for the complex channel. 
Index Terms: Proper complex random processes, circular stationarity, intersymbol interference, capacity.", "title": "" }, { "docid": "97571039c1f7a11c65e71c723d231713", "text": "Blockchains are increasingly attractive due to their decentralization, yet inherent limitations of high latency, in the order of minutes, and attacks on consensus cap their practicality. We introduce Blinkchain, a Byzantine consensus protocol that relies on sharding and locality-preserving techniques from distributed systems to provide a bound on consensus latency, proportional to the network delay between the buyer and the seller nodes. Blinkchain selects a random pool of validators, some of which are legitimate with high probability, even when an attacker focuses its forces to crowd out legitimate validators in a small vicinity.", "title": "" }, { "docid": "4070072c5bd650d1ca0daf3015236b31", "text": "Automated classification of digital video is emerging as an important piece of the puzzle in the design of content management systems for digital libraries. The ability to classify videos into various classes such as sports, news, movies, or documentaries, increases the efficiency of indexing, browsing, and retrieval of video in large databases. In this paper, we discuss the extraction of features that enable identification of sports videos directly from the compressed domain of MPEG video. These features include detecting the presence of action replays, determining the amount of scene text in video, and calculating various statistics on camera and/or object motion. The features are derived from the macroblock, motion, and bit-rate information that is readily accessible from MPEG video with very minimal decoding, leading to substantial gains in processing speeds. Full-decoding of selective frames is required only for text analysis. A decision tree classifier built using these features is able to identify sports clips with an accuracy of about 93%.", "title": "" }, { "docid": "4f68f2a2ef6a21116a5b0814c4f504e6", "text": "Biometric fingerprint scanners are positioned to provide improved security in a great span of applications from government to private. However, one highly publicized vulnerability is that it is possible to spoof a variety of fingerprint scanners using artificial fingers made from Play-Doh, gelatin and silicone molds. Therefore, it is necessary to offer protection for fingerprint systems against these threats. In this paper, an anti-spoofing detection method is proposed which is based on ridge signal and valley noise analysis, to quantify perspiration patterns along ridges in live subjects and noise patterns along valleys in spoofs. The signals representing gray level patterns along ridges and valleys are explored in spatial, frequency and wavelet domains. Based on these features, separation (live/spoof) is performed using standard pattern classification tools including classification trees and neural networks. We test this method on a larger dataset than previously considered which contains 644 live fingerprints (81 subjects with 2 fingers for an average of 4 sessions) and 570 spoof fingerprints (made from Play-Doh, gelatin and silicone molds in multiple sessions) collected from the Identix fingerprint scanner. Results show that the performance can reach 99.1% correct classification overall. The proposed anti-spoofing method is purely software based and integration of this method can provide protection for fingerprint scanners against gelatin, Play-Doh and silicone spoof fingers.
", "title": "" }, { "docid": "66044816ca1af0198acd27d22e0e347e", "text": "BACKGROUND\nThe Close Kinetic Chain Upper Extremity Stability Test (CKCUES test) is a low cost shoulder functional test that could be considered as a complementary and objective clinical outcome for shoulder performance evaluation. However, its reliability was tested only in recreational athletes' males and there are no studies comparing scores between sedentary and active samples. The purpose was to examine inter and intrasession reliability of CKCUES Test for samples of sedentary male and female with (SIS), for samples of sedentary healthy male and female, and for male and female samples of healthy upper extremity sport specific recreational athletes. Other purpose was to compare scores within sedentary and within recreational athletes samples of same gender.\n\n\nMETHODS\nA sample of 108 subjects with and without SIS was recruited. Subjects were tested twice, seven days apart. Each subject performed four test repetitions, with 45 seconds of rest between them. The last three repetitions were averaged and used to statistical analysis. Intraclass Correlation Coefficient ICC2,1 was used to assess intrasession reliability of number of touches score and ICC2,3 was used to assess intersession reliability of number of touches, normalized score, and power score. Test scores within groups of same gender also were compared. Measurement error was determined by calculating the Standard Error of the Measurement (SEM) and Minimum detectable change (MDC) for all scores.\n\n\nRESULTS\nThe CKCUES Test showed excellent intersession reliability for scores in all samples. Results also showed excellent intrasession reliability of number of touches for all samples. Scores were greater in active compared to sedentary, with exception of power score. All scores were greater in active compared to sedentary and SIS males and females. SEM ranged from 1.45 to 2.76 touches (based on a 95% CI) and MDC ranged from 2.05 to 3.91(based on a 95% CI) in subjects with and without SIS. At least three touches are needed to be considered a real improvement on CKCUES Test scores.\n\n\nCONCLUSION\nResults suggest CKCUES Test is a reliable tool to evaluate upper extremity functional performance for sedentary, for upper extremity sport specific recreational, and for sedentary males and females with SIS.", "title": "" }, { "docid": "843e5fc99df33e280fc4f988b5358987", "text": "This special issue of Journal of Communication is devoted to theoretical explanations of news framing, agenda setting, and priming effects. It examines if and how the three models are related and what potential relationships between them tell theorists and researchers about the effects of mass media. As an introduction to this effort, this essay provides a very brief review of the three effects and their roots in media-effects research. Based on this overview, we highlight a few key dimensions along which one can compare, framing, agenda setting, and priming.
We conclude with a description of the contexts within which the three models operate, and the broader implications that these conceptual distinctions have for the growth of our discipline.", "title": "" }, { "docid": "9f530b42ae19ddcf52efa41272b2dbc7", "text": "Learning-based methods for appearance-based gaze estimation achieve state-of-the-art performance in challenging real-world settings but require large amounts of labelled training data. Learningby-synthesis was proposed as a promising solution to this problem but current methods are limited with respect to speed, the appearance variability as well as the head pose and gaze angle distribution they can synthesize. We present UnityEyes, a novel method to rapidly synthesize large amounts of variable eye region images as training data. Our method combines a novel generative 3D model of the human eye region with a real-time rendering framework. The model is based on high-resolution 3D face scans and uses realtime approximations for complex eyeball materials and structures as well as novel anatomically inspired procedural geometry methods for eyelid animation. We show that these synthesized images can be used to estimate gaze in difficult in-the-wild scenarios, even for extreme gaze angles or in cases in which the pupil is fully occluded. We also demonstrate competitive gaze estimation results on a benchmark in-the-wild dataset, despite only using a light-weight nearest-neighbor algorithm. We are making our UnityEyes synthesis framework freely available online for the benefit of the research community.", "title": "" }, { "docid": "68c3b039e9b05eef878de3cdc2e992ef", "text": "Genitourinary rhabdomyosarcoma in females usually originates in the vagina or uterus, but rarely the vulva. The authors present a case of rhabdomyosarcoma originating in the clitoris. A 4-year-old with an alveolar rhabdomyosarcoma of the clitoris was treated with radical clitorectomy, radiation, and chemotherapy. Follow-up at 3 years showed no active disease.", "title": "" }, { "docid": "dd01611bcbc8a50fbe20bdc676326ce5", "text": "PURPOSE\nWe evaluated the accuracy of magnetic resonance imaging in determining the size and shape of localized prostate cancer.\n\n\nMATERIALS AND METHODS\nThe subjects were 114 men who underwent multiparametric magnetic resonance imaging before radical prostatectomy with patient specific mold processing of the specimen from 2013 to 2015. T2-weighted images were used to contour the prostate capsule and cancer suspicious regions of interest. The contours were used to design and print 3-dimensional custom molds, which permitted alignment of excised prostates with magnetic resonance imaging scans. Tumors were reconstructed in 3 dimensions from digitized whole mount sections. Tumors were then matched with regions of interest and the relative geometries were compared.\n\n\nRESULTS\nOf the 222 tumors evident on whole mount sections 118 had been identified on magnetic resonance imaging. For the 118 regions of interest mean volume was 0.8 cc and the longest 3-dimensional diameter was 17 mm. However, for matched pathological tumors, of which most were Gleason score 3 + 4 or greater, mean volume was 2.5 cc and the longest 3-dimensional diameter was 28 mm. The median tumor had a 13.5 mm maximal extent beyond the magnetic resonance imaging contour and 80% of cancer volume from matched tumors was outside region of interest boundaries. 
Size estimation was most accurate in the axial plane and least accurate along the base-apex axis.\n\n\nCONCLUSIONS\nMagnetic resonance imaging consistently underestimates the size and extent of prostate tumors. Prostate cancer foci had an average diameter 11 mm longer and a volume 3 times greater than T2-weighted magnetic resonance imaging segmentations. These results may have important implications for the assessment and treatment of prostate cancer.", "title": "" }, { "docid": "1847cce79f842a7d01f1f65721c1f007", "text": "Many tasks in AI require the collaboration of multiple agents. Typically, the communication protocol between agents is manually specified and not altered during training. In this paper we explore a simple neural model, called CommNN, that uses continuous communication for fully cooperative tasks. The model consists of multiple agents and the communication between them is learned alongside their policy. We apply this model to a diverse set of tasks, demonstrating the ability of the agents to learn to communicate amongst themselves, yielding improved performance over non-communicative agents and baselines. In some cases, it is possible to interpret the language devised by the agents, revealing simple but effective strategies for solving the task at hand.", "title": "" }, { "docid": "6dce88afec3456be343c6a477350aa49", "text": "In order to capture rich language phenomena, neural machine translation models have to use a large vocabulary size, which requires high computing time and large memory usage. In this paper, we alleviate this issue by introducing a sentence-level or batch-level vocabulary, which is only a very small sub-set of the full output vocabulary. For each sentence or batch, we only predict the target words in its sentencelevel or batch-level vocabulary. Thus, we reduce both the computing time and the memory usage. Our method simply takes into account the translation options of each word or phrase in the source sentence, and picks a very small target vocabulary for each sentence based on a wordto-word translation model or a bilingual phrase library learned from a traditional machine translation model. Experimental results on the large-scale English-toFrench task show that our method achieves better translation performance by 1 BLEU point over the large vocabulary neural machine translation system of Jean et al. (2015).", "title": "" }, { "docid": "514f8ca4015f7abac2674e209ccc3f51", "text": "Complex real-world signals, such as images, contain discriminative structures that differ in many aspects including scale, invariance, and data channel. While progress in deep learning shows the importance of learning features through multiple layers, it is equally important to learn features through multiple paths. We propose Multipath Hierarchical Matching Pursuit (M-HMP), a novel feature learning architecture that combines a collection of hierarchical sparse features for image classification to capture multiple aspects of discriminative structures. Our building blocks are MI-KSVD, a codebook learning algorithm that balances the reconstruction error and the mutual incoherence of the codebook, and batch orthogonal matching pursuit (OMP), we apply them recursively at varying layers and scales. 
The result is a highly discriminative image representation that leads to large improvements to the state-of-the-art on many standard benchmarks, e.g., Caltech-101, Caltech-256, MITScenes, Oxford-IIIT Pet and Caltech-UCSD Bird-200.", "title": "" }, { "docid": "27b3c795085e395eadfd23e181abedc4", "text": "Since remote sensing images are captured from the top of the target, such as from a satellite or plane platform, ship targets can be presented at any orientation. When detecting ship targets using horizontal bounding boxes, there will be background clutter in the box. This clutter makes it harder to detect the ship and find its precise location, especially when the targets are in close proximity or staying close to the shore. To solve these problems, this paper proposes a deep learning algorithm using a multiscale rotated bounding box to detect the ship target in a complex background and obtain the location and orientation information of the ship. When labeling the oriented targets, we use the five-parameter method to ensure that the box shape is maintained rectangular. The algorithm uses a pretrained deep network to extract features and produces two divided flow paths to output the result. One flow path predicts the target class, while the other predicts the location and angle information. In the training stage, we match the prior multiscale rotated bounding boxes to the ground-truth bounding boxes to obtain the positive sample information and use it to train the deep learning model. When matching the rotated bounding boxes, we narrow down the selection scope to reduce the amount of calculation. In the testing stage, we use the trained model to predict and obtain the final result after comparing with the score threshold and nonmaximum suppression post-processing. Experiments conducted on a remote sensing dataset show that the algorithm is robust in detecting ship targets under complex conditions, such as wave clutter background, target in close proximity, ship close to the shore, and multiscale varieties. Compared to other algorithms, our algorithm not only exhibits better performance in ship detection but also obtains the precise location and orientation information of the ship.", "title": "" }, { "docid": "6751bfa8495065db8f6f5b396bbbc2cd", "text": "This paper proposes a new balanced realization and model reduction method for possibly unstable systems by introducing some new controllability and observability Gramians. These Gramians can be related to minimum control energy and minimum estimation error. In contrast to Gramians defined in the literature for unstable systems, these Gramians can always be computed for systems without imaginary axis poles and they reduce to the standard controllability and observability Gramians when the systems are stable. The proposed balanced model reduction method enjoys the similar error bounds as does for the standard balanced model reduction. Furthermore, the new error bounds and the actual approximation errors seem to be much smaller than the ones using the methods given in the literature for unstable systems. Copyright ( 1999 John Wiley & Sons, Ltd.", "title": "" }, { "docid": "85e51ac7980deac92e140d0965a35708", "text": "Ensuring that autonomous systems work ethically is both complex and difficult. However, the idea of having an additional ‘governor’ that assesses options the system has, and prunes them to select the most ethical choices is well understood. 
Recent work has produced such a governor consisting of a ‘consequence engine’ that assesses the likely future outcomes of actions then applies a Safety/Ethical logic to select actions. Although this is appealing, it is impossible to be certain that the most ethical options are actually taken. In this paper we extend and apply a well-known agent verification approach to our consequence engine, allowing us to verify the correctness of its ethical decision-making.", "title": "" }, { "docid": "74812252b395dca254783d05e1db0cf5", "text": "Cyber-Physical Security Testbeds serve as valuable experimental platforms to implement and evaluate realistic, complex cyber attack-defense experiments. Testbeds, unlike traditional simulation platforms, capture communication, control and physical system characteristics and their interdependencies adequately in a unified environment. In this paper, we show how the PowerCyber CPS testbed at Iowa State was used to implement and evaluate cyber attacks on one of the fundamental Wide-Area Control applications, namely, the Automatic Generation Control (AGC). We provide a brief overview of the implementation of the experimental setup on the testbed. We then present a case study using the IEEE 9 bus system to evaluate the impacts of cyber attacks on AGC. Specifically, we analyzed the impacts of measurement based attacks that manipulated the tie-line and frequency measurements, and control based attacks that manipulated the ACE values sent to generators. We found that these attacks could potentially create under frequency conditions and could cause unnecessary load shedding. As part of future work, we plan to extend this work and utilize the experimental setup to implement other sophisticated, stealthy attack vectors and also develop attack-resilient algorithms to detect and mitigate such attacks.", "title": "" }, { "docid": "5de97faf91bf9ef7d3d70e410d97af68", "text": "Simplified texts are commonly used by teachers and students in bilingual education and other language-learning contexts. These texts are usually manually adapted, and teachers say this is a timeconsuming and sometimes challenging task. Our goal is the development of tools to aid teachers by automatically proposing ways to simplify texts. As a first step, this paper presents a detailed analysis of a corpus of news articles and abridged versions written by a literacy organization in order to learn what kinds of changes people make when simplifying texts for language learners.", "title": "" } ]
scidocsrr
27c472b6f4e664e190ab4105d0b87047
Device-free gesture tracking using acoustic signals
[ { "docid": "2efb71ffb35bd05c7a124ffe8ad8e684", "text": "We present Lumitrack, a novel motion tracking technology that uses projected structured patterns and linear optical sensors. Each sensor unit is capable of recovering 2D location within the projection area, while multiple sensors can be combined for up to six degree of freedom (DOF) tracking. Our structured light approach is based on special patterns, called m-sequences, in which any consecutive sub-sequence of m bits is unique. Lumitrack can utilize both digital and static projectors, as well as scalable embedded sensing configurations. The resulting system enables high-speed, high precision, and low-cost motion tracking for a wide range of interactive applications. We detail the hardware, operation, and performance characteristics of our approach, as well as a series of example applications that highlight its immediate feasibility and utility.", "title": "" } ]
[ { "docid": "c27f8a936f1b5da0b6ddb68bdfb205a8", "text": "Developmental dyslexia refers to a group of children who fail to learn to read at the normal rate despite apparently normal vision and neurological functioning. Dyslexic children typically manifest problems in printed word recognition and spelling, and difficulties in phonological processing are quite common (Lyon, 1995; Rack, Snowling, & Olson, 1992; Stanovich, 1988; Wagner & Torgesen, 1987). The phonological processing problems include, but are not limited to difficulties in pronouncing nonsense words, poor phonemic awareness, problems in representing phonological information in short-term memory and difficulty in rapidly retrieving the names of familiar objects, digits and letters (Stanovich, 1988; Wagner & Torgesen, 1987; Wolf & Bowers, 1999). The underlying cause of phonological deficits in dyslexic children is not yet clear. One possible source is developmentally deviant perception of speech at the phoneme level. A number of studies have shown that dyslexics' categorizations of speech sounds are less sharp than normal readers (Chiappe, Chiappe, & Siegel, 2001; Godfrey, Syrdal-Lasky, Millay, & Knox, 1981; Maassen, Groenen, Crul, Assman-Hulsmans, & Gabreels, 2001; Reed, 1989; Serniclaes, Sprenger-Charolles, Carré, & Demonet, 2001;Werker & Tees, 1987). These group differences have appeared in tasks requiring the labeling of stimuli varying along a perceptual continuum (such as voicing or place of articulation), as well as on speech discrimination tasks. In two studies, there was evidence that dyslexics showed better discrimination of sounds differing phonetically within a category boundary (Serniclaes et al, 2001; Werker & Tees, 1987), whereas in one study, dyslexics were poorer at both within-phoneme and between phoneme discrimination (Maassen et al, 2001). There is evidence that newborns and 6-month olds with a familial risk for dyslexia have reduced sensitivity to speech and non-speech sounds (Molfese, 2000; Pihko, Leppanen, Eklund, Cheour, Guttorm & Lyytinen, 1999). If dyslexics are impaired from birth in auditory processing, or more specifically in speech perception, this would affect the development and use of phonological representations on a wide variety of tasks, most intensively in phonological awareness and decoding. Although differences in speech perception have been observed, it has also been noted that the effects are often weak, small in size or shown by only some of the dyslexic subjects (Adlard & Hazan, 1998; Brady, Shankweiler, & Mann, 1983; Elliot, Scholl, Grant, & Hammer, 1990; Manis, McBride-Chang, Seidenberg, Keating, Doi, Munson, & Petersen (1997); Nittrouer, 1999; Snowling, Goulandris, Bowlby, & Howell, 1986). One reason for small, or variable effects, might be that the dyslexic population is heterogeneous, and that speech perception problems are more common among particular subgroups of dyslexics. A specific hypothesis is that speech perception problems are more concentrated among dyslexic children showing greater", "title": "" }, { "docid": "b17015641d4ae89767bedf105802d838", "text": "We propose prefix constraints, a novel method to enforce constraints on target sentences in neural machine translation. It places a sequence of special tokens at the beginning of target sentence (target prefix), while side constraints (Sennrich et al., 2016) places a special token at the end of source sentence (source suffix). 
Prefix constraints can be predicted from source sentence jointly with target sentence, while side constraints must be provided by the user or predicted by some other methods. In both methods, special tokens are designed to encode arbitrary features on target-side or metatextual information. We show that prefix constraints are more flexible than side constraints and can be used to control the behavior of neural machine translation, in terms of output length, bidirectional decoding, domain adaptation, and unaligned target word generation.", "title": "" }, { "docid": "63efc5ad8b4ad3dce3c561b6921c985a", "text": "Augmented Books show three-dimensional animated educational content and provide a means for students to interact with this content in an engaging learning experience. In this paper we present a framework for creating educational Augmented Reality (AR) books that overlay virtual content over real book pages. The framework features support for certain types of user interaction, model and texture animations, and an enhanced marker design suitable for educational books. Three books teaching electromagnetism concepts were created with this framework. To evaluate the effectiveness in helping students learn, we conducted a small pilot study with ten secondary school students, studying electromagnetism concepts using the three books. Half of the group used the books with the diagrams augmented, while the other half used the books without augmentation. Participants completed a pre-test, a test after the learning session and a retention test administered 1 month later. Results suggest that AR has potential to be effective in teaching complex 3D concepts.", "title": "" }, { "docid": "2e475a64d99d383b85730e208703e654", "text": "—Detecting a variety of anomalies in computer network, especially zero-day attacks, is one of the real challenges for both network operators and researchers. An efficient technique detecting anomalies in real time would enable network operators and administrators to expeditiously prevent serious consequences caused by such anomalies. We propose an alternative technique, which based on a combination of time series and feature spaces, for using machine learning algorithms to automatically detect anomalies in real time. Our experimental results show that the proposed technique can work well for a real network environment, and it is a feasible technique with flexible capabilities to be applied for real-time anomaly detection.", "title": "" }, { "docid": "d4615de80544972d2313c6d80a9e19fd", "text": "Herein is presented an external capacitorless low-dropout regulator (LDO) that provides high-power-supply rejection (PSR) at all low-to-high frequencies. The LDO is designed to have the dominant pole at the gate of the pass transistor to secure stability without the use of an external capacitor, even when the load current increases significantly. Using the proposed adaptive supply-ripple cancellation (ASRC) technique, in which the ripples copied from the supply are injected adaptively to the body gate, the PSR hump that appears in conventional gate-pole-dominant LDOs can be suppressed significantly. 
Since the ASRC circuit continues to adjust the magnitude of the injecting ripples to an optimal value, the LDO presented here can maintain high PSRs, irrespective of the magnitude of the load current <inline-formula> <tex-math notation=\"LaTeX\">$I_{L}$ </tex-math></inline-formula>, or the dropout voltage <inline-formula> <tex-math notation=\"LaTeX\">$V_{\\mathrm {DO}}$ </tex-math></inline-formula>. The proposed LDO was fabricated in a 65-nm CMOS process, and it had an input voltage of 1.2 V. With a 240-pF load capacitor, the measured PSRs were less than −36 dB at all frequencies from 10 kHz to 1 GHz, despite changes of <inline-formula> <tex-math notation=\"LaTeX\">$I_{L}$ </tex-math></inline-formula> and <inline-formula> <tex-math notation=\"LaTeX\">$V_{\\mathrm {DO}}$ </tex-math></inline-formula> as well as process, voltage, temperature (PVT) variations.", "title": "" }, { "docid": "81f5f2e9da401a40a561f91b8c6b6bc5", "text": "Human computer interaction is defined as Users (Humans) interact with the computers. Speech recognition is an area of computer science that deals with the designing of systems that recognize spoken words. Speech recognition system allows ordinary people to speak to the system. Recognizing and understanding a spoken sentence is obviously a knowledge-intensive process, which must take into account all variable information about the speech communication process, from acoustics to semantics and pragmatics. This paper is the survey of how speech is converted in text and that text in translated into another language. In this paper, we outline a speech recognition system, learning based approach and target language generation mechanism with the help of language EnglishSanskrit language pair using rule based machine translation technique [1]. Rule Based Machine Translation provides high quality translation and requires in depth knowledge of the language apart from real world knowledge and the differences in cultural background and conceptual divisions. Here the English speech is first converted into text and that will translated into Sanskrit language. Keywords-Speech Recognition, Sanskrit, Context Free Grammar, Rule based machine translation, Database.", "title": "" }, { "docid": "0dceeaf757d29138a653b3970de50d56", "text": "Plantings in residential neighborhoods can support wild pollinators. However, it is unknown how effectively wild pollinators maintain pollination services in small, urban gardens with diverse floral resources. We used a ‘mobile garden’ experimental design, whereby potted plants of cucumber, eggplant, and purple coneflower were brought to 30 residential yards in Chicago, IL, USA, to enable direct assessment of pollination services provided by wild pollinator communities. We measured fruit and seed set and investigated the effect of within-yard characteristics and adjacent floral resources on plant pollination. Increased pollinator visitation and taxonomic richness generally led to increases in fruit and seed set for all focal plants. Furthermore, fruit and seed set were correlated across the three species, suggesting that pollination services vary across the landscape in ways that are consistent among different plant species. Plant species varied in terms of which pollinator groups provided the most visits and benefit for pollination. Cucumber pollination was linked to visitation by small sweat bees (Lasioglossum spp.), whereas eggplant pollination was linked to visits by bumble bees. 
Purple coneflower was visited by the most diverse group of pollinators and, perhaps due to this phenomenon, was more effectively pollinated in florally-rich gardens. Our results demonstrate how a diversity of wild bees supports pollination of multiple plant species, highlighting the importance of pollinator conservation within cities. Non-crop resources should continue to be planted in urban gardens, as these resources have a neutral and potentially positive effect on crop pollination.", "title": "" }, { "docid": "d67c9703ee45ad306384bbc8fe11b50e", "text": "Approximately thirty-four percent of people who experience acute low back pain (LBP) will have recurrent episodes. It remains unclear why some people experience recurrences and others do not, but one possible cause is a loss of normal control of the back muscles. We investigated whether the control of the short and long fibres of the deep back muscles was different in people with recurrent unilateral LBP from healthy participants. Recurrent unilateral LBP patients, who were symptom free during testing, and a group of healthy volunteers, participated. Intramuscular and surface electrodes recorded the electromyographic activity (EMG) of the short and long fibres of the lumbar multifidus and the shoulder muscle, deltoid, during a postural perturbation associated with a rapid arm movement. EMG onsets of the short and long fibres, relative to that of deltoid, were compared between groups, muscles, and sides. In association with a postural perturbation, short fibre EMG onset occurred later in participants with recurrent unilateral LBP than in healthy participants (p=0.022). The short fibres were active earlier than long fibres on both sides in the healthy participants (p<0.001) and on the non-painful side in the LBP group (p=0.045), but not on the previously painful side in the LBP group. Activity of deep back muscles is different in people with a recurrent unilateral LBP, despite the resolution of symptoms. Because deep back muscle activity is critical for normal spinal control, the current results provide the first evidence of a candidate mechanism for recurrent episodes.", "title": "" }, { "docid": "c120e4390d2f814a32d4eba12c2a7951", "text": "We continue the study of Homomorphic Secret Sharing (HSS), recently introduced by Boyle et al. (Crypto 2016, Eurocrypt 2017). A (2-party) HSS scheme splits an input <i>x</i> into shares (<i></i>x<sup>0</sup>,<i>x</i><sup>1</sup>) such that (1) each share computationally hides <i>x</i>, and (2) there exists an efficient homomorphic evaluation algorithm $\\Eval$ such that for any function (or \"program\") <i></i> from a given class it holds that Eval(<i>x</i><sup>0</sup>,<i>P</i>)+Eval(<i>x</i><sup>1</sup>,<i>P</i>)=<i>P</i>(<i>x</i>). Boyle et al. show how to construct an HSS scheme for branching programs, with an inverse polynomial error, using discrete-log type assumptions such as DDH.\n We make two types of contributions.\n <b>Optimizations</b>. We introduce new optimizations that speed up the previous optimized implementation of Boyle et al. by more than a factor of 30, significantly reduce the share size, and reduce the rate of leakage induced by selective failure.\n <b>Applications.</b> Our optimizations are motivated by the observation that there are natural application scenarios in which HSS is useful even when applied to simple computations on short inputs. 
We demonstrate the practical feasibility of our HSS implementation in the context of such applications.", "title": "" }, { "docid": "fdd0067a8c3ebf285c68cac7172590a7", "text": "We introduce an effective technique to enhance night-time hazy scenes. Our technique builds on multi-scale fusion approach that use several inputs derived from the original image. Inspired by the dark-channel [1] we estimate night-time haze computing the airlight component on image patch and not on the entire image. We do this since under night-time conditions, the lighting generally arises from multiple artificial sources, and is thus intrinsically non-uniform. Selecting the size of the patches is non-trivial, since small patches are desirable to achieve fine spatial adaptation to the atmospheric light, this might also induce poor light estimates and reduced chance of capturing hazy pixels. For this reason, we deploy multiple patch sizes, each generating one input to a multiscale fusion process. Moreover, to reduce the glowing effect and emphasize the finest details, we derive a third input. For each input, a set of weight maps are derived so as to assign higher weights to regions of high contrast, high saliency and small saturation. Finally the derived inputs and the normalized weight maps are blended in a multi-scale fashion using a Laplacian pyramid decomposition. The experimental results demonstrate the effectiveness of our approach compared with recent techniques both in terms of computational efficiency and quality of the outputs.", "title": "" }, { "docid": "69f3c2dbffe44c7da113798a1f528d72", "text": "Behavior modification in health is difficult, as habitual behaviors are extremely well-learned, by definition. This research is focused on building a persuasive system for behavior modification around emotional eating. In this paper, we make strides towards building a just-in-time support system for emotional eating in three user studies. The first two studies involved participants using a custom mobile phone application for tracking emotions, food, and receiving interventions. We found lots of individual differences in emotional eating behaviors and that most participants wanted personalized interventions, rather than a pre-determined intervention. Finally, we also designed a novel, wearable sensor system for detecting emotions using a machine learning approach. This system consisted of physiological sensors which were placed into women's brassieres. We tested the sensing system and found positive results for emotion detection in this mobile, wearable system.", "title": "" }, { "docid": "4b8a46065520d2b7489bf0475321c726", "text": "With computing increasingly becoming more dispersed, relying on mobile devices, distributed computing, cloud computing, etc. there is an increasing threat from adversaries obtaining physical access to some of the computer systems through theft or security breaches. With such an untrusted computing node, a key challenge is how to provide secure computing environment where we provide privacy and integrity for data and code of the application. We propose SecureME, a hardware-software mechanism that provides such a secure computing environment. SecureME protects an application from hardware attacks by using a secure processor substrate, and also from the Operating System (OS) through memory cloaking, permission paging, and system call protection. 
Memory cloaking hides data from the OS but allows the OS to perform regular virtual memory management functions, such as page initialization, copying, and swapping. Permission paging extends the OS paging mechanism to provide a secure way for two applications to establish shared pages for inter-process communication. Finally, system call protection applies spatio-temporal protection for arguments that are passed between the application and the OS. Based on our performance evaluation using microbenchmarks, single-program workloads, and multiprogrammed workloads, we found that SecureME only adds a small execution time overhead compared to a fully unprotected system. Roughly half of the overheads are contributed by the secure processor substrate. SecureME also incurs a negligible additional storage overhead over the secure processor substrate.", "title": "" }, { "docid": "c421007cd20cf1adf5345fc0ef8d6604", "text": "A novel compact monopulse cavity-backed substrate integrated waveguide (SIW) antenna is proposed. The antenna consists of an array of four circularly polarized (CP) cavity-backed SIW antennas, three dual-mode hybrid coupler, and three input ports. TE10 and TE20 modes are excited in the dual-mode hybrid to produce sum and difference patterns, respectively. The antenna is modeled with a fast full-wave hybrid numerical method and also simulated using full-wave Ansoft HFSS. The whole antenna is integrated on a two-layer dielectric with the size of 42 mm × 36 mm. A prototype of the proposed monopulse antenna at the center frequency of 9.9 GHz is manufactured. Measured results show -10-dB impedance bandwidth of 2.4%, 3-dB axial ratio (AR) bandwidth of 1.75%, 12.3-dBi gain, and -28-dB null depth. The proposed antenna has good monopulse radiation characteristics, high efficiency, and can be easily integrated with planar circuits.", "title": "" }, { "docid": "7a7e0363ca4ad5c83a571449f53834ca", "text": "Principal component analysis (PCA) minimizes the mean square error (MSE) and is sensitive to outliers. In this paper, we present a new rotational-invariant PCA based on maximum correntropy criterion (MCC). A half-quadratic optimization algorithm is adopted to compute the correntropy objective. At each iteration, the complex optimization problem is reduced to a quadratic problem that can be efficiently solved by a standard optimization method. The proposed method exhibits the following benefits: 1) it is robust to outliers through the mechanism of MCC which can be more theoretically solid than a heuristic rule based on MSE; 2) it requires no assumption about the zero-mean of data for processing and can estimate data mean during optimization; and 3) its optimal solution consists of principal eigenvectors of a robust covariance matrix corresponding to the largest eigenvalues. In addition, kernel techniques are further introduced in the proposed method to deal with nonlinearly distributed data. Numerical results demonstrate that the proposed method can outperform robust rotational-invariant PCAs based on L1 norm when outliers occur.", "title": "" }, { "docid": "b52f3f298f1bbf96a242b9857f712099", "text": "In multiple-input multiple-output orthogonal frequency-division multiplexing (MIMO-OFDM) systems, multi-user detection (MUD) algorithms play an important role in reducing the effect of multi-access interference (MAI). A combination of the estimation of channel and multi-user detection is proposed for eliminating various interferences and reduce the bit error rate (BER). 
First, a novel sparse based k-nearest neighbor classifier is proposed to estimate the unknown activity factor at a high data rate. The active users are continuously detected and their data are decoded at the base station (BS) receiver. The activity detection considers both the pilot and data symbols. Second, an optimal pilot allocation method is suggested to select the minimum mutual coherence in the measurement matrix for optimal pilot placement. The suggested algorithm for designing pilot patterns significantly improves the results in terms of mean square error (MSE), symbol error rate (SER) and bit error rate for channel detection. An optimal pilot placement reduces the computational complexity and maximizes the accuracy of the system. The performance of the channel estimation (CE) and MUD for the proposed scheme was good as it provided significant results, which were validated through simulations.", "title": "" }, { "docid": "262c11ab9f78e5b3f43a31ad22cf23c5", "text": "Responding to threats in the environment is crucial for survival. Certain types of threat produce defensive responses without necessitating previous experience and are considered innate, whereas other threats are learned by experiencing aversive consequences. Two important innate threats are whether an encountered stimulus is a member of the same species (social threat) and whether a stimulus suddenly appears proximal to the body (proximal threat). These threats are manifested early in human development and robustly elicit defensive responses. Learned threat, on the other hand, enables adaptation to threats in the environment throughout the life span. A well-studied form of learned threat is fear conditioning, during which a neutral stimulus acquires the ability to eliciting defensive responses through pairings with an aversive stimulus. If innate threats can facilitate fear conditioning, and whether different types of innate threats can enhance each other, is largely unknown. We developed an immersive virtual reality paradigm to test how innate social and proximal threats are related to each other and how they influence conditioned fear. Skin conductance responses were used to index the autonomic component of the defensive response. We found that social threat modulates proximal threat, but that neither proximal nor social threat modulates conditioned fear. Our results suggest that distinct processes regulate autonomic activity in response to proximal and social threat on the one hand, and conditioned fear on the other.", "title": "" }, { "docid": "2802e8fd4d8df23d55dee9afac0f4177", "text": "Brain plasticity refers to the brain's ability to change structure and function. Experience is a major stimulant of brain plasticity in animal species as diverse as insects and humans. It is now clear that experience produces multiple, dissociable changes in the brain including increases in dendritic length, increases (or decreases) in spine density, synapse formation, increased glial activity, and altered metabolic activity. These anatomical changes are correlated with behavioral differences between subjects with and without the changes. Experience-dependent changes in neurons are affected by various factors including aging, gonadal hormones, trophic factors, stress, and brain pathology. 
We discuss the important role that changes in dendritic arborization play in brain plasticity and behavior, and we consider these changes in the context of changing intrinsic circuitry of the cortex in processes such as learning.", "title": "" }, { "docid": "22fe3d064e176ae4eca449b4e5b38891", "text": "This paper presents a control technique of cascaded H-bridge multilevel voltage source inverter (CHB-MLI) for a grid-connected photovoltaic system (GCPVS). The proposed control technique is the modified ripple-correlation control maximum power point tracking (MRCC-MPPT). This algorithm has been developed using the mean function concept to continuously correct the maximum power point (MPP) of power transferring from each PV string and to speedily reach the MPP in rapidly shading irradiance. Additionally, It can reduce a PV voltage harmonic filter in the dc-link voltage controller. In task of injecting the quality current to the utility grid, the current control technique based-on the principle of rotating reference frame is proposed. This method can generate the sinusoidal current and independently control the injection of active and reactive power to the utility grid. Simulation results for two H-bridge cells CHB-MLI 4000W/220V/50Hz GCPVS are presented to validate the proposed control scheme.", "title": "" }, { "docid": "4df436dcadb378a4ae72fe558267fddf", "text": "UNLABELLED\nPanic disorder refers to the frequent and recurring acute attacks of anxiety.\n\n\nOBJECTIVE\nThis study describes the routine use of mobiles phones (MPs) and investigates the appearance of possible emotional alterations or symptoms related to their use in patients with panic disorder (PD).\n\n\nBACKGROUND\nWe compared patients with PD and agoraphobia being treated at the Panic and Respiration Laboratory of The Institute of Psychiatry, Federal University of Rio de Janeiro, Brazil, to a control group of healthy volunteers.\n\n\nMETHODS\nAn MP-use questionnaire was administered to a consecutive sample of 50 patients and 70 controls.\n\n\nRESULTS\nPeople with PD showed significant increases in anxiety, tachycardia, respiratory alterations, trembling, perspiration, panic, fear and depression related to the lack of an MP compared to the control group.\n\n\nCONCLUSIONS\nBoth groups exhibited dependence on and were comforted by having an MP; however, people with PD and agoraphobia showed significantly more emotional alterations as well as intense physical and psychological symptoms when they were apart from or unable to use an MP compared to healthy volunteers.", "title": "" }, { "docid": "a6471943d5b80e9b45d216e10a62b2c3", "text": "Comparison of relative fixation rates of synonymous (silent) and nonsynonymous (amino acid-altering) mutations provides a means for understanding the mechanisms of molecular sequence evolution. The nonsynonymous/synonymous rate ratio (omega = d(N)d(S)) is an important indicator of selective pressure at the protein level, with omega = 1 meaning neutral mutations, omega < 1 purifying selection, and omega > 1 diversifying positive selection. Amino acid sites in a protein are expected to be under different selective pressures and have different underlying omega ratios. We develop models that account for heterogeneous omega ratios among amino acid sites and apply them to phylogenetic analyses of protein-coding DNA sequences. These models are useful for testing for adaptive molecular evolution and identifying amino acid sites under diversifying selection. 
Ten data sets of genes from nuclear, mitochondrial, and viral genomes are analyzed to estimate the distributions of omega among sites. In all data sets analyzed, the selective pressure indicated by the omega ratio is found to be highly heterogeneous among sites. Previously unsuspected Darwinian selection is detected in several genes in which the average omega ratio across sites is <1, but in which some sites are clearly under diversifying selection with omega > 1. Genes undergoing positive selection include the beta-globin gene from vertebrates, mitochondrial protein-coding genes from hominoids, the hemagglutinin (HA) gene from human influenza virus A, and HIV-1 env, vif, and pol genes. Tests for the presence of positively selected sites and their subsequent identification appear quite robust to the specific distributional form assumed for omega and can be achieved using any of several models we implement. However, we encountered difficulties in estimating the precise distribution of omega among sites from real data sets.", "title": "" } ]
scidocsrr
729a356481119423cc9b8591f1f201b0
A Theory of Focus Interpretation
[ { "docid": "2c2942905010e71cda5f8b0f41cf2dd0", "text": "1 Focus and anaphoric destressing Consider a pronunciation of (1) with prominence on the capitalized noun phrases. In terms of a relational notion of prominence, the subject NP she] is prominent within the clause S she beats me], and NP Sue] is prominent within the clause S Sue beats me]. This prosody seems to have the pragmatic function of putting the two clauses into opposition, with prominences indicating where they diier, and prosodic reduction of the remaining parts indicating where the clauses are invariant. (1) She beats me more often than Sue beats me Car84], Roc86] and Roo92] propose theories of focus interpretation which formalize the idea just outlined. Under my assumptions, the prominences are the correlates of a syntactic focus features on the two prominent NPs, written as F subscripts. Further, the grammatical representation of (1) includes operators which interpret the focus features at the level of the minimal dominating S nodes. In the logical form below, each focus feature is interpreted by an operator written .", "title": "" } ]
[ { "docid": "1404cce5101d332d88cc33a78a5cb2b1", "text": "PURPOSE\nAmong patients over 50 years of age, separate vertical wiring alone may be insufficient for fixation of fractures of the inferior pole of the patella. Therefore, mechanical and clinical studies were performed in patients over the age of 50 to test the strength of augmentation of separate vertical wiring with cerclage wire (i.e., combined technique).\n\n\nMATERIALS AND METHODS\nMultiple osteotomies were performed to create four-part fractures in the inferior poles of eight pairs of cadaveric patellae. One patella from each pair was fixed with the separate wiring technique, while the other patella was fixed with a combined technique. The ultimate load to failure and stiffness of the fixation were subsequently measured. In a clinical study of 21 patients (average age of 64 years), comminuted fractures of the inferior pole of the patellae were treated using the combined technique. Operative parameters were recorded from which post-operative outcomes were evaluated.\n\n\nRESULTS\nFor cadaveric patellae, whose mean age was 69 years, the mean ultimate loads to failure for the separate vertical wiring technique and the combined technique were 216.4±72.4 N and 324.9±50.6 N, respectively (p=0.012). The mean stiffness for the separate vertical wiring technique and the combined technique was 241.1±68.5 N/mm and 340.8±45.3 N/mm, respectively (p=0.012). In the clinical study, the mean clinical score at final follow-up was 28.1 points.\n\n\nCONCLUSION\nAugmentation of separate vertical wiring with cerclage wire provides enough strength for protected early exercise of the knee joint and uneventful healing.", "title": "" }, { "docid": "32cf33cbd55f05661703d028f9ffe40f", "text": "Due to the ease with which digital information can be altered, many digital forensic techniques have recently been developed to authenticate multimedia content. One important digital forensic result is that adding or deleting frames from an MPEG video sequence introduces a temporally distributed fingerprint into the video can be used to identify frame deletion or addition. By contrast, very little research exists into anti-forensic operations designed to make digital forgeries undetectable by forensic techniques. In this paper, we propose an anti-forensic technique capable of removing the temporal fingerprint from MPEG videos that have undergone frame addition or deletion. We demonstrate that our proposed anti-forensic technique can effectively remove this fingerprint through a series of experiments.", "title": "" }, { "docid": "bb8d59a0aabc0995f42bd153bfb8f67b", "text": "Abnormal release of Ca from sarcoplasmic reticulum (SR) via the cardiac ryanodine receptor (RyR2) may contribute to contractile dysfunction and arrhythmogenesis in heart failure (HF). We previously demonstrated decreased Ca transient amplitude and SR Ca load associated with increased Na/Ca exchanger expression and enhanced diastolic SR Ca leak in an arrhythmogenic rabbit model of nonischemic HF. Here we assessed expression and phosphorylation status of key Ca handling proteins and measured SR Ca leak in control and HF rabbit myocytes. With HF, expression of RyR2 and FK-506 binding protein 12.6 (FKBP12.6) were reduced, whereas inositol trisphosphate receptor (type 2) and Ca/calmodulin-dependent protein kinase II (CaMKII) expression were increased 50% to 100%. The RyR2 complex included more CaMKII (which was more activated) but less calmodulin, FKBP12.6, and phosphatases 1 and 2A. 
The RyR2 was more highly phosphorylated by both protein kinase A (PKA) and CaMKII. Total phospholamban phosphorylation was unaltered, although it was reduced at the PKA site and increased at the CaMKII site. SR Ca leak in intact HF myocytes (which is higher than in control) was reduced by inhibition of CaMKII but was unaltered by PKA inhibition. CaMKII inhibition also increased SR Ca content in HF myocytes. Our results suggest that CaMKII-dependent phosphorylation of RyR2 is involved in enhanced SR diastolic Ca leak and reduced SR Ca load in HF, and may thus contribute to arrhythmias and contractile dysfunction in HF.", "title": "" }, { "docid": "f649f6930e349726bd3185a420b4606c", "text": "Malfunctioning medical devices are one of the leading causes of serious injury and death in the US. Between 2006 and 2011, 5,294 recalls and approximately 1.2 million adverse events were reported to the US Food and Drug Administration (FDA). Almost 23 percent of these recalls were due to computer-related failures, of which approximately 94 percent presented medium to high risk of severe health consequences (such as serious injury or death) to patients. This article investigates the causes of failures in computer-based medical devices and their impact on patients by analyzing human-written descriptions of recalls and adverse event reports obtained from public FDA databases. The authors characterize computer-related failures by deriving fault classes, failure modes, recovery actions, and number of devices affected by the recalls. This analysis is used as a basis for identifying safety issues in life-critical medical devices and providing insights on the future challenges in the design of safety-critical medical devices.", "title": "" }, { "docid": "4177fc3fa7c5abe25e4e144e6c079c1f", "text": "A wideband noise-cancelling low-noise amplifier (LNA) without the use of inductors is designed for low-voltage and low-power applications. Based on the common-gate-common-source (CG-CS) topology, a new approach employing local negative feedback is introduced between the parallel CG and CS stages. The moderate gain at the source of the cascode transistor in the CS stage is utilized to boost the transconductance of the CG transistor. This leads to an LNA with higher gain and lower noise figure (NF) compared with the conventional CG-CS LNA, particularly under low power and voltage constraints. By adjusting the local open-loop gain, the NF can be optimized by distributing the power consumption among transistors and resistors based on their contribution to the NF. The optimal value of the local open-loop gain can be obtained by taking into account the effect of phase shift at high frequency. The linearity is improved by employing two types of distortion-cancelling techniques. Fabricated in a 0.13-μm RF CMOS process, the LNA achieves a voltage gain of 19 dB and an NF of 2.8-3.4 dB over a 3-dB bandwidth of 0.2-3.8 GHz. It consumes 5.7 mA from a 1-V supply and occupies an active area of only 0.025 mm2.", "title": "" }, { "docid": "1c2acb749d89626cd17fd58fd7f510e3", "text": "The lack of control of the content published is broadly regarded as a positive aspect of the Web, assuring freedom of speech to its users. On the other hand, there is also a lack of control of the content accessed by users when browsing Web pages. In some situations this lack of control may be undesired. For instance, parents may not desire their children to have access to offensive content available on the Web. 
In particular, accessing Web pages with nude images is among the most common problem of this sort. One way to tackle this problem is by using automated offensive image detection algorithms which can filter undesired images. Recent approaches on nude image detection use a combination of features based on color, texture, shape and other low level features in order to describe the image content. These features are then used by a classifier which is able to detect offensive images accordingly. In this paper we propose SNIF - simple nude image finder - which uses a color based feature only, extracted by an effective and efficient algorithm for image description, the border/interior pixel classification (BIC), combined with a machine learning technique, namely support vector machines (SVM). SNIF uses a simpler feature model when compared to previously proposed methods, which makes it a fast image classifier. The experiments carried out depict that the proposed method, despite its simplicity, is capable to identify up to 98% of nude images from the test set. This indicates that SNIF is as effective as previously proposed methods for detecting nude images.", "title": "" }, { "docid": "473f51629f0267530a02472fb1e5b7ac", "text": "It has been widely reported that a large number of ERP implementations fail to meet expectations. This is indicative, firstly, of the magnitude of the problems involved in ERP systems implementation and, secondly, of the importance of the ex-ante evaluation and selection process of ERP software. This paper argues that ERP evaluation should extend its scope beyond operational improvements arising from the ERP software/product per se to the strategic impact of ERP on the competitive position of the organisation. Due to the complexity of ERP software, the intangible nature of both costs and benefits, which evolve over time, and the organisational, technological and behavioural impact of ERP, a broad perspective of the ERP systems evaluation process is needed. The evaluation has to be both quantitative and qualitative and requires an estimation of the perceived costs and benefits throughout the life-cycle of ERP systems. The paper concludes by providing a framework of the key issues involved in the selection process of ERP software and the associated costs and benefits. European Journal of Information Systems (2001) 10, 204–215.", "title": "" }, { "docid": "c00a29466c82f972a662b0e41b724928", "text": "We introduce the type theory λµv, a call-by-value variant of Parigot's λµ-calculus, as a Curry-Howard representation theory of classical propositional proofs. The associated rewrite system is Church-Rosser and strongly normalizing, and definitional equality of the type theory is consistent, compatible with cut, congruent and decidable. The attendant call-by-value programming language µPCFv is obtained from λµv by augmenting it by basic arithmetic, conditionals and fixpoints. We study the behavioural properties of µPCFv and show that, though simple, it is a very general language for functional computation with control: it can express all the main control constructs such as exceptions and first-class continuations. Proof-theoretically the dual λµv-constructs of naming and µ-abstraction witness the introduction and elimination rules of absurdity respectively. Computationally they give succinct expression to a kind of generic (forward) \"jump\" operator, which may be regarded as a unifying control construct for functional computation. 
Our goal is that λµv and µPCFv respectively should be to functional computation with first-class access to the flow of control what λ-calculus and PCF respectively are to pure functional programming: λµv gives the logical basis via the Curry-Howard correspondence, and µPCFv is a prototypical language albeit in purified form.", "title": "" }, { "docid": "d48ea163dd0cd5d80ba95beecee5102d", "text": "Foodborne pathogens (FBP) represent an important threat to the consumers' health as they are able to cause different foodborne diseases. In order to eliminate the potential risk of those pathogens, lactic acid bacteria (LAB) have received a great attention in the food biotechnology sector since they play an essential function to prevent bacterial growth and reduce the biogenic amines (BAs) formation. The foodborne illnesses (diarrhea, vomiting, and abdominal pain, etc.) caused by those microbial pathogens is due to various reasons, one of them is related to the decarboxylation of available amino acids that lead to BAs production. The formation of BAs by pathogens in foods can cause the deterioration of their nutritional and sensory qualities. BAs formation can also have toxicological impacts and lead to different types of intoxications. The growth of FBP and their BAs production should be monitored and prevented to avoid such problems. LAB is capable of improving food safety by preventing foods spoilage and extending their shelf-life. LAB are utilized by the food industries to produce fermented products with their antibacterial effects as bio-preservative agents to extent their storage period and preserve their nutritive and gustative characteristics. Besides their contribution to the flavor for fermented foods, LAB secretes various antimicrobial substances including organic acids, hydrogen peroxide, and bacteriocins. Consequently, in this paper, the impact of LAB on the growth of FBP and their BAs formation in food has been reviewed extensively.", "title": "" }, { "docid": "98907e5f8aea574618a2e2409378f9c3", "text": "Nonnegative matrix factorization (NMF) provides a lower rank approximation of a nonnegative matrix, and has been successfully used as a clustering method. In this paper, we offer some conceptual understanding for the capabilities and shortcomings of NMF as a clustering method. Then, we propose Symmetric NMF (SymNMF) as a general framework for graph clustering, which inherits the advantages of NMF by enforcing nonnegativity on the clustering assignment matrix. Unlike NMF, however, SymNMF is based on a similarity measure between data points, and factorizes a symmetric matrix containing pairwise similarity values (not necessarily nonnegative). We compare SymNMF with the widely-used spectral clustering methods, and give an intuitive explanation of why SymNMF captures the cluster structure embedded in the graph representation more naturally. In addition, we develop a Newton-like algorithm that exploits second-order information efficiently, so as to show the feasibility of SymNMF as a practical framework for graph clustering. Our experiments on artificial graph data, text data, and image data demonstrate the substantially enhanced clustering quality of SymNMF over spectral clustering and NMF. 
Therefore, SymNMF is able to achieve better clustering results on both linear and nonlinear manifolds, and serves as a potential basis for many extensions", "title": "" }, { "docid": "6b7d2d82bbfbaa7f55c25b4a304c8d4c", "text": "Services that are delivered over the Internet—e-services— pose unique problems yet offer unprecedented opportunities. In this paper, we classify e-services along the dimensions of their level of digitization and the nature of their target markets (business-to-business, business-toconsumer, consumer-to-consumer). Using the case of application services, we analyze how they differ from traditional software procurement and development. Next, we extend the concept of modular platforms to this domain and identify how knowledge management can be used to assemble rapidly new application services. We also discuss how such traceabilty-based knowledge management can facilitate e-service evolution and version-based market segmentation.", "title": "" }, { "docid": "3e7bac216957b18a24cbd0393b0ff26a", "text": "This research investigated the influence of parent–adolescent communication quality, as perceived by the adolescents, on the relationship between adolescents’ Internet use and verbal aggression. Adolescents (N = 363, age range 10–16, MT1 = 12.84, SD = 1.93) were examined twice with a six-month delay. Controlling for social support in general terms, moderated regression analyses showed that Internet-related communication quality with parents determined whether Internet use is associated with an increase or a decrease in adolescents’ verbal aggression scores over time. A three way interaction indicated that high Internet-related communication quality with peers can have disadvantageous effects if the communication quality with parents is low. Implications on resources and risk factors related to the effects of Internet use are discussed. 2014 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "1bf801e8e0348ccd1e981136f604dd18", "text": "Sketch recognition is one of the integral components used by law enforcement agencies in solving crime. In recent past, software generated composite sketches are being preferred as they are more consistent and faster to construct than hand drawn sketches. Matching these composite sketches to face photographs is a complex task because the composite sketches are drawn based on the witness description and lack minute details which are present in photographs. This paper presents a novel algorithm for matching composite sketches with photographs using transfer learning with deep learning representation. In the proposed algorithm, first the deep learning architecture based facial representation is learned using large face database of photos and then the representation is updated using small problem-specific training database. Experiments are performed on the extended PRIP database and it is observed that the proposed algorithm outperforms recently proposed approach and a commercial face recognition system.", "title": "" }, { "docid": "331df0bd161470558dd5f5061d2b1743", "text": "The work on large-scale graph analytics to date has largely focused on the study of static properties of graph snapshots. However, a static view of interactions between entities is often an oversimplification of several complex phenomena like the spread of epidemics, information diffusion, formation of online communities, and so on. 
Being able to find temporal interaction patterns, visualize the evolution of graph properties, or even simply compare them across time, adds significant value in reasoning over graphs. However, because of lack of underlying data management support, an analyst today has to manually navigate the added temporal complexity of dealing with large evolving graphs. In this paper, we present a system, called Historical Graph Store, that enables users to store large volumes of historical graph data and to express and run complex temporal graph analytical tasks against that data. It consists of two key components: a Temporal Graph Index (TGI), that compactly stores large volumes of historical graph evolution data in a partitioned and distributed fashion; it provides support for retrieving snapshots of the graph as of any timepoint in the past or evolution histories of individual nodes or neighborhoods; and a Spark-based Temporal Graph Analysis Framework (TAF), for expressing complex temporal analytical tasks and for executing them in an efficient and scalable manner. Our experiments demonstrate our system’s efficient storage, retrieval and analytics across a wide variety of queries on large volumes of historical graph data.", "title": "" }, { "docid": "8bd619e8d1816dd5c692317a8fb8e0ed", "text": "The data mining field in computer science specializes in extracting implicit information that is distributed across the stored data records and/or exists as associations among groups of records. Criminal databases contain information on the crimes themselves, the offenders, the victims as well as the vehicles that were involved in the crime. Among these records lie groups of crimes that can be attributed to serial criminals who are responsible for multiple criminal offenses and usually exhibit patterns in their operations, by specializing in a particular crime category (i.e., rape, murder, robbery, etc.), and applying a specific method for implementing their crimes. Discovering serial criminal patterns in crime databases is, in general, a clustering activity in the area of data mining that is concerned with detecting trends in the data by classifying and grouping similar records. In this paper, we report on the different statistical and neural network approaches to the clustering problem in data mining in general, and as it applies to our crime domain in particular. We discuss our approach of using a cascaded network of Kohonen neural networks followed by heuristic processing of the networks outputs that best simulated the experts in the field. We address the issues in this project and the reasoning behind this approach, including: the choice of neural networks, in general, over statistical algorithms as the main tool, and the use of Kohonen networks in particular, the choice for the cascaded approach instead of the direct approach, and the choice of a heuristics subsystem as a back-end subsystem to the neural networks. We also report on the advantages of this approach over both the traditional approach of using a single neural network to accommodate all the attributes, and that of applying a single clustering algorithm on all the data attributes.", "title": "" }, { "docid": "263485ca833637a55f18abcdfff096e2", "text": "We propose an efficient and parameter-free scoring criterion, the factorized conditional log-likelihood (f̂CLL), for learning Bayesian network classifiers. The proposed score is an approximation of the conditional log-likelihood criterion. 
The approximation is devised in order to guarantee decomposability over the network structure, as well as efficient estimation of the optimal parameters, achieving the same time and space complexity as the traditional log-likelihood scoring criterion. The resulting criterion has an information-theoretic interpretation based on interaction information, which exhibits its discriminative nature. To evaluate the performance of the proposed criterion, we present an empirical comparison with state-of-the-art classifiers. Results on a large suite of benchmark data sets from the UCI repository show that f̂CLL-trained classifiers achieve at least as good accuracy as the best compared classifiers, using significantly less computational resources.", "title": "" }, { "docid": "7c4c33097c12f55a08f8a7cc3634c5cb", "text": "Pattern queries are widely used in complex event processing (CEP) systems. Existing pattern matching techniques, however, can provide only limited performance for expensive queries in real-world applications, which may involve Kleene closure patterns, flexible event selection strategies, and events with imprecise timestamps. To support these expensive queries with high performance, we begin our study by analyzing the complexity of pattern queries, with a focus on the fundamental understanding of which features make pattern queries more expressive and at the same time more computationally expensive. This analysis allows us to identify performance bottlenecks in processing those expensive queries, and provides key insights for us to develop a series of optimizations to mitigate those bottlenecks. Microbenchmark results show superior performance of our system for expensive pattern queries while most state-of-the-art systems suffer from poor performance. A thorough case study on Hadoop cluster monitoring further demonstrates the efficiency and effectiveness of our proposed techniques.", "title": "" }, { "docid": "be4a4e3385067ce8642ff83ed76c4dcf", "text": "We examine what makes a search system domain-specific and find that previous definitions are incomplete. We propose a new definition of domain specific search, together with a corresponding model, to assist researchers, systems designers and system beneficiaries in their analysis of their own domain. This model is then instantiated for two domains: intellectual property search (i.e. patent search) and medical or healthcare search. For each of the two we follow the theoretical model and identify outstanding issues. We find that the choice of dimensions is still an open issue, as linear independence is often absent and specific use-cases, particularly those related to interactive IR, still cannot be covered by the proposed model.", "title": "" }, { "docid": "07aa8c56cdf98a389526c0bdf9a31be1", "text": "Machine translation evaluation methods are highly necessary in order to analyze the performance of translation systems. Up to now, the most traditional methods are the use of automatic measures such as BLEU or the quality perception performed by native human evaluations. In order to complement these traditional procedures, the current paper presents a new human evaluation based on the expert knowledge about the errors encountered at several linguistic levels: orthographic, morphological, lexical, semantic and syntactic. The results obtained in these experiments show that some linguistic errors could have more influence than other at the time of performing a perceptual evaluation.", "title": "" } ]
scidocsrr
8b0ec2b6908a5946b364b9d95fcecf71
3D human pose estimation in video with temporal convolutions and semi-supervised training
[ { "docid": "046710d2b22adeec4a8ebc3656e274be", "text": "This paper addresses the challenge of 3D human pose estimation from a single color image. Despite the general success of the end-to-end learning paradigm, top performing approaches employ a two-step solution consisting of a Convolutional Network (ConvNet) for 2D joint localization and a subsequent optimization step to recover 3D pose. In this paper, we identify the representation of 3D pose as a critical issue with current ConvNet approaches and make two important contributions towards validating the value of end-to-end learning for this task. First, we propose a fine discretization of the 3D space around the subject and train a ConvNet to predict per voxel likelihoods for each joint. This creates a natural representation for 3D pose and greatly improves performance over the direct regression of joint coordinates. Second, to further improve upon initial estimates, we employ a coarse-to-fine prediction scheme. This step addresses the large dimensionality increase and enables iterative refinement and repeated processing of the image features. The proposed approach outperforms all state-of-the-art methods on standard benchmarks achieving a relative error reduction greater than 30% on average. Additionally, we investigate using our volumetric representation in a related architecture which is suboptimal compared to our end-to-end approach, but is of practical interest, since it enables training when no image with corresponding 3D groundtruth is available, and allows us to present compelling results for in-the-wild images.", "title": "" }, { "docid": "101c03b85e3cc8518a158d89cc9b3b39", "text": "Machine translation has recently achieved impressive performance thanks to recent advances in deep learning and the availability of large-scale parallel corpora. There have been numerous attempts to extend these successes to low-resource language pairs, yet requiring tens of thousands of parallel sentences. In this work, we take this research direction to the extreme and investigate whether it is possible to learn to translate even without any parallel data. We propose a model that takes sentences from monolingual corpora in two different languages and maps them into the same latent space. By learning to reconstruct in both languages from this shared feature space, the model effectively learns to translate without using any labeled data. We demonstrate our model on two widely used datasets and two language pairs, reporting BLEU scores of 32.8 and 15.1 on the Multi30k and WMT English-French datasets, without using even a single parallel sentence at training time.", "title": "" } ]
[ { "docid": "8fccceb2757decb670eed84f4b2405a1", "text": "This paper develops and evaluates search and optimization techniques for autotuning 3D stencil (nearest neighbor) computations on GPUs. Observations indicate that parameter tuning is necessary for heterogeneous GPUs to achieve optimal performance with respect to a search space. Our proposed framework takes a most concise specification of stencil behavior from the user as a single formula, autogenerates tunable code from it, systematically searches for the best configuration and generates the code with optimal parameter configurations for different GPUs. This autotuning approach guarantees adaptive performance for different generations of GPUs while greatly enhancing programmer productivity. Experimental results show that the delivered floating point performance is very close to previous handcrafted work and outperforms other autotuned stencil codes by a large margin. Furthermore, heterogeneous GPU clusters are shown to exhibit the highest performance for dissimilar tuning parameters leveraging proportional partitioning relative to single-GPU performance.", "title": "" }, { "docid": "5c90cd6c4322c30efb90589b1a65192e", "text": "The sure thing principle and the law of total probability are basic laws in classic probability theory. A disjunction fallacy leads to the violation of these two classical laws. In this paper, an Evidential Markov (EM) decision making model based on Dempster-Shafer (D-S) evidence theory and Markov modelling is proposed to address this issue and model the real human decision-making process. In an evidential framework, the states are extended by introducing an uncertain state which represents the hesitance of a decision maker. The classical Markov model can not produce the disjunction effect, which assumes that a decision has to be certain at one time. However, the state is allowed to be uncertain in the EM model before the final decision is made. An extra uncertainty degree parameter is defined by a belief entropy, named Deng entropy, to assignment the basic probability assignment of the uncertain state, which is the key to predict the disjunction effect. A classical categorization decision-making experiment is used to illustrate the effectiveness and validity of EM model. The disjunction effect can be well predicted ∗Corresponding author at Wen Jiang: School of Electronics and Information, Northwestern Polytechnical University, Xi’an, Shaanxi 710072, China. Tel: (86-29)88431267. E-mail address: [email protected], [email protected] Preprint submitted to Elsevier May 19, 2017 and the free parameters are less compared with the existing models.", "title": "" }, { "docid": "fd85e1c686c1542920dff1f0e323ed33", "text": "This index covers all technical items - papers, correspondence, reviews, etc. - that appeared in this periodical during the year, and items from previous years that were commented upon or corrected in this year. Departments and other items may also be covered if they have been judged to have archival value. The Author Index contains the primary entry for each item, listed under the first author's name. The primary entry includes the co-authors' names, the title of the paper or other item, and its location, specified by the publication abbreviation, year, month, and inclusive pagination. The Subject Index contains entries describing the item under all appropriate subject headings, plus the first author's name, the publication abbreviation, month, and year, and inclusive pages. 
Note that the item title is found only under the primary entry in the Author Index.", "title": "" }, { "docid": "7e1e475f5447894a6c246e7d47586c4b", "text": "Between 1983 and 2003 forty accidental autoerotic deaths (all males, 13-79 years old) have been investigated at the Institute of Legal Medicine in Hamburg. Three cases with a rather unusual scenery are described in detail: (1) a 28-year-old fireworker was found hanging under a bridge in a peculiar bound belt system. The autopsy and the reconstruction revealed signs of asphyxiation, feminine underwear, and several layers of plastic clothing. (2) A 16-year-old pupil dressed with feminine plastic and rubber utensils fixed and strangulated himself with an electric wire. (3) A 28-year-old handicapped man suffered from progressive muscular dystrophy and was nearly unable to move. His bizarre sexual fantasies were exaggerating: he induced a nurse to draw plastic bags over his body, close his mouth with plastic strips, and put him in a rubbish container where he died from suffocation.", "title": "" }, { "docid": "01edfc6eb157dc8cf2642f58cf3aba25", "text": "Understanding developmental processes, especially in non-model crop plants, is extremely important in order to unravel unique mechanisms regulating development. Chickpea (C. arietinum L.) seeds are especially valued for their high carbohydrate and protein content. Therefore, in order to elucidate the mechanisms underlying seed development in chickpea, deep sequencing of transcriptomes from four developmental stages was undertaken. In this study, next generation sequencing platform was utilized to sequence the transcriptome of four distinct stages of seed development in chickpea. About 1.3 million reads were generated which were assembled into 51,099 unigenes by merging the de novo and reference assemblies. Functional annotation of the unigenes was carried out using the Uniprot, COG and KEGG databases. RPKM based digital expression analysis revealed specific gene activities at different stages of development which was validated using Real time PCR analysis. More than 90% of the unigenes were found to be expressed in at least one of the four seed tissues. DEGseq was used to determine differentially expressing genes which revealed that only 6.75% of the unigenes were differentially expressed at various stages. Homology based comparison revealed 17.5% of the unigenes to be putatively seed specific. Transcription factors were predicted based on HMM profiles built using TF sequences from five legume plants and analyzed for their differential expression during progression of seed development. Expression analysis of genes involved in biosynthesis of important secondary metabolites suggested that chickpea seeds can serve as a good source of antioxidants. Since transcriptomes are a valuable source of molecular markers like simple sequence repeats (SSRs), about 12,000 SSRs were mined in chickpea seed transcriptome and few of them were validated. In conclusion, this study will serve as a valuable resource for improved chickpea breeding.", "title": "" }, { "docid": "0fd5256c319f7be353d57ed336d94587", "text": "As a precious part of the human cultural heritage, Chinese poetry has influenced people for generations. Automatic poetry composition is a challenge for AI. In recent years, significant progress has been made in this area benefiting from the development of neural networks. 
However, the coherence in meaning, theme or even artistic conception for a generated poem as a whole still remains a big problem. In this paper, we propose a novel Salient-Clue mechanism for Chinese poetry generation. Different from previous work which tried to exploit all the context information, our model selects the most salient characters automatically from each so-far generated line to gradually form a salient clue, which is utilized to guide successive poem generation process so as to eliminate interruptions and improve coherence. Besides, our model can be flexibly extended to control the generated poem in different aspects, for example, poetry style, which further enhances the coherence. Experimental results show that our model is very effective, outperforming three strong baselines.", "title": "" }, { "docid": "6d8e6be6a36d30ed2c18e3b80197ea44", "text": "The hash symbol, called a hashtag, is used to mark the keyword or topic in a tweet. It was created organically by users as a way to categorize messages. Hashtags also provide valuable information for many research applications such as sentiment classification and topic analysis. However, only a small number of tweets are manually annotated. Therefore, an automatic hashtag recommendation method is needed to help users tag their new tweets. Previous methods mostly use conventional machine learning classifiers such as SVM or utilize collaborative filtering technique. A bottleneck of these approaches is that they all use the TF-IDF scheme to represent tweets and ignore the semantic information in tweets. In this paper, we also regard hashtag recommendation as a classification task but propose a novel recurrent neural network model to learn vector-based tweet representations to recommend hashtags. More precisely, we use a skip-gram model to generate distributed word representations and then apply a convolutional neural network to learn semantic sentence vectors. Afterwards, we make use of the sentence vectors to train a long short-term memory recurrent neural network (LSTM-RNN). We directly use the produced tweet vectors as features to classify hashtags without any feature engineering. Experiments on real world data from Twitter to recommend hashtags show that our proposed LSTM-RNN model outperforms state-of-the-art methods and LSTM unit also obtains the best performance compared to standard RNN and gated recurrent unit (GRU).", "title": "" }, { "docid": "f3a08d4f896f7aa2d0f1fff04764efc3", "text": "The natural distribution of textual data used in text classification is often imbalanced. Categories with fewer examples are under-represented and their classifiers often perform far below satisfactory. We tackle this problem using a simple probability based term weighting scheme to better distinguish documents in minor categories. This new scheme directly utilizes two critical information ratios, i.e. relevance indicators. Such relevance indicators are nicely supported by probability estimates which embody the category membership. Our experimental study using both Support Vector Machines and Naı̈ve Bayes classifiers and extensive comparison with other classic weighting schemes over two benchmarking data sets, including Reuters-21578, shows significant improvement for minor categories, while the performance for major categories are not jeopardized. Our approach has suggested a simple and effective solution to boost the performance of text classification over skewed data sets. 2007 Elsevier Ltd. 
All rights reserved.", "title": "" }, { "docid": "bc93abd474fe56d744d51317deda03d1", "text": "Land surface temperature (LST) is one of the most important variables measured by satellite remote sensing. Public domain data are available from the newly operational Landsat-8 Thermal Infrared Sensor (TIRS). This paper presents an adjustment of the split window algorithm (SWA) for TIRS that uses atmospheric transmittance and land surface emissivity (LSE) as inputs. Various alternatives for estimating these SWA inputs are reviewed, and a sensitivity analysis of the SWA to misestimating the input parameters is performed. The accuracy of the current development was assessed using simulated Modtran data. The root mean square error (RMSE) of the simulated LST was calculated as 0.93 °C. This SWA development is leading to progress in the determination of LST by Landsat-8 TIRS.", "title": "" }, { "docid": "fbb48416c34d4faee1a87ac2efaf466d", "text": "Do unsupervised methods for learning rich, contextualized token representations obviate the need for explicit modeling of linguistic structure in neural network models for semantic role labeling (SRL)? We address this question by incorporating the massively successful ELMo embeddings (Peters et al., 2018) into LISA (Strubell et al., 2018), a strong, linguisticallyinformed neural network architecture for SRL. In experiments on the CoNLL-2005 shared task we find that though ELMo outperforms typical word embeddings, beginning to close the gap in F1 between LISA with predicted and gold syntactic parses, syntactically-informed models still outperform syntax-free models when both use ELMo, especially on out-of-domain data. Our results suggest that linguistic structures are indeed still relevant in this golden age of deep learning for NLP.", "title": "" }, { "docid": "b6f04270b265cd5a0bb7d0f9542168fb", "text": "This paper presents design and manufacturing procedure of a tele-operative rescue robot. First, the general task to be performed by such a robot is defined, and variant kinematic mechanisms to form the basic structure of the robot are discussed. Choosing an appropriate mechanism, geometric dimensions, and mass properties are detailed to develop a dynamics model for the system. Next, the strength of each component is analyzed to finalize its shape. To complete the design procedure, Patran/Nastran was used to apply the finite element method for strength analysis of complicated parts. Also, ADAMS was used to model the mechanisms, where 3D sketch of each component of the robot was generated by means of Solidworks, and several sets of equations governing the dimensions of system were solved using Matlab. Finally, the components are fabricated and assembled together with controlling hardware. Two main processors are used within the control system of the robot. The operator's PC as the master processor and the laptop installed on the robot as the slave processor. The performance of the system was demonstrated in Rescue robot league of RoboCup 2005 in Osaka (Japan) and achieved the 2nd best design award", "title": "" }, { "docid": "5e8154a99b4b0cc544cab604b680ebd2", "text": "This work presents performance of robust wearable antennas intended to operate in Wireless Body Area Networks (W-BAN) in UHF, TETRAPOL communication band, 380-400 MHz. We propose a Planar Inverted F Antenna (PIFA) as reliable antenna type for UHF W-BAN applications. 
In order to satisfy the robustness requirements of the UHF band, both from communication and mechanical aspect, a new technology for building these antennas was proposed. The antennas are built out of flexible conductive sheets encapsulated inside a silicone based elastomer, Polydimethylsiloxane (PDMS). The proposed antennas are resistive to washing, bending and perforating. From the communication point of view, opting for a PIFA antenna type we solve the problem of coupling to the wearer and thus improve the overall communication performance of the antenna. Several different tests and comparisons were performed in order to check the stability of the proposed antennas when they are placed on the wearer or left in a common everyday environ- ment, on the ground, table etc. S11 deviations are observed and compared with the commercially available wearable antennas. As a final check, the antennas were tested in the frame of an existing UHF TETRAPOL communication system. All the measurements were performed in a real university campus scenario, showing reliable and good performance of the proposed PIFA antennas.", "title": "" }, { "docid": "ea8adfd28e62e99b4e3786a023711300", "text": "Paper documents still represent a large amount of information supports used nowadays and may contain critical data. Even though official documents are secured with techniques such as printed patterns or artwork, paper documents suffer from a lack of security. However, the high availability of cheap scanning and printing hardware allows non-experts to easily create fake documents. As the use of a watermarking system added during the document production step is hardly possible, solutions have to be proposed to distinguish a genuine document from a forged one. In this paper, we present an automatic forgery detection method based on document's intrinsic features at character level. This method is based on the one hand on outlier character detection in a discriminant feature space and on the other hand on the detection of strictly similar characters. Therefore, a feature set is computed for all characters. Then, based on a distance between characters of the same class, the character is classified as a genuine one or a fake one.", "title": "" }, { "docid": "115c06a2e366293850d1ef3d60f2a672", "text": "Accurate network traffic identification plays important roles in many areas such as traffic engineering, QoS and intrusion detection etc. The emergence of many new encrypted applications which use dynamic port numbers and masquerading techniques causes the most challenging problem in network traffic identification field. One of the challenging issues for existing traffic identification methods is that they can’t classify online encrypted traffic. To overcome the drawback of the previous identification scheme and to meet the requirements of the encrypted network activities, our work mainly focuses on how to build an online Internet traffic identification based on flow information. We propose real-time encrypted traffic identification based on flow statistical characteristics using machine learning in this paper. We evaluate the effectiveness of our proposed method through the experiments on different real traffic traces. 
By experiment results and analysis, this method can classify online encrypted network traffic with high accuracy and robustness.", "title": "" }, { "docid": "37dcc23a5504466a5f8200f281487888", "text": "Computational approaches that 'dock' small molecules into the structures of macromolecular targets and 'score' their potential complementarity to binding sites are widely used in hit identification and lead optimization. Indeed, there are now a number of drugs whose development was heavily influenced by or based on structure-based design and screening strategies, such as HIV protease inhibitors. Nevertheless, there remain significant challenges in the application of these approaches, in particular in relation to current scoring schemes. Here, we review key concepts and specific features of small-molecule–protein docking methods, highlight selected applications and discuss recent advances that aim to address the acknowledged limitations of established approaches.", "title": "" }, { "docid": "805ff3489d9bc145a0a8b91ce58ce3f9", "text": "The present experiment was designed to test the theory that psychological procedures achieve changes in behavior by altering the level and strength of self-efficacy. In this formulation, perceived self-efficacy. In this formulation, perceived self-efficacy influences level of performance by enhancing intensity and persistence of effort. Adult phobics were administered treatments based upon either performance mastery experiences, vicarious experiences., or they received no treatment. Their efficacy expectations and approach behavior toward threats differing on a similarity dimension were measured before and after treatment. In accord with our prediction, the mastery-based treatment produced higher, stronger, and more generalized expectations of personal efficacy than did the treatment relying solely upon vicarious experiences. Results of a microanalysis further confirm the hypothesized relationship between self-efficacy and behavioral change. Self-efficacy was a uniformly accurate predictor of performance on tasks of varying difficulty with different threats regardless of whether the changes in self-efficacy were produced through enactive mastery or by vicarious experience alone.", "title": "" }, { "docid": "157c084aa6622c74449f248f98314051", "text": "A magnetically-tuned multi-mode VCO featuring an ultra-wide frequency tuning range is presented. By changing the magnetic coupling coefficient between the primary and secondary coils in the transformer tank, the frequency tuning range of a dual-band VCO is greatly increased to continuously cover the whole E-band. Fabricated in a 65-nm CMOS process, the presented VCO measures a tuning range of 44.2% from 57.5 to 90.1 GHz while consuming 7mA to 9mA at 1.2V supply. The measured phase noises at 10MHz offset from carrier frequencies of 72.2, 80.5 and 90.1 GHz are -111.8, -108.9 and -105 dBc/Hz, respectively, which corresponds to a FOMT between -192.2 and -184.2dBc/Hz.", "title": "" }, { "docid": "a117e006785ab63ef391d882a097593f", "text": "An increasing interest in understanding human perception in social media has led to the study of the processes of personality self-presentation and impression formation based on user profiles and text blogs. However, despite the popularity of online video, we do not know of any attempt to study personality impressions that go beyond the use of text and still photos. 
In this paper, we analyze one facet of YouTube as a repository of brief behavioral slices in the form of personal conversational vlogs, which are a unique medium for selfpresentation and interpersonal perception. We investigate the use of nonverbal cues as descriptors of vloggers’ behavior and find significant associations between automatically extracted nonverbal cues for several personality judgments. As one notable result, audio and visual cues together can be used to predict 34% of the variance of the Extraversion trait of the Big Five model. In addition, we explore the associations between vloggers’ personality scores and the level of social attention that their videos received in YouTube. Our study is conducted on a dataset of 442 YouTube vlogs and 2,210 annotations collected using Amazon’s Mechanical Turk.", "title": "" }, { "docid": "45ff2c8f796eb2853f75bedd711f3be4", "text": "High-quality (<inline-formula> <tex-math notation=\"LaTeX\">$Q$ </tex-math></inline-formula>) oscillators are notorious for being extremely slow during startup. Their long startup time increases the average power consumption in duty-cycled systems. This paper presents a novel precisely timed energy injection technique to speed up the startup behavior of high-<inline-formula> <tex-math notation=\"LaTeX\">$Q$ </tex-math></inline-formula> oscillators. The proposed solution is also insensitive to the frequency variations of the injection signal over a wide enough range that makes it possible to employ an integrated oscillator to provide the injection signal. A theoretical analysis is carried out to calculate the optimal injection duration. As a proof-of-concept, the proposed technique is incorporated in the design of crystal oscillators and is realized in a TSMC 65-nm CMOS technology. To verify the robustness of our technique across resonator parameters and frequency variations, six crystal resonators from different manufacturers with different packagings and <inline-formula> <tex-math notation=\"LaTeX\">$Q$ </tex-math></inline-formula> factors were tested. The implemented IC includes multiple crystal oscillators at 1.84, 10, and 50 MHz frequencies, with measured startup times of 58, 10, and 2 <inline-formula> <tex-math notation=\"LaTeX\">$\\mu \\text{s}$ </tex-math></inline-formula>, while consuming 6.7, 45.5, and 195 <inline-formula> <tex-math notation=\"LaTeX\">$\\mu \\text{W}$ </tex-math></inline-formula> at steady state, respectively. To the authors’ best knowledge, this is the fastest, reported startup time in the literature, with >15<inline-formula> <tex-math notation=\"LaTeX\">$\\times$ </tex-math></inline-formula> improvement over prior art, while requiring the smallest startup energy (~12 nJ).", "title": "" }, { "docid": "f9da4bfe6dba0a6ec886758b164cd10b", "text": "Physically based deformable models have been widely embraced by the Computer Graphics community. Many problems outlined in a previous survey by Gibson and Mirtich [GM97] have been addressed, thereby making these models interesting and useful for both offline and real-time applications, such as motion pictures and video games. In this paper, we present the most significant contributions of the past decade, which produce such impressive and perceivably realistic animations and simulations: finite element/difference/volume methods, mass-spring systems, meshfree methods, coupled particle systems and reduced deformable models based on modal analysis. 
For completeness, we also make a connection to the simulation of other continua, such as fluids, gases and melting objects. Since time integration is inherent to all simulated phenomena, the general notion of time discretization is treated separately, while specifics are left to the respective models. Finally, we discuss areas of application, such as elastoplastic deformation and fracture, cloth and hair animation, virtual surgery simulation, interactive entertainment and fluid/smoke animation, and also suggest areas for future research.", "title": "" } ]
scidocsrr
05cd53ecd1cb6729dc198535fb56c571
Real-Time Detection of Denial-of-Service Attacks in IEEE 802.11p Vehicular Networks
[ { "docid": "b64652316fc9ac5d1a049ab29e770afa", "text": "Future cooperative Intelligent Transport Systems (ITS) applications aimed to improve safety, efficiency and comfort on our roads put high demands on the underlying wireless communication system. To gain better understanding of the limitations of the 5.9 GHz frequency band and the set of communication protocols for medium range vehicle to vehicle (V2V) communication, a set of field trials with CALM M5 enabled prototypes has been conducted. This paper describes five different real vehicle traffic scenarios covering both urban and rural settings at varying vehicle speeds and under varying line-of-sight (LOS) conditions and discusses the connectivity (measured as Packet Reception Ratio) that could be achieved between the two test vehicles. Our measurements indicate a quite problematic LOS sensitivity that strongly influences the performance of V2V-based applications. We further discuss how the awareness of these context-based connectivity problems can be used to improve the design of possible future cooperative ITS safety applications.", "title": "" }, { "docid": "dbca7415a584b3a8b9348c47d5ab2fa4", "text": "The shared nature of the medium in wireless networks makes it easy for an adversary to launch a Wireless Denial of Service (WDoS) attack. Recent studies, demonstrate that such attacks can be very easily accomplished using off-the-shelf equipment. To give a simple example, a malicious node can continually transmit a radio signal in order to block any legitimate access to the medium and/or interfere with reception. This act is called jamming and the malicious nodes are referred to as jammers. Jamming techniques vary from simple ones based on the continual transmission of interference signals, to more sophisticated attacks that aim at exploiting vulnerabilities of the particular protocol used. In this survey, we present a detailed up-to-date discussion on the jamming attacks recorded in the literature. We also describe various techniques proposed for detecting the presence of jammers. Finally, we survey numerous mechanisms which attempt to protect the network from jamming attacks. We conclude with a summary and by suggesting future directions.", "title": "" } ]
[ { "docid": "19c3bd8d434229d98741b04d3041286b", "text": "The availability of powerful microprocessors and high-speed networks as commodity components has enabled high performance computing on distributed systems (wide-area cluster computing). In this environment, as the resources are usually distributed geographically at various levels (department, enterprise, or worldwide) there is a great challenge in integrating, coordinating and presenting them as a single resource to the user; thus forming a computational grid. Another challenge comes from the distributed ownership of resources with each resource having its own access policy, cost, and mechanism. The proposed Nimrod/G grid-enabled resource management and scheduling system builds on our earlier work on Nimrod and follows a modular and component-based architecture enabling extensibility, portability, ease of development, and interoperability of independently developed components. It uses the Globus toolkit services and can be easily extended to operate with any other emerging grid middleware services. It focuses on the management and scheduling of computations over dynamic resources scattered geographically across the Internet at department, enterprise, or global level with particular emphasis on developing scheduling schemes based on the concept of computational economy for a real test bed, namely, the Globus testbed (GUSTO).", "title": "" }, { "docid": "34896de2fb07161ecfa442581c1c3260", "text": "Predicting the location of a video based on its content is a very meaningful, yet very challenging problem. Most existing work has focused on developing representative visual features and then searching for visually nearest neighbors in the development set to achieve a prediction. Interestingly, the relationship between scenes has been overlooked in prior work. Two scenes that are visually different, but frequently co-occur in same location, should naturally be considered similar for the geotagging problem. To build upon the above ideas, we propose to model the geo-spatial distributions of scenes by Gaussian Mixture Models (GMMs) and measure the distribution similarity by the Jensen-Shannon divergence (JSD). Subsequently, we present the Spatial Relationship Model (SRM) for geotagging which integrates the geo-spatial relationship of scenes into a hierarchical framework. We segment the Earth's surface into multiple levels of grids and measure the likelihood of input videos with an adaptation to region granularities. We have evaluated our approach using the YFCC100M dataset in the context of the MediaEval 2014 placing task. The total set of 35,000 geotagged videos is further divided into a training set of 25,000 videos and a test set of 10,000 videos. Our experimental results demonstrate the effectiveness of our proposed framework, as our solution achieves good accuracy and outperforms existing visual approaches for video geotagging.", "title": "" }, { "docid": "6adfcf6aec7b33a82e3e5e606c93295d", "text": "Cyber security is a serious global concern. The potential of cyber terrorism has posed a threat to national security; meanwhile the increasing prevalence of malware and incidents of cyber attacks hinder the utilization of the Internet to its greatest benefit and incur significant economic losses to individuals, enterprises, and public organizations. This paper presents some recent advances in intrusion detection, feature selection, and malware detection. 
In intrusion detection, stealthy and low profile attacks that include only few carefully crafted packets over an extended period of time to delude firewalls and the intrusion detection system (IDS) have been difficult to detect. In protection against malware (trojans, worms, viruses, etc.), how to detect polymorphic and metamorphic versions of recognized malware using static scanners is a great challenge. We present in this paper an agent based IDS architecture that is capable of detecting probe attacks at the originating host and denial of service (DoS) attacks at the boundary controllers. We investigate and compare the performance of different classifiers implemented for intrusion detection purposes. Further, we study the performance of the classifiers in real-time detection of probes and DoS attacks, with respect to intrusion data collected on a real operating network that includes a variety of simulated attacks. Feature selection is as important for IDS as it is for many other modeling problems. We present several techniques for feature selection and compare their performance in the IDS application. It is demonstrated that, with appropriately chosen features, both probes and DoS attacks can be detected in real time or near real time at the originating host or at the boundary controllers. We also briefly present some encouraging recent results in detecting polymorphic and metamorphic malware with advanced static, signature-based scanning techniques.", "title": "" }, { "docid": "78976c627fb72db5393837169060a92a", "text": "Although many variants of language models have been proposed for information retrieval, there are two related retrieval heuristics remaining \"external\" to the language modeling approach: (1) proximity heuristic which rewards a document where the matched query terms occur close to each other; (2) passage retrieval which scores a document mainly based on the best matching passage. Existing studies have only attempted to use a standard language model as a \"black box\" to implement these heuristics, making it hard to optimize the combination parameters.\n In this paper, we propose a novel positional language model (PLM) which implements both heuristics in a unified language model. The key idea is to define a language model for each position of a document, and score a document based on the scores of its PLMs. The PLM is estimated based on propagated counts of words within a document through a proximity-based density function, which both captures proximity heuristics and achieves an effect of \"soft\" passage retrieval. We propose and study several representative density functions and several different PLM-based document ranking strategies. Experiment results on standard TREC test collections show that the PLM is effective for passage retrieval and performs better than a state-of-the-art proximity-based retrieval model.", "title": "" }, { "docid": "7fb2348fbde9dbef88357cc79ff394c5", "text": "This paper presents a measurement system with capacitive sensor connected to an open-source electronic platform Arduino Uno. A simple code was modified in the project, which ensures that the platform works as interface for the sensor. The code can be modified and upgraded at any time to fulfill other specific applications. The simulations were carried out in the platform's own environment and the collected data are represented in graphical form. 
Accuracy of developed measurement platform is 0.1 pF.", "title": "" }, { "docid": "52315c6102fd4b12ad854c8df3662a50", "text": "Compressed sensing (CS) is an emerging field that has attracted considerable research interest over the past few years. Previous review articles in CS limit their scope to standard discrete-to-discrete measurement architectures using matrices of randomized nature and signal models based on standard sparsity. In recent years, CS has worked its way into several new application areas. This, in turn, necessitates a fresh look on many of the basics of CS. The random matrix measurement operator must be replaced by more structured sensing architectures that correspond to the characteristics of feasible acquisition hardware. The standard sparsity prior has to be extended to include a much richer class of signals and to encode broader data models, including continuous-time signals. In our overview, the theme is exploiting signal and measurement structure in compressive sensing. The prime focus is bridging theory and practice; that is, to pinpoint the potential of structured CS strategies to emerge from the math to the hardware. Our summary highlights new directions as well as relations to more traditional CS, with the hope of serving both as a review to practitioners wanting to join this emerging field, and as a reference for researchers that attempts to put some of the existing ideas in perspective of practical applications.", "title": "" }, { "docid": "864693eec62d70486c681dba00c2375c", "text": "Since the early 1950s, more than one hundred cyanobacterial strains, belonging to twenty different genera, have been investigated with regard to the production and the released exocellular polysaccharides (RPS) into the culture medium. The chemical and rheological properties show that such polysaccharides are complex anionic heteropolymers, in about 80% cases containing six to ten different monosaccharides and in about 90% cases containing one or more uronic acids; almost all have non-saccharidic components, such as peptidic moieties, acetyl, pyruvyl and/or sulphate groups. Based on such ingredients, cyanobacterial RPSs show promise as thickening or suspending agents, emulsifying or cation-chelating compounds and the residual capsulated cyanobacterial biomass, following RPS extraction, could be an effective cation-chelating material. Indeed, when eleven unicellular and filamentous RPS-producing cyanobacteria, selected on the basis of the anion density of their RPSs and on the abundance of their outermost investments, were screened for their ability to remove Cu2+ from aqueous solutions, a quick and most effective heavy metal adsorption was observed for the unicellular Cyanothece CE 4 and the filamentous Cyanospira capsulata. These results suggest the possibility to accomplish, through the exploitation of RPS-producing cyanobacteria, a multiproduct strategy to procure a wide range of biopolymers suited to various industrial applications, in addition to the residual biomass effective in the recovery of heavy metals from polluted waters.", "title": "" }, { "docid": "8ff4c6a5208b22a47eb5006c329817dc", "text": "Goal: To evaluate a novel kind of textile electrodes based on woven fabrics treated with PEDOT:PSS, through an easy fabrication process, testing these electrodes for biopotential recordings. Methods: Fabrication is based on raw fabric soaking in PEDOT:PSS using a second dopant, squeezing and annealing. 
The electrodes have been tested on human volunteers, in terms of both skin contact impedance and quality of the ECG signals recorded at rest and during physical activity (power spectral density, baseline wandering, QRS detectability, and broadband noise). Results: The electrodes are able to operate in both wet and dry conditions. Dry electrodes are more prone to noise artifacts, especially during physical exercise and mainly due to the unstable contact between the electrode and the skin. Wet (saline) electrodes present a stable and reproducible behavior, which is comparable or better than that of traditional disposable gelled Ag/AgCl electrodes. Conclusion: The achieved results reveal the capability of this kind of electrodes to work without the electrolyte, providing a valuable interface with the skin, due to mixed electronic and ionic conductivity of PEDOT:PSS. These electrodes can be effectively used for acquiring ECG signals. Significance: Textile electrodes based on PEDOT:PSS represent an important milestone in wearable monitoring, as they present an easy and reproducible fabrication process, very good performance in wet and dry (at rest) conditions and a superior level of comfort with respect to textile electrodes proposed so far. This paves the way to their integration into smart garments.", "title": "" }, { "docid": "dbf667a05877c170cbaac1f8e0417cac", "text": "Preventive maintenance (PM) scheduling is a very challenging task in semiconductor manufacturing due to the complexity of highly integrated fab tools and systems, the interdependence between PM tasks, and the balancing of work-in-process (WIP) with demand/throughput requirements. In this paper, we propose a two-level hierarchical modeling framework. At the higher level is a model for long-term planning, and at the lower level is a model for short-term PM scheduling. Solving the lower level problem is the focus of this paper. We develop mixed-integer programming (MIP) models for scheduling all due PM tasks for a group of tools, over a planning horizon. Interdependence among different PM tasks, production planning data such as projected WIP levels, manpower constraints, and associated PM time windows and costs, are incorporated in the model. Results of a simulation study comparing the performance of the model-based PM schedule with that of a baseline reference schedule are also presented.", "title": "" }, { "docid": "f2aff84f10b59cbc127dab6266cee11c", "text": "This paper extends the Argument Interchange Format to enable it to represent dialogic argumentation. One of the challenges is to tie together the rules expressed in dialogue protocols with the inferential relations between premises and conclusions. The extensions are founded upon two important analogies which minimise the extra ontological machinery required. First, locutions in a dialogue are analogous to AIF Inodes which capture propositional data. Second, steps between locutions are analogous to AIF S-nodes which capture inferential movement. This paper shows how these two analogies combine to allow both dialogue protocols and dialogue histories to be represented alongside monologic arguments in a single coherent system.", "title": "" }, { "docid": "aa98b79f4c20ad55a979329a6df947b3", "text": "Parallel processing is an essential requirement for optimum computations in modern equipment. In this paper, a communication strategy for the parallelized Flower Pollination Algorithm is proposed for solving numerical optimization problems. 
In this proposed method, the population flowers are split into several independent groups based on the original structure of the Flower Pollination Algorithm (FPA), and the proposed communication strategy provides the information flow for the flowers to communicate in different groups. Four benchmark functions are used to test the behavior of convergence, the accuracy, and the speed of the proposed method. According to the experimental result, the proposed communicational strategy increases the accuracy of the FPA on finding the best solution is up to 78% in comparison with original method.", "title": "" }, { "docid": "fa71a2d44ea95cf51a9e2d48f1fdcf29", "text": "A recent study showed that people evaluate products more positively when they are physically associated with art images than similar non-art images. Neuroimaging studies of visual art have investigated artistic style and esthetic preference but not brain responses attributable specifically to the artistic status of images. Here we tested the hypothesis that the artistic status of images engages reward circuitry, using event-related functional magnetic resonance imaging (fMRI) during viewing of art and non-art images matched for content. Subjects made animacy judgments in response to each image. Relative to non-art images, art images activated, on both subject- and item-wise analyses, reward-related regions: the ventral striatum, hypothalamus and orbitofrontal cortex. Neither response times nor ratings of familiarity or esthetic preference for art images correlated significantly with activity that was selective for art images, suggesting that these variables were not responsible for the art-selective activations. Investigation of effective connectivity, using time-varying, wavelet-based, correlation-purged Granger causality analyses, further showed that the ventral striatum was driven by visual cortical regions when viewing art images but not non-art images, and was not driven by regions that correlated with esthetic preference for either art or non-art images. These findings are consistent with our hypothesis, leading us to propose that the appeal of visual art involves activation of reward circuitry based on artistic status alone and independently of its hedonic value.", "title": "" }, { "docid": "2aed918913e6b72603e3dfdfca710572", "text": "We investigate the task of building a domain aware chat system which generates intelligent responses in a conversation comprising of different domains. The domain in this case is the topic or theme of the conversation. To achieve this, we present DOM-Seq2Seq, a domain aware neural network model based on the novel technique of using domain-targeted sequence-to-sequence models (Sutskever et al., 2014) and a domain classifier. The model captures features from current utterance and domains of the previous utterances to facilitate the formation of relevant responses. We evaluate our model on automatic metrics and compare our performance with the Seq2Seq model.", "title": "" }, { "docid": "29626105b7d6dad21162230296deef9a", "text": "A quasi-Z-source inverter (qZSI) could achieve buck/boost conversion as well as dc to ac inversion in a single-stage topology, which reduces the structure cost when compared to a traditional two-stage inverter. Specifically, the buck/boost conversion was accomplished via shoot-through state which took place across all phase legs of the inverter. 
In this paper, instead of using traditional dual-loop-based proportional integral (PI)-P controller, a type 2 based closed-loop voltage controller with novel dc-link voltage reference algorithm was proposed to fulfill the dc-link voltage tracking control of a single-phase qZSI regardless of any loading conditions, without the need of inner inductor current loop. A dc–ac boost inverter with similar circuit parameters as a qZSI was used to verify the flexibility of the proposed controller. The dynamic and transient performances of the proposed controller were investigated to evaluate its superiority against the aforementioned conventional controller. The integrated proposed controller and qZSI topology was then employed in static synchronous compensator application to perform reactive power compensation at the point of common coupling. The effectiveness of the proposed approach was verified through both simulation and experimental studies.", "title": "" }, { "docid": "ec4bf9499f16c415ccb586a974671bf1", "text": "Memory circuit elements, namely memristive, memcapacitive and meminductive systems, are gaining considerable attention due to their ubiquity and use in diverse areas of science and technology. Their modeling within the most widely used environment, SPICE, is thus critical to make substantial progress in the design and analysis of complex circuits. Here, we present a collection of models of different memory circuit elements and provide a methodology for their accurate and reliable modeling in the SPICE environment. We also provide codes of these models written in the most popular SPICE versions (PSpice, LTspice, HSPICE) for the benefit of the reader. We expect this to be of great value to the growing community of scientists interested in the wide range of applications of memory circuit elements.", "title": "" }, { "docid": "51e0e310796e7eecfdd1960529e3b090", "text": "Gender differences in the emotional intensity and content of autobiographical memory (AM) are inconsistent across studies, and may be influenced as much by gender identity as by categorical gender. To explore this question, data were collected from 196 participants (age 18-40), split evenly between men and women. Participants narrated four memories, a neutral event, high point event, low point event, and self-defining memory, completed ratings of emotional intensity for each event, and completed four measures of gender typical identity. For self-reported emotional intensity, gender differences in AM were mediated by identification with stereotypical feminine gender norms. For narrative use of affect terms, both gender and gender typical identity predicted affective expression. The results confirm contextual models of gender identity (e.g., Diamond, 2012 . The desire disorder in research on sexual orientation in women: Contributions of dynamical systems theory. Archives of Sexual Behavior, 41, 73-83) and underscore the dynamic interplay between gender and gender identity in the emotional expression of autobiographical memories.", "title": "" }, { "docid": "1bf8cc02cf21015385cd1fd20ffb2f4e", "text": "© 2018 Macmillan Publishers Limited, part of Springer Nature. All rights reserved. © 2018 Macmillan Publishers Limited, part of Springer Nature. All rights reserved. 1Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, CA, USA. 2Berkeley Sensor and Actuator Center, University of California, Berkeley, CA, USA. 
3Materials Sciences Division, Lawrence Berkeley National Laboratory, Berkeley, CA, USA. *e-mail: [email protected] Healthcare systems today are mostly reactive. Patients contact doctors after they have developed ailments with noticeable symptoms, and are thereafter passive recipients of care and monitoring by specialists. This approach largely fails in preventing the onset of health conditions, prioritizing diagnostics and treatment over proactive healthcare. It further occludes individuals from being active agents in monitoring their own health. The growing field of wearable sensors (or wearables) aims to tackle the limitations of centralized, reactive healthcare by giving individuals insight into the dynamics of their own physiology. The long-term vision is to develop sensors that can be integrated into wearable formats like clothing, wristbands, patches, or tattoos to continuously probe a range of body indicators. By relaying physiological information as the body evolves over healthy and sick states, these sensors will enable individuals to monitor themselves without expensive equipment or trained professionals (Fig. 1). Various physical and chemical sensors will need to be integrated to obtain a complete picture of dynamic health. These sensors will generate vast time series of data that will need to be parsed with big-data techniques to generate personalized baselines indicative of the user’s health1–4. Sensor readings that cohere with the established baseline can then indicate that the body is in a healthy, equilibrium state, while deviations from the baseline can provide early warnings about developing health conditions. Eventually, deviations caused by different pathologies can be ‘fingerprinted’ to make diagnosis more immediate and autonomous. Together, the integration of wearables with big-data analytics can enable individualized fitness monitoring, early detection of developing health conditions, and better management of chronic diseases. This envisioned medical landscape built around continuous, point-of-care sensing spearheads personalized, predictive, and ultimately preventive healthcare.", "title": "" }, { "docid": "dd2bf018a4edfebc754881cbacb6f705", "text": "In this paper, we propose a new unsupervised spectral feature selection model by embedding a graph regularizer into the framework of joint sparse regression for preserving the local structures of data. To do this, we first extract the bases of training data by previous dictionary learning methods and, then, map original data into the basis space to generate their new representations, by proposing a novel joint graph sparse coding (JGSC) model. In JGSC, we first formulate its objective function by simultaneously taking subspace learning and joint sparse regression into account, then, design a new optimization solution to solve the resulting objective function, and further prove the convergence of the proposed solution. Furthermore, we extend JGSC to a robust JGSC (RJGSC) via replacing the least square loss function with a robust loss function, for achieving the same goals and also avoiding the impact of outliers. 
Finally, experimental results on real data sets showed that both JGSC and RJGSC outperformed the state-of-the-art algorithms in terms of ${k}$ -nearest neighbor classification performance.", "title": "" }, { "docid": "20d186b7db540be57492daa805b51b31", "text": "Printability, the capability of a 3D printer to closely reproduce a 3D model, is a complex decision involving several geometrical attributes like local thickness, shape of the thin regions and their surroundings, and topology with respect to thin regions. We present a method for assessment of 3D shape printability which efficiently and effectively computes such attributes. Our method uses a simple and efficient voxel-based representation and associated computations. Using tools from multi-scale morphology and geodesic analysis, we propose several new metrics for various printability problems. We illustrate our method with results taken from a real-life application.", "title": "" } ]
scidocsrr
7a7bada89624f40e41ff43279fd865c4
Localization algorithms for multilateration (MLAT) systems in airport surface surveillance
[ { "docid": "36f960b37e7478d8ce9d41d61195f83a", "text": "An effective technique in locating a source based on intersections of hyperbolic curves defined by the time differences of arrival of a signal received at a number of sensors is proposed. The approach is noniterative and gives au explicit solution. It is an approximate realization of the maximum-likelihood estimator and is shown to attain the Cramer-Rao lower bound near the small error region. Comparisons of performance with existing techniques of beamformer, sphericat-interpolation, divide and conquer, and iterative Taylor-series methods are made. The proposed technique performs significantly better than sphericalinterpolation, and has a higher noise threshold than divide and conquer before performance breaks away from the Cramer-Rao lower bound. It provides an explicit solution form that is not available in the beamformmg and Taylor-series methods. Computational complexity is comparable to spherical-interpolation but substantially less than the Taylor-series method.", "title": "" } ]
[ { "docid": "c8b1a0d5956ced6deaefe603efc523ba", "text": "What can wearable sensors and usage of smart phones tell us about academic performance, self-reported sleep quality, stress and mental health condition? To answer this question, we collected extensive subjective and objective data using mobile phones, surveys, and wearable sensors worn day and night from 66 participants, for 30 days each, totaling 1,980 days of data. We analyzed daily and monthly behavioral and physiological patterns and identified factors that affect academic performance (GPA), Pittsburg Sleep Quality Index (PSQI) score, perceived stress scale (PSS), and mental health composite score (MCS) from SF-12, using these month-long data. We also examined how accurately the collected data classified the participants into groups of high/low GPA, good/poor sleep quality, high/low self-reported stress, high/low MCS using feature selection and machine learning techniques. We found associations among PSQI, PSS, MCS, and GPA and personality types. Classification accuracies using the objective data from wearable sensors and mobile phones ranged from 67-92%.", "title": "" }, { "docid": "6ae229321cdc1fded2b06c55c9371727", "text": "Scott R. Bishop, University of Toronto Mark Lau, University of Toronto Shauna Shapiro, VA Palo Alto Health Care System Linda Carlson, University of Calgary Nicole D. Anderson, University of Toronto James Carmody, University of Massachusetts Medical School Zindel V. Segal, University of Toronto Susan Abbey, University of Toronto Michael Speca, University of Calgary Drew Velting, Columbia University Gerald Devins, University of Toronto", "title": "" }, { "docid": "cbb87b1e7e94c95a2502e79d8440e17f", "text": "Research on integrating small numbers of datasets suggests the use of customized matching rules in order to adapt to the patterns in the data and achieve better results. The state-of-the-art work on matching large numbers of datasets exploits attribute co-occurrence as well as the similarity of values between multiple sources. We build upon these research directions in order to develop a method for generalizing matching knowledge using minimal human intervention. The central idea of our research program is that even in large numbers of datasets of a specific domain patterns (matching knowledge) reoccur, and discovering those can facilitate the integration task. Our proposed approach plans to use and extend existing work of our group on schema and instance matching as well as on learning expressive rules with active learning. We plan to evaluate our approach on publicly available e-commerce data collected from the Web.", "title": "" }, { "docid": "ab05a100cfdb072f65f7dad85b4c5aea", "text": "Expanding retrieval practice refers to the idea that gradually increasing the spacing interval between repeated tests ought to promote optimal long-term retention. Belief in the superiority of this technique is widespread, but empirical support is scarce. In addition, virtually all research on expanding retrieval has examined the learning of word pairs in paired-associate tasks. We report two experiments in which we examined the learning of text materials with expanding and equally spaced retrieval practice schedules. Subjects studied brief texts and recalled them in an initial learning phase. We manipulated the spacing of the repeated recall tests and examined final recall 1 week later. 
Overall we found that (1) repeated testing enhanced retention more than did taking a single test, (2) testing with feedback (restudying the passages) produced better retention than testing without feedback, but most importantly (3) there were no differences between expanding and equally spaced schedules of retrieval practice. Repeated retrieval enhanced long-term retention, but how the repeated tests were spaced did not matter.", "title": "" }, { "docid": "f2f2b48cd35d42d7abc6936a56aa580d", "text": "Complete enumeration of all the sequences to establish global optimality is not feasible as the search space, for a general job-shop scheduling problem, ΠG has an upper bound of (n!). Since the early fifties a great deal of research attention has been focused on solving ΠG, resulting in a wide variety of approaches such as Branch and Bound, Simulated Annealing, Tabu Search, etc. However limited success has been achieved by these methods due to the shear intractability of this generic scheduling problem. Recently, much effort has been concentrated on using neural networks to solve ΠG as they are capable of adapting to new environments with little human intervention and can mimic thought processes. Major contributions in solving ΠG using a Hopfield neural network, as well as applications of back-error propagation to general scheduling problems are presented. To overcome the deficiencies in these applications a modified back-error propagation model, a simple yet powerful parallel architecture which can be successfully simulated on a personal computer, is applied to solve ΠG.", "title": "" }, { "docid": "9eca36b888845c82cc9e65e6bc0db053", "text": "Word embeddings resulting from neural language models have been shown to be a great asset for a large variety of NLP tasks. However, such architecture might be difficult and time-consuming to train. Instead, we propose to drastically simplify the word embeddings computation through a Hellinger PCA of the word cooccurence matrix. We compare those new word embeddings with some well-known embeddings on named entity recognition and movie review tasks and show that we can reach similar or even better performance. Although deep learning is not really necessary for generating good word embeddings, we show that it can provide an easy way to adapt embeddings to specific tasks.", "title": "" }, { "docid": "1be6aecdc3200ed70ede2d5e96cb43be", "text": "In this paper we are exploring different models and methods for improving the performance of text independent speaker identification system for mobile devices. The major issues in speaker recognition for mobile devices are (i) presence of varying background environment, (ii) effect of speech coding introduced by the mobile device, and (iii) impairments due to wireless channel. In this paper, we are proposing multi-SNR multi-environment speaker models and speech enhancement (preprocessing) methods for improving the performance of speaker recognition system in mobile environment. For this study, we have simulated five different background environments (Car, Factory, High frequency, pink noise and white Gaussian noise) using NOISEX data. Speaker recognition studies are carried out on TIMIT, cellular, and microphone speech databases. Autoassociative neural network models are explored for developing these multi-SNR multi-environment speaker models. 
The results indicate that the proposed multi-SNR multi-environment speaker models and speech enhancement preprocessing methods have enhanced the speaker recognition performance in the presence of different noisy environments.", "title": "" }, { "docid": "62766b08b1666085543b732cf839dec0", "text": "The research area of evolutionary multiobjective optimization (EMO) is reaching better understandings of the properties and capabilities of EMO algorithms, and accumulating much evidence of their worth in practical scenarios. An urgent emerging issue is that the favoured EMO algorithms scale poorly when problems have \"many\" (e.g. five or more) objectives. One of the chief reasons for this is believed to be that, in many-objective EMO search, populations are likely to be largely composed of nondominated solutions. In turn, this means that the commonly-used algorithms cannot distinguish between these for selective purposes. However, there are methods that can be used validly to rank points in a nondominated set, and may therefore usefully underpin selection in EMO search. Here we discuss and compare several such methods. Our main finding is that simple variants of the often-overlooked \"Average Ranking\" strategy usually outperform other methods tested, covering problems with 5-20 objectives and differing amounts of inter-objective correlation.", "title": "" }, { "docid": "8c95392ab3cc23a7aa4f621f474d27ba", "text": "Designing agile locomotion for quadruped robots often requires extensive expertise and tedious manual tuning. In this paper, we present a system to automate this process by leveraging deep reinforcement learning techniques. Our system can learn quadruped locomotion from scratch using simple reward signals. In addition, users can provide an open loop reference to guide the learning process when more control over the learned gait is needed. The control policies are learned in a physics simulator and then deployed on real robots. In robotics, policies trained in simulation often do not transfer to the real world. We narrow this reality gap by improving the physics simulator and learning robust policies. We improve the simulation using system identification, developing an accurate actuator model and simulating latency. We learn robust controllers by randomizing the physical environments, adding perturbations and designing a compact observation space. We evaluate our system on two agile locomotion gaits: trotting and galloping. After learning in simulation, a quadruped robot can successfully perform both gaits in the real world.", "title": "" }, { "docid": "a0903fc562ccd9dfe708afbef43009cd", "text": "A stacked field-effect transistor (FET) linear cellular antenna switch adopting a transistor layout with odd-symmetrical drain-source metal wiring and an extremely low-power biasing strategy has been implemented in silicon-on-insulator CMOS technology. A multi-fingered switch-FET device with odd-symmetrical drain-source metal wiring is adopted herein to improve the insertion loss (IL) and isolation of the antenna switch by minimizing the product of the on-resistance and off-capacitance. To remove the spurious emission and digital switching noise problems from the antenna switch driver circuits, an extremely low-power biasing scheme driven by only positive bias voltage has been devised. 
The proposed antenna switch that employs the new biasing scheme shows almost the same power-handling capability and harmonic distortion as a conventional version based on a negative biasing scheme, while greatly reducing long start-up time and wasteful active current consumption in a stand-by mode of the conventional antenna switch driver circuits. The implemented single-pole four-throw antenna switch is perfectly capable of handling a high power signal up to +35 dBm with suitably low IL of less than 1 dB, and shows second- and third-order harmonic distortion of less than -45 dBm when a 1-GHz RF signal with a power of +35 dBm and a 2-GHz RF signal with a power of +33 dBm are applied. The proposed antenna switch consumes almost no static power.", "title": "" }, { "docid": "52b1adf3b7b6bf08651c140d726143c3", "text": "The antifungal potential of aqueous leaf and fruit extracts of Capsicum frutescens against four major fungal strains associated with groundnut storage was evaluated. These seed-borne fungi, namely Aspergillus flavus, A. niger, Penicillium sp. and Rhizopus sp. were isolated by standard agar plate method and identified by macroscopic and microscopic features. The minimum inhibitory concentrations (MIC) and minimum fungicidal concentration (MFC) of C. frutescens extracts were determined. MIC values of the fruit extract were lower compared to the leaf extract. At MIC, leaf extract showed strong activity against A. flavus (88.06%), while fruit extract against A. niger (88.33%) in the well diffusion method. Groundnut seeds treated with C.frutescens fruit extract (10mg/ml) showed a higher rate of fungal inhibition. The present results suggest that groundnuts treated with C. frutescens fruit extracts are capable of preventing fungal infection to a certain extent.", "title": "" }, { "docid": "13f5414bcdc5213fef9458fa31f2e593", "text": "It has been suggested that the prevalence of Helicobacter pylori infection has stabilized in the USA and is decreasing in China. We conducted a systematic literature analysis to test this hypothesis. PubMed and Embase searches were conducted up to 19 January 2015. Trends in the prevalence of H. pylori infection over time were assessed by regression analysis using Microsoft Excel. Overall, 25 Chinese studies (contributing 28 datasets) and 11 US studies (contributing 11 datasets) were included. There was a significant decrease over time in the H. pylori infection prevalence for the Chinese studies overall (p = 0.00018) and when studies were limited to those that used serum immunoglobulin G (IgG) assays to detect H. pylori infection (p = 0.014; 20 datasets). The weighted mean prevalence of H. pylori infection was 66 % for rural Chinese populations and 47 % for urban Chinese populations. There was a significant trend towards a decreasing prevalence of H. pylori infection for studies that included only urban populations (p = 0.04; 9 datasets). This trend was no longer statistically significant when these studies were further restricted to those that used serum IgG assays to detect H. pylori infection, although this may have been because of low statistical power due to the small number of datasets available for this analysis (p = 0.28; 6 datasets). There were no significant trends in terms of changes in the prevalence of H. pylori infection over time for studies conducted in the USA. In conclusion, the prevalence of H. 
pylori infection is most likely decreasing in China, due to a combination of increasing urbanization, which we found to be associated with lower H. pylori infection rates, and possibly also decreasing rates of H. pylori infection within urban populations. This will probably result in a gradual decrease in peptic ulcer and gastric cancer rates in China over time.", "title": "" }, { "docid": "9adbbfb73f27d266f0ac975c784c22f1", "text": "Estonia has one of the most established e-voting systems in the world. Internet voting remote e-voting using the voter’s own equipment was piloted in 2005 [12] (with the first real elections using e-voting being conducted the same year) and has been in use ever since. So far, the Estonian internet voting system has been used for the whole country in three sets of local elections, two European Parliament elections and three parliamentary elections [5]. This chapter begins by exploring the voting system in Estonia; we consider the organisation of the electoral system in the three main kinds of election (municipal, parliamentary and European Parliament), the traditional ways of voting and the methods used to tally votes and elect candidates. Next we investigate the Estonian national ID card, an identity document that plays a key part in enabling internet voting to be possible in Estonia. After considering these pre-requisites, we describe the current internet voting system, including how it has evolved over time and the relatively new verification mechanisms that are available to voters. Next we discuss the assumptions and choices that have been made in the design of this system and the challenges and criticism that it has received. Finally, we conclude by discussing how the system has performed over the 10 years it has been in use, and the impact it appears to have had on voter turnout and satisfaction.", "title": "" }, { "docid": "ea01ef46670d4bb8244df0d6ab08a3d5", "text": "In this paper, statics model of an underactuated wire-driven flexible robotic arm is introduced. The robotic arm is composed of a serpentine backbone and a set of controlling wires. It has decoupled bending rigidity and axial rigidity, which enables the robot large axial payload capacity. Statics model of the robotic arm is developed using the Newton-Euler method. Combined with the kinematics model, the robotic arm deformation as well as the wire motion needed to control the robotic arm can be obtained. The model is validated by experiments. Results show that, the proposed model can well predict the robotic arm bending curve. Also, the bending curve is not affected by the wire pre-tension. This enables the wire-driven robotic arm with potential applications in minimally invasive surgical operations.", "title": "" }, { "docid": "ee4c8c4d9bbd39562ecd644cbc9cde90", "text": "We consider generic optimization problems that can be formu lated as minimizing the cost of a feasible solution w T x over a combinatorial feasible set F ⊂ {0, 1}. For these problems we describe a framework of risk-averse stochastic problems where the cost vector W has independent random components, unknown at the time of so lution. A natural and important objective that incorporates risk in this stochastic setting is to look for a feasible solution whose stochastic cost has a small tail or a small convex combi nation of mean and standard deviation. Our models can be equivalently reformulated as nonconvex programs for whi ch no efficient algorithms are known. In this paper, we make progress on these hard problems. 
Our results are several efficient general-purpose approxim ation schemes. They use as a black-box (exact or approximate) the solution to the underlying deterministic pr oblem and thus immediately apply to arbitrary combinatoria l problems. For example, from an available δ-approximation algorithm to the linear problem, we constru ct aδ(1 + ǫ)approximation algorithm for the stochastic problem, which invokes the linear algorithm only a logarithmic number of times in the problem input (and polynomial in 1 ǫ ), for any desired accuracy level ǫ > 0. The algorithms are based on a geometric analysis of the curvature and approximabilit y of he nonlinear level sets of the objective functions.", "title": "" }, { "docid": "333c8a22b502b771c9f5f0df67d6da1c", "text": "Brain extraction from magnetic resonance imaging (MRI) is crucial for many neuroimaging workflows. Current methods demonstrate good results on non-enhanced T1-weighted images, but struggle when confronted with other modalities and pathologically altered tissue. In this paper we present a 3D convolutional deep learning architecture to address these shortcomings. In contrast to existing methods, we are not limited to non-enhanced T1w images. When trained appropriately, our approach handles an arbitrary number of modalities including contrast-enhanced scans. Its applicability to MRI data, comprising four channels: non-enhanced and contrast-enhanced T1w, T2w and FLAIR contrasts, is demonstrated on a challenging clinical data set containing brain tumors (N=53), where our approach significantly outperforms six commonly used tools with a mean Dice score of 95.19. Further, the proposed method at least matches state-of-the-art performance as demonstrated on three publicly available data sets: IBSR, LPBA40 and OASIS, totaling N=135 volumes. For the IBSR (96.32) and LPBA40 (96.96) data set the convolutional neuronal network (CNN) obtains the highest average Dice scores, albeit not being significantly different from the second best performing method. For the OASIS data the second best Dice (95.02) results are achieved, with no statistical difference in comparison to the best performing tool. For all data sets the highest average specificity measures are evaluated, whereas the sensitivity displays about average results. Adjusting the cut-off threshold for generating the binary masks from the CNN's probability output can be used to increase the sensitivity of the method. Of course, this comes at the cost of a decreased specificity and has to be decided application specific. Using an optimized GPU implementation predictions can be achieved in less than one minute. The proposed method may prove useful for large-scale studies and clinical trials.", "title": "" }, { "docid": "5eaac9e4945b72c93b1dbe3689c5de9f", "text": "Ahstract-A novel TRM calibration procedure aimed to improve the quality of on-wafer S-parameter measurement, especially in mm-wave frequency band, has been proposed. This procedure is based on active reverse signal injections to improve the accuracy of the raw thru s-parameter measurement. This calibration method can effectively improve the S-parameter measurement quality at mm-wave frequency and hence improve the modelling accuracy. This new optimized calibration method eliminates the need of utilizing complex and expensive loadpull system or post calibration optimization algorithms, and can be easily implemented in modelling extraction process or further implemented into other LRL/TRL based calibration algorithms. 
Finally, this proposed method has been tested on a real measurement system over a 16nm FinFET CMOS device to test its validity.", "title": "" }, { "docid": "f9dc4c6277ad29a757dedf26f3572dce", "text": "The transdisciplinary research project Virtopsy is dedicated to implementing modern imaging techniques into forensic medicine and pathology in order to augment current examination techniques or even to offer alternative methods. Our project relies on three pillars: three-dimensional (3D) surface scanning for the documentation of body surfaces, and both multislice computed tomography (MSCT) and magnetic resonance imaging (MRI) to visualise the internal body. Three-dimensional surface scanning has delivered remarkable results in the past in the 3D documentation of patterned injuries and of objects of forensic interest as well as whole crime scenes. Imaging of the interior of corpses is performed using MSCT and/or MRI. MRI, in addition, is also well suited to the examination of surviving victims of assault, especially choking, and helps visualise internal injuries not seen at external examination of the victim. Apart from the accuracy and three-dimensionality that conventional documentations lack, these techniques allow for the re-examination of the corpse and the crime scene even decades later, after burial of the corpse and liberation of the crime scene. We believe that this virtual, non-invasive or minimally invasive approach will improve forensic medicine in the near future.", "title": "" }, { "docid": "bf784b515ffbf7a9df1217236efe3228", "text": "This paper focuses on the open-center multi-way valve used in loader buckets. To solve the problem of excessive flow force that leads to spool clamping in the reset process, joint simulations adopting MATLAB, AMESim, and FLUENT were carried out. Boundary conditions play a decisive role in the results of computational fluid dynamics (CFD) simulation. However, the boundary conditions of valve ports depend on the hydraulic system’s working condition and are significantly impacted by the port area, which has always been neglected. This paper starts with the port area calculation method, then the port area curves are input into the simulation hydraulic system, obtaining the flow curves of valve port as output, which are then applied as the boundary conditions of the spool valve CFD simulation. Therefore, the steady-state flow force of the spool valve is accurately calculated, and the result verifies the hypothesis that excess flow force causes spool clamping. Based on this, four kinds of structures were introduced in an attempt to improve the situation, and simulating calculation and theoretical analysis were adopted to verify the effects of improvement. Results show that the four structures could reduce the peak value of flow force by 17.8%, 60.6%, 61.6%, and 55.7%, respectively. Of the four, structures II, III, and IV can reduce the peak value of flow force to below reset spring force value, thus successfully avoiding the spool clamping caused by flow force.", "title": "" } ]
scidocsrr
b10df8968e9b15a82ea22a14af93b43e
Self-confidence and sports performance.
[ { "docid": "2a1f1576ab73e190dce400dedf80df36", "text": "No wonder you activities are, reading will be always needed. It is not only to fulfil the duties that you need to finish in deadline time. Reading will encourage your mind and thoughts. Of course, reading will greatly develop your experiences about everything. Reading motivation reconsidered the concept of competence is also a way as one of the collective books that gives many advantages. The advantages are not only for you, but for the other peoples with those meaningful benefits.", "title": "" } ]
[ { "docid": "964deb65d393564f62b9df68fa1b00d9", "text": "Inferring abnormal glucose events such as hyperglycemia and hypoglycemia is crucial for the health of both diabetic patients and non-diabetic people. However, regular blood glucose monitoring can be invasive and inconvenient in everyday life. We present SugarMate, a first smartphone-based blood glucose inference system as a temporary alternative to continuous blood glucose monitors (CGM) when they are uncomfortable or inconvenient to wear. In addition to the records of food, drug and insulin intake, it leverages smartphone sensors to measure physical activities and sleep quality automatically. Provided with the imbalanced and often limited measurements, a challenge of SugarMate is the inference of blood glucose levels at a fine-grained time resolution. We propose Md3RNN, an efficient learning paradigm to make full use of the available blood glucose information. Specifically, the newly designed grouped input layers, together with the adoption of a deep RNN model, offer an opportunity to build blood glucose models for the general public based on limited personal measurements from single-user and grouped-users perspectives. Evaluations on 112 users demonstrate that Md3RNN yields an average accuracy of 82.14%, significantly outperforming previous learning methods those are either shallow, generically structured, or oblivious to grouped behaviors. Also, a user study with the 112 participants shows that SugarMate is acceptable for practical usage.", "title": "" }, { "docid": "534a3885c710bc9a65fa2d66e2937dd4", "text": "This paper examines the concept of culture, and the potential impact of intercultural dynamics of software development. Many of the difficulties confronting today's global software development (GSD) environment have little to do with technical issues; rather, they are \"human\" issues that occur when extensive collaboration and communication among developers with distinct cultural backgrounds are required. Although project managers are reporting that intercultural factors are impacting software practices and artifacts and deserve more detailed study, little analytical research has been conducted in this area other than anecdotal testimonials by software professionals. This paper presents an introductory analysis of the effect that intercultural factors have on global software development. The paper first establishes a framework for intercultural variations by introducing several models commonly used to define culture. Cross-cultural issues that often arise in software development are then identified. The paper continues by explaining the importance of taking intercultural issues seriously and proposes some ideas for future research in the area", "title": "" }, { "docid": "f93c47dae193e00ca9fc052028b6167f", "text": "© International Association for Applied Psychology, 2005. Published by Blackwell Publishing, 9600 Garsington Road, Oxford OX4 2DQ, UK and 350 Main Street, Malden, MA 02148, USA. Blackwell Publishing, Ltd. 
Oxford, UK APPS pplied Psychology: an International Review 0269-994X © Int rnational Association for Applied Psychology, 2005 ri 2005 54 2 riginal Arti le SELF-REGULATION IN THE CLASS OOM OEKA RTS and CORNO Self-Regulation in the Classroom: A Perspective on Assessment and Intervention", "title": "" }, { "docid": "7c5a80b0fef3e0e1fe5ce314b6e5aaf4", "text": "OBJECTIVES\nGiven the large-scale adoption and deployment of mobile phones by health services and frontline health workers (FHW), we aimed to review and synthesise the evidence on the feasibility and effectiveness of mobile-based services for healthcare delivery.\n\n\nMETHODS\nFive databases - MEDLINE, EMBASE, Global Health, Google Scholar and Scopus - were systematically searched for relevant peer-reviewed articles published between 2000 and 2013. Data were extracted and synthesised across three themes as follows: feasibility of use of mobile tools by FHWs, training required for adoption of mobile tools and effectiveness of such interventions.\n\n\nRESULTS\nForty-two studies were included in this review. With adequate training, FHWs were able to use mobile phones to enhance various aspects of their work activities. Training of FHWs to use mobile phones for healthcare delivery ranged from a few hours to about 1 week. Five key thematic areas for the use of mobile phones by FHWs were identified as follows: data collection and reporting, training and decision support, emergency referrals, work planning through alerts and reminders, and improved supervision of and communication between healthcare workers. Findings suggest that mobile based data collection improves promptness of data collection, reduces error rates and improves data completeness. Two methodologically robust studies suggest that regular access to health information via SMS or mobile-based decision-support systems may improve the adherence of the FHWs to treatment algorithms. The evidence on the effectiveness of the other approaches was largely descriptive and inconclusive.\n\n\nCONCLUSIONS\nUse of mHealth strategies by FHWs might offer some promising approaches to improving healthcare delivery; however, the evidence on the effectiveness of such strategies on healthcare outcomes is insufficient.", "title": "" }, { "docid": "c08a06592c7ffa4764824a11be904517", "text": "Work breaks can play an important role in the mental and physical well-being of workers and contribute positively to productivity. In this paper we explore the use of activity-, physiological-, and indoor-location sensing to promote mobility during work-breaks. While the popularity of devices and applications to promote physical activity is growing, prior research highlights important constraints when designing for the workplace. With these constraints in mind, we developed BreakSense, a mobile application that uses a Bluetooth beacon infrastructure, a smartphone and a smartwatch to encourage mobility during breaks with a game-like design. We discuss constraints imposed by design for work and the workplace, and highlight challenges associated with the use of noisy sensors and methods to overcome them. We then describe a short deployment of BreakSense within our lab that examined bound vs. 
unbound augmented breaks and how they affect users' sense of completion and readiness to work.", "title": "" }, { "docid": "14c4e051a23576b33507c453d7e0fe84", "text": "There is a growing interest in subspace learning techniques for face recognition; however, the excessive dimension of the data space often brings the algorithms into the curse of dimensionality dilemma. In this paper, we present a novel approach to solve the supervised dimensionality reduction problem by encoding an image object as a general tensor of second or even higher order. First, we propose a discriminant tensor criterion, whereby multiple interrelated lower dimensional discriminative subspaces are derived for feature extraction. Then, a novel approach, called k-mode optimization, is presented to iteratively learn these subspaces by unfolding the tensor along different tensor directions. We call this algorithm multilinear discriminant analysis (MDA), which has the following characteristics: 1) multiple interrelated subspaces can collaborate to discriminate different classes, 2) for classification problems involving higher order tensors, the MDA algorithm can avoid the curse of dimensionality dilemma and alleviate the small sample size problem, and 3) the computational cost in the learning stage is reduced to a large extent owing to the reduced data dimensions in k-mode optimization. We provide extensive experiments on ORL, CMU PIE, and FERET databases by encoding face images as second- or third-order tensors to demonstrate that the proposed MDA algorithm based on higher order tensors has the potential to outperform the traditional vector-based subspace learning algorithms, especially in the cases with small sample sizes", "title": "" }, { "docid": "9ce2aaa0ad3bfe383099782c46746819", "text": "To achieve high production of rosmarinic acid and derivatives in Escherichia coli which are important phenolic acids found in plants, and display diverse biological activities. The synthesis of rosmarinic acid was achieved by feeding caffeic acid and constructing an artificial pathway for 3,4-dihydroxyphenyllactic acid. Genes encoding the following enzymes: rosmarinic acid synthase from Coleus blumei, 4-coumarate: CoA ligase from Arabidopsis thaliana, 4-hydroxyphenyllactate 3-hydroxylase from E. coli and d-lactate dehydrogenase from Lactobacillus pentosus, were overexpressed in an l-tyrosine over-producing E. coli strain. The yield of rosmarinic acid reached ~130 mg l−1 in the recombinant strain. In addition, a new intermediate, caffeoyl-phenyllactate (~55 mg l−1), was also produced by the engineered E. coli strain. This work not only leads to high yield production of rosmarinic acid and analogues, but also sheds new light on the construction of the pathway of rosmarinic acid in E. coli.", "title": "" }, { "docid": "a6a9376f6205d5c2bc48964d482b6443", "text": "Enrollment in online courses is rapidly increasing and attrition rates remain high. This paper presents a literature review addressing the role of interactivity in student satisfaction and persistence in online learning. Empirical literature was reviewed through the lens of Bandura's social cognitive theory, Anderson's interaction equivalency theorem, and Tinto's social integration theory. Findings suggest that interactivity is an important component of satisfaction and persistence for online learners, and that preferences for types of online interactivity vary according to type of learner. 
Student–instructor interaction was also noted to be a primary variable in online student satisfaction and persistence.", "title": "" }, { "docid": "415d4bdd83d5dc96c8f8417696943c57", "text": "Many hypothesized applications of mobile robotics require multiple robots. Multiple robots substantially increase the complexity of the operator’s task because attention must be continually shifted among robots. One approach to increasing human capacity for control is to remove the independence among robots by allowing them to cooperate. This paper presents an initial experiment using multiagent teamwork proxies to help control robots performing a search and rescue task. .", "title": "" }, { "docid": "44a1c6ebc90e57398ee92a137a5a54f8", "text": "Most of human actions consist of complex temporal compositions of more simple actions. Action recognition tasks usually relies on complex handcrafted structures as features to represent the human action model. Convolutional Neural Nets (CNN) have shown to be a powerful tool that eliminate the need for designing handcrafted features. Usually, the output of the last layer in CNN (a layer before the classification layer -known as fc7) is used as a generic feature for images. In this paper, we show that fc7 features, per se, can not get a good performance for the task of action recognition, when the network is trained only on images. We present a feature structure on top of fc7 features, which can capture the temporal variation in a video. To represent the temporal components, which is needed to capture motion information, we introduced a hierarchical structure. The hierarchical model enables to capture sub-actions from a complex action. At the higher levels of the hierarchy, it represents a coarse capture of action sequence and lower levels represent fine action elements. Furthermore, we introduce a method for extracting key-frames using binary coding of each frame in a video, which helps to improve the performance of our hierarchical model. We experimented our method on several action datasets and show that our method achieves superior results compared to other stateof-the-arts methods.", "title": "" }, { "docid": "00cf565cc59b8d006ed56bf668f76232", "text": "Azadeh Yektaseresht1*, Amin Gholamhosseini2 , Ali Janparvar3 1*Department of Pathobiology; School of Veterinary Medicine; Shiraz University, Shiraz; Iran. *Corresponding author: Azadeh Yektaseresht, Department of Pathobiology, School of Veterinary Medicine, Shiraz University, Shiraz, Iran 2Department of Aquatic Animal Health and Diseases; School of Veterinary Medicine; Shiraz University; Shiraz; Iran.", "title": "" }, { "docid": "6379ddf52f418861e4f95ddc861a58c9", "text": "BACKGROUND\nFluoroquinolones and second-line injectable drugs are the backbone of treatment regimens for multidrug-resistant tuberculosis, and resistance to these drugs defines extensively drug-resistant tuberculosis. We assessed the accuracy of an automated, cartridge-based molecular assay for the detection, directly from sputum specimens, of Mycobacterium tuberculosis with resistance to fluoroquinolones, aminoglycosides, and isoniazid.\n\n\nMETHODS\nWe conducted a prospective diagnostic accuracy study to compare the investigational assay against phenotypic drug-susceptibility testing and DNA sequencing among adults in China and South Korea who had symptoms of tuberculosis. The Xpert MTB/RIF assay and sputum culture were performed. M. 
tuberculosis isolates underwent phenotypic drug-susceptibility testing and DNA sequencing of the genes katG, gyrA, gyrB, and rrs and of the eis and inhA promoter regions.\n\n\nRESULTS\nAmong the 308 participants who were culture-positive for M. tuberculosis, when phenotypic drug-susceptibility testing was used as the reference standard, the sensitivities of the investigational assay for detecting resistance were 83.3% for isoniazid (95% confidence interval [CI], 77.1 to 88.5), 88.4% for ofloxacin (95% CI, 80.2 to 94.1), 87.6% for moxifloxacin at a critical concentration of 0.5 μg per milliliter (95% CI, 79.0 to 93.7), 96.2% for moxifloxacin at a critical concentration of 2.0 μg per milliliter (95% CI, 87.0 to 99.5), 71.4% for kanamycin (95% CI, 56.7 to 83.4), and 70.7% for amikacin (95% CI, 54.5 to 83.9). The specificity of the assay for the detection of phenotypic resistance was 94.3% or greater for all drugs except moxifloxacin at a critical concentration of 2.0 μg per milliliter (specificity, 84.0% [95% CI, 78.9 to 88.3]). When DNA sequencing was used as the reference standard, the sensitivities of the investigational assay for detecting mutations associated with resistance were 98.1% for isoniazid (95% CI, 94.4 to 99.6), 95.8% for fluoroquinolones (95% CI, 89.6 to 98.8), 92.7% for kanamycin (95% CI, 80.1 to 98.5), and 96.8% for amikacin (95% CI, 83.3 to 99.9), and the specificity for all drugs was 99.6% (95% CI, 97.9 to 100) or greater.\n\n\nCONCLUSIONS\nThis investigational assay accurately detected M. tuberculosis mutations associated with resistance to isoniazid, fluoroquinolones, and aminoglycosides and holds promise as a rapid point-of-care test to guide therapeutic decisions for patients with tuberculosis. (Funded by the National Institute of Allergy and Infectious Diseases, National Institutes of Health, and the Ministry of Science and Technology of China; ClinicalTrials.gov number, NCT02251327 .).", "title": "" }, { "docid": "2c5750e6498bd97fdbbbd5b141819a86", "text": "The ultimate goal of work in cognitive architecture is to provide the foundation for a system capable of general intelligent behavior. That is, the goal is to provide the underlying structure that would enable a system to perform the full range of cognitive tasks, employ the full range of problem-solving methods and representations appropriate for the tasks, and learn about all aspects of the tasks and its performance on them. In this article we present Soar, an implemented proposal for such an architecture. We describe its organizational principles, the system as currently implemented, and demonstrations of its capabilities. This research was sponsored by the Defense Advanced Research Projects Agency (DOD), ARPA Order No. 3597, monitored by the Air Force Avionics Laboratory under contracts F33615-81-K-1539 and N00039-83C-0136, and by the Personnel and Training Research Programs, Psychological Sciences Division, Office of Naval Research, under contract number N00014-82C-0067, contract authority identification number NR667-477. Additional partial support was provided by the Sloan Foundation and some computing support was supplied by the SUMEX-AIM facility (NIH grant number RR-00785). 
The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency, the Office of Naval Research, the Sloan Foundation, the National Institute of Health, or the US Government.", "title": "" }, { "docid": "51da5294d196e876ea2535dc13f8c1c5", "text": "Leaders who express an ethical identity are proposed to affect followers’ attitudes and work behaviors. In two multi-source studies, we first test a model suggesting that work engagement acts as a mediator in the relationships between ethical leadership and employee initiative (a form of organizational citizenship behavior) as well as counterproductive work behavior. Next, we focus on whether ethical leadership always forms an authentic expression of an ethical identity, thus in the second study, we add leader Machiavellianism to the model. For Machiavellian leaders, the publicly expressed identity of ethical leadership is inconsistent with the privately held unethical Machiavellian norms. Literature on surface acting suggests people can at least to some extent pick up on such inauthentic displays, making the effects less strong. We thus argue that the positive effects of ethical leader behavior are likely to be suppressed when leaders are highly Machiavellian. Support for this moderated mediation model was found: The effects of ethical leader behavior on engagement are less strong when ethical leaders are high as opposed to low on Machiavellianism.", "title": "" }, { "docid": "a7747c3329f26833e01ade020b45eaeb", "text": "The objective of this paper is to present the role of Ontology Learning Process in supporting an ontology engineer for creating and maintaining ontologies from textual resources. The knowledge structures that interest us are legal domain-specific ontologies. We will use these ontologies to build legal domain ontology for a Lebanese legal knowledge based system. The domain application of this work is the Lebanese criminal system. Ontologies can be learnt from various sources, such as databases, structured and unstructured documents. Here, the focus is on the acquisition of ontologies from unstructured text, provided as input. In this work, the Ontology Learning Process represents a knowledge extraction phase using Natural Language Processing techniques. The resulted ontology is considered as inexpressive ontology. There is a need to reengineer it in order to build a complete, correct and more expressive domain-specific ontology.", "title": "" }, { "docid": "557da3544fd738ecfc3edf812b92720b", "text": "OBJECTIVES\nTo describe the sonographic appearance of the structures of the posterior cranial fossa in fetuses at 11 + 3 to 13 + 6 weeks of pregnancy and to determine whether abnormal findings of the brain and spine can be detected by sonography at this time.\n\n\nMETHODS\nThis was a prospective study including 692 fetuses whose mothers attended Innsbruck Medical University Hospital for first-trimester sonography. In 3% (n = 21) of cases, measurement was prevented by fetal position. Of the remaining 671 cases, in 604 there was either a normal anomaly scan at 20 weeks or delivery of a healthy child and in these cases the transcerebellar diameter (TCD) and the anteroposterior diameter of the cisterna magna (CM), measured at 11 + 3 to 13 + 6 weeks, were analyzed. In 502 fetuses, the anteroposterior diameter of the fourth ventricle (4V) was also measured. 
In 25 fetuses, intra- and interobserver repeatability was calculated.\n\n\nRESULTS\nWe observed a linear correlation between crown-rump length (CRL) and CM (CM = 0.0536 × CRL - 1.4701; R2 = 0.688), TCD (TCD = 0.1482 × CRL - 1.2083; R2 = 0.701) and 4V (4V = 0.0181 × CRL + 0.9186; R2 = 0.118). In three patients with posterior fossa cysts, measurements significantly exceeded the reference values. One fetus with spina bifida had an obliterated CM and the posterior border of the 4V could not be visualized.\n\n\nCONCLUSIONS\nTransabdominal sonographic assessment of the posterior fossa is feasible in the first trimester. Measurements of the 4V, the CM and the TCD performed at this time are reliable. The established reference values assist in detecting fetal anomalies. However, findings must be interpreted carefully, as some supposed malformations might be merely delayed development of brain structures.", "title": "" }, { "docid": "1ecade87386366ab7b1631b8a47c7c32", "text": "We introduce an efficient computational framework for hashing data belonging to multiple modalities into a single representation space where they become mutually comparable. The proposed approach is based on a novel coupled siamese neural network architecture and allows unified treatment of intra- and inter-modality similarity learning. Unlike existing cross-modality similarity learning approaches, our hashing functions are not limited to binarized linear projections and can assume arbitrarily complex forms. We show experimentally that our method significantly outperforms state-of-the-art hashing approaches on multimedia retrieval tasks.", "title": "" }, { "docid": "9b220cb4c3883cb959d1665abefa5406", "text": "Time domain synchronous OFDM (TDS-OFDM) has a higher spectrum and energy efficiency than standard cyclic prefix OFDM (CP-OFDM) by replacing the unknown CP with a known pseudorandom noise (PN) sequence. However, due to mutual interference between the PN sequence and the OFDM data block, TDS-OFDM cannot support high-order modulation schemes such as 256QAM in realistic static channels with large delay spread or high-definition television (HDTV) delivery in fast fading channels. To solve these problems, we propose the idea of using multiple inter-block-interference (IBI)-free regions of small size to realize simultaneous multi-channel reconstruction under the framework of structured compressive sensing (SCS). This is enabled by jointly exploiting the sparsity of wireless channels as well as the characteristic that path delays vary much slower than path gains. In this way, the mutually conditional time-domain channel estimation and frequency-domain data demodulation in TDS-OFDM can be decoupled without the use of iterative interference removal. The Cramér-Rao lower bound (CRLB) of the proposed estimation scheme is also derived. Moreover, the guard interval amplitude in TDS-OFDM can be reduced to improve the energy efficiency, which is infeasible for CP-OFDM. Simulation results demonstrate that the proposed SCS-aided TDS-OFDM scheme has a higher spectrum and energy efficiency than CP-OFDM by more than 10% and 20% respectively in typical applications.", "title": "" }, { "docid": "d8f21e77a60852ea83f4ebf74da3bcd0", "text": "In recent years different lines of evidence have led to the idea that motor actions and movements in both vertebrates and invertebrates are composed of elementary building blocks. 
The entire motor repertoire can be spanned by applying a well-defined set of operations and transformations to these primitives and by combining them in many different ways according to well-defined syntactic rules. Motor and movement primitives and modules might exist at the neural, dynamic and kinematic levels with complicated mapping among the elementary building blocks subserving these different levels of representation. Hence, while considerable progress has been made in recent years in unravelling the nature of these primitives, new experimental, computational and conceptual approaches are needed to further advance our understanding of motor compositionality.", "title": "" } ]
scidocsrr
829069f5fc6e7efce36d80f5aa1cb9ae
Technology Selection for Big Data and Analytical Applications
[ { "docid": "2f778cc324101f5b7d1c9349e181e088", "text": "Business Intelligence (BI) refers to technologies, tools, and practices for collecting, integrating, analyzing, and presenting large volumes of information to enable better decision making. Today's BI architecture typically consists of a data warehouse (or one or more data marts), which consolidates data from several operational databases, and serves a variety of front-end querying, reporting, and analytic tools. The back-end of the architecture is a data integration pipeline for populating the data warehouse by extracting data from distributed and usually heterogeneous operational sources; cleansing, integrating and transforming the data; and loading it into the data warehouse. Since BI systems have been used primarily for off-line, strategic decision making, the traditional data integration pipeline is a oneway, batch process, usually implemented by extract-transform-load (ETL) tools. The design and implementation of the ETL pipeline is largely a labor-intensive activity, and typically consumes a large fraction of the effort in data warehousing projects. Increasingly, as enterprises become more automated, data-driven, and real-time, the BI architecture is evolving to support operational decision making. This imposes additional requirements and tradeoffs, resulting in even more complexity in the design of data integration flows. These include reducing the latency so that near real-time data can be delivered to the data warehouse, extracting information from a wider variety of data sources, extending the rigidly serial ETL pipeline to more general data flows, and considering alternative physical implementations. We describe the requirements for data integration flows in this next generation of operational BI system, the limitations of current technologies, the research challenges in meeting these requirements, and a framework for addressing these challenges. The goal is to facilitate the design and implementation of optimal flows to meet business requirements.", "title": "" }, { "docid": "461ee7b6a61a6d375a3ea268081f80f5", "text": "In this paper, we review the background and state-of-the-art of big data. We first introduce the general background of big data and review related technologies, such as could computing, Internet of Things, data centers, and Hadoop. We then focus on the four phases of the value chain of big data, i.e., data generation, data acquisition, data storage, and data analysis. For each phase, we introduce the general background, discuss the technical challenges, and review the latest advances. We finally examine the several representative applications of big data, including enterprise management, Internet of Things, online social networks, medial applications, collective intelligence, and smart grid. These discussions aim to provide a comprehensive overview and big-picture to readers of this exciting area. This survey is concluded with a discussion of open problems and future directions.", "title": "" } ]
[ { "docid": "4eda5bc4f8fa55ae55c69f4233858fc7", "text": "In this paper, we set out to compare several techniques that can be used in the analysis of imbalanced credit scoring data sets. In a credit scoring context, imbalanced data sets frequently occur as the number of defaulting loans in a portfolio is usually much lower than the number of observations that do not default. As well as using traditional classification techniques such as logistic regression, neural networks and decision trees, this paper will also explore the suitability of gradient boosting, least square support vector machines and random forests for loan default prediction. Five real-world credit scoring data sets are used to build classifiers and test their performance. In our experiments, we progressively increase class imbalance in each of these data sets by randomly undersampling the minority class of defaulters, so as to identify to what extent the predictive power of the respective techniques is adversely affected. The performance criterion chosen to measure this effect is the area under the receiver operating characteristic curve (AUC); Friedman’s statistic and Nemenyi post hoc tests are used to test for significance of AUC differences between techniques. The results from this empirical study indicate that the random forest and gradient boosting classifiers perform very well in a credit scoring context and are able to cope comparatively well with pronounced class imbalances in these data sets. We also found that, when faced with a large class imbalance, the C4.5 decision tree algorithm, quadratic discriminant analysis and k-nearest neighbours perform significantly worse than the best performing classifiers. 2011 Elsevier Ltd.", "title": "" }, { "docid": "67e6ec33b2afb4cf0c363d99869496bf", "text": "This and the following two papers describe event-related potentials (ERPs) evoked by visual stimuli in 98 patients in whom electrodes were placed directly upon the cortical surface to monitor medically intractable seizures. Patients viewed pictures of faces, scrambled faces, letter-strings, number-strings, and animate and inanimate objects. This paper describes ERPs generated in striate and peristriate cortex, evoked by faces, and evoked by sinusoidal gratings, objects and letter-strings. Short-latency ERPs generated in striate and peristriate cortex were sensitive to elementary stimulus features such as luminance. Three types of face-specific ERPs were found: (i) a surface-negative potential with a peak latency of approximately 200 ms (N200) recorded from ventral occipitotemporal cortex, (ii) a lateral surface N200 recorded primarily from the middle temporal gyrus, and (iii) a late positive potential (P350) recorded from posterior ventral occipitotemporal, posterior lateral temporal and anterior ventral temporal cortex. Face-specific N200s were preceded by P150 and followed by P290 and N700 ERPs. N200 reflects initial face-specific processing, while P290, N700 and P350 reflect later face processing at or near N200 sites and in anterior ventral temporal cortex. Face-specific N200 amplitude was not significantly different in males and females, in the normal and abnormal hemisphere, or in the right and left hemisphere. However, cortical patches generating ventral face-specific N200s were larger in the right hemisphere. 
Other cortical patches in the same region of extrastriate cortex generated grating-sensitive N180s and object-specific or letter-string-specific N200s, suggesting that the human ventral object recognition system is segregated into functionally discrete regions.", "title": "" }, { "docid": "30045d9e8153110926a0157c0cdcebf3", "text": "The self-oscillating flyback converter is a popular circuit for cost-sensitive applications due to its simplicity and low component count. It is widely employed in mobile phone chargers and as the stand-by power source in off-line power supplies for data-processing equipment. However, this circuit was almost not explored for supplying power LEDs. This paper presents a self-oscillating flyback driver for supplying Power LEDs directly, with no additional circuit. A simplified mathematical model of the LED was used to characterize the self-oscillating converter for driving the power LEDs. With the proposed converter the LEDs manufacturing tolerances and drifts over temperature presents little to no influence over the LED average current. This is proved by using the LED electrical model on the analysis.", "title": "" }, { "docid": "457ba37bf69b870db2653b851d271b0b", "text": "This paper presents a unified approach to local trajectory planning and control for the autonomous ground vehicle driving along a rough predefined path. In order to cope with the unpredictably changing environment reactively and reason about the global guidance, we develop an efficient sampling-based model predictive local path generation approach to generate a set of kinematically-feasible trajectories aligning with the reference path. A discrete optimization scheme is developed to select the best path based on a specified objective function, then followed by the velocity profile generation. As for the low-level control, to achieve high performance of control, two degree of freedom control architecture is employed by combining the feedforward control with the feedback control. The simulation results demonstrate the capability of the proposed approach to track the curvature-discontinuous reference path robustly, while avoiding collisions with static obstacles.", "title": "" }, { "docid": "6ba5dfa6f37e04d8e679e0393905aafe", "text": "A review’s quality can be evaluated through metric-based automated metareview. But not all the metrics should be weighted the same when it comes to evaluating the overall quality of reviews. For instance, if a review identifies specific problems about the reviewed artifact, then even with a low score for other metrics it should be evaluated as a helpful review. To evaluate the usefulness of a review, we propose a use of decision-tree based classifier models computed from the raw score of metareview metrics, instead of using all the metrics, we can use a subset of them.", "title": "" }, { "docid": "61615273dad80e5a0a95ecbe3002fd72", "text": "Other than serving as building blocks for DNA and RNA, purine metabolites provide a cell with the necessary energy and cofactors to promote cell survival and proliferation. A renewed interest in how purine metabolism may fuel cancer progression has uncovered a new perspective into how a cell regulates purine need. Under cellular conditions of high purine demand, the de novo purine biosynthetic enzymes cluster near mitochondria and microtubules to form dynamic multienzyme complexes referred to as 'purinosomes'. 
In this review, we highlight the purinosome as a novel level of metabolic organization of enzymes in cells, its consequences for regulation of purine metabolism, and the extent that purine metabolism is being targeted for the treatment of cancers.", "title": "" }, { "docid": "471eca6664d0ae8f6cdfb848bc910592", "text": "Taxonomic relation identification aims to recognize the ‘is-a’ relation between two terms. Previous works on identifying taxonomic relations are mostly based on statistical and linguistic approaches, but the accuracy of these approaches is far from satisfactory. In this paper, we propose a novel supervised learning approach for identifying taxonomic relations using term embeddings. For this purpose, we first design a dynamic weighting neural network to learn term embeddings based on not only the hypernym and hyponym terms, but also the contextual information between them. We then apply such embeddings as features to identify taxonomic relations using a supervised method. The experimental results show that our proposed approach significantly outperforms other state-of-the-art methods by 9% to 13% in terms of accuracy for both general and specific domain datasets.", "title": "" }, { "docid": "4ea537e5b8c773c318a81c0ba7a8d789", "text": "Behavioral economics increases the explanatory power of economics by providing it with more realistic psychological foundations. This book consists of representative recent articles in behavioral economics. This chapter is intended to provide an introduction to the approach and methods of behavioral economics, and to some of its major findings, applications, and promising new directions. It also seeks to fill some unavoidable gaps in the chapters’ coverage of topics.", "title": "" }, { "docid": "03caa37e087405e7a1ba5cc768e83228", "text": "Research on visual perception indicates that the human visual system is sensitive to center-surround (C-S) contrast in the bottom-up saliency-driven attention process. Different from the traditional contrast computation of feature difference, models based on reconstruction have emerged to estimate saliency by starting from original images themselves instead of seeking for certain ad hoc features. However, in the existing reconstruction-based methods, the reconstruction parameters of each area are calculated independently without taking their global correlation into account. In this paper, inspired by the powerful feature learning and data reconstruction ability of deep autoencoders, we construct a deep C-S inference network and train it with the data sampled randomly from the entire image to obtain a unified reconstruction pattern for the current image. In this way, global competition in sampling and learning processes can be integrated into the nonlocal reconstruction and saliency estimation of each pixel, which can achieve better detection results than the models with separate consideration on local and global rarity. Moreover, by learning from the current scene, the proposed model can achieve the feature extraction and interaction simultaneously in an adaptive way, which can form a better generalization ability to handle more types of stimuli. Experimental results show that in accordance with different inputs, the network can learn distinct basic features for saliency modeling in its code layer. 
Furthermore, in a comprehensive evaluation on several benchmark data sets, the proposed method can outperform the existing state-of-the-art algorithms.", "title": "" }, { "docid": "26241f7523ce36cb51fd2f4d91b827d0", "text": "We introduce Mix & Match (M&M) – a training framework designed to facilitate rapid and effective learning in RL agents, especially those that would be too slow or too challenging to train otherwise. The key innovation is a procedure that allows us to automatically form a curriculum over agents. Through such a curriculum we can progressively train more complex agents by, effectively, bootstrapping from solutions found by simpler agents. In contradistinction to typical curriculum learning approaches, we do not gradually modify the tasks or environments presented, but instead use a process to gradually alter how the policy is represented internally. We show the broad applicability of our method by demonstrating significant performance gains in three different experimental setups: (1) We train an agent able to control more than 700 actions in a challenging 3D first-person task; using our method to progress through an action-space curriculum we achieve both faster training and better final performance than one obtains using traditional methods. (2) We further show that M&M can be used successfully to progress through a curriculum of architectural variants defining an agents internal state. (3) Finally, we illustrate how a variant of our method can be used to improve agent performance in a multitask setting.", "title": "" }, { "docid": "30d0453033d3951f5b5faf3213eacb89", "text": "Semantic mapping is the incremental process of “mapping” relevant information of the world (i.e., spatial information, temporal events, agents and actions) to a formal description supported by a reasoning engine. Current research focuses on learning the semantic of environments based on their spatial location, geometry and appearance. Many methods to tackle this problem have been proposed, but the lack of a uniform representation, as well as standard benchmarking suites, prevents their direct comparison. In this paper, we propose a standardization in the representation of semantic maps, by defining an easily extensible formalism to be used on top of metric maps of the environments. Based on this, we describe the procedure to build a dataset (based on real sensor data) for benchmarking semantic mapping techniques, also hypothesizing some possible evaluation metrics. Nevertheless, by providing a tool for the construction of a semantic map ground truth, we aim at the contribution of the scientific community in acquiring data for populating the dataset.", "title": "" }, { "docid": "eeb6f968622316d013942b6ea2b8c735", "text": "Using deep learning for different machine learning tasks such as image classification and word embedding has recently gained many attentions. Its appealing performance reported across specific Natural Language Processing (NLP) tasks in comparison with other approaches is the reason for its popularity. Word embedding is the task of mapping words or phrases to a low dimensional numerical vector. In this paper, we use deep learning to embed Wikipedia Concepts and Entities. The English version of Wikipedia contains more than five million pages, which suggest its capability to cover many English Entities, Phrases, and Concepts. Each Wikipedia page is considered as a concept. Some concepts correspond to entities, such as a person’s name, an organization or a place. 
Contrary to word embedding, Wikipedia Concepts Embedding is not ambiguous, so there are different vectors for concepts with similar surface form but different mentions. We proposed several approaches and evaluated their performance based on Concept Analogy and Concept Similarity tasks. The results show that proposed approaches have the performance comparable and in some cases even higher than the state-of-the-art methods.", "title": "" }, { "docid": "979e01842f7572a0ec45dadb6f1c2f86", "text": "With the rapid development of the credit industry, credit scoring models has become a very important issue in the credit industry. Many credit scoring models based on machine learning have been widely used. Such as artificial neural network (ANN), rough set, support vector machine (SVM), and other innovative credit scoring models. However, in practical applications, a large amount of irrelevant and redundant features in the credit data, which leads to higher computational complexity and lower prediction accuracy. So, the face of a large number of credit data, effective feature selection method is necessary. In this paper, we propose a novel credit scoring model, called NCSM, based on feature selection and grid search to optimize random forest algorithm. The model reduces the influence of the irrelevant and redundant features and to get the higher prediction accuracy. In NCSM, the information entropy is regarded as the heuristic to select the optimal feature. Two credit data sets in UCI database are used as experimental data to demonstrate the accuracy of the NCSM. Compared with linear SVM, CART, MLP, H2O RF models, the experimental result shows that NCSM has a superior performance in improving the prediction accuracy.", "title": "" }, { "docid": "156b2c39337f4fe0847b49fa86dc094b", "text": "The paper attempts to describe the space of possible mind designs by first equating all minds to software. Next it proves some properties of the mind design space such as infinitude of minds, size and representation complexity of minds. A survey of mind design taxonomies is followed by a proposal for a new field of investigation devoted to study of minds, intellectology.", "title": "" }, { "docid": "72be75e973b6a843de71667566b44929", "text": "We think that hand pose estimation technologies with a camera should be developed for character conversion systems from sign languages with a not so high performance terminal. Fingernail positions can be used for getting finger information which can’t be obtained from outline information. Therefore, we decided to construct a practical fingernail detection system. The previous fingernail detection method, using distribution density of strong nail-color pixels, was not good at removing some skin areas having gloss like finger side area. Therefore, we should use additional information to remove them. We thought that previous method didn’t use boundary information and this information would be available. Color continuity information is available for getting it. In this paper, therefore, we propose a new fingernail detection method using not only distribution density but also color continuity to improve accuracy. We investigated the relationship between wrist rotation angles and percentages of correct detection. The number of users was three. As a result, we confirmed that our proposed method raised accuracy compared with previous method and could detect only fingernails with at least 85% probability from -90 to 40 degrees and from 40 to 90 degrees. 
Therefore, we concluded that our proposed method was effective.", "title": "" }, { "docid": "06d42f15aa724120bd99f3ab3bed6053", "text": "With today's unprecedented proliferation in smart-devices, the Internet of Things Vision has become more of a reality than ever. With the extreme diversity of applications running on these heterogeneous devices, numerous middle-ware solutions have consequently emerged to address IoT-related challenges. These solutions however, heavily rely on the cloud for better data management, integration, and processing. This might potentially compromise privacy, add latency, and place unbearable traffic load. In this paper, we propose The Hive, an edge-based middleware architecture and protocol, that enables heterogeneous edge devices to dynamically share data and resources for enhanced application performance and privacy. We implement a prototype of the Hive, test it for basic robustness, show its modularity, and evaluate its performance with a real world smart emotion recognition application running on edge devices.", "title": "" }, { "docid": "f52d387faf03421bd97500494addd260", "text": "OBJECTIVE\nTo test the association of behavioral and psychosocial health domains with contextual variables and perceived health in ethnically and economically diverse postpartum women.\n\n\nDESIGN\nMail survey of a stratified random sample.\n\n\nSETTING\nSouthwestern community in Texas.\n\n\nPARTICIPANTS\nNon-Hispanic White, African American, and Hispanic women (N = 168).\n\n\nMETHODS\nA questionnaire was sent to a sample of 600 women. The adjusted response rate was 32.8%. The questionnaire covered behavioral (diet, physical activity, smoking, and alcohol use) and psychosocial (depression symptoms and body image) health, contextual variables (race/ethnicity, income, perceived stress, and social support), and perceived health. Hypotheses were tested using linear and logistic regression.\n\n\nRESULTS\nBody image, dietary behaviors, physical activity behaviors, and depression symptoms were all significantly correlated (Spearman ρ = -.15 to .47). Higher income was associated with increased odds of higher alcohol use (more than 1 drink on 1 to 4 days in a 14-day period). African American ethnicity was correlated with less healthy dietary behaviors and Hispanic ethnicity with less physical activity. In multivariable regressions, perceived stress was associated with less healthy dietary behaviors, increased odds of depression, and decreased odds of higher alcohol use, whereas social support was associated with less body image dissatisfaction, more physical activity, and decreased odds of depression. All behavioral and psychosocial domains were significantly correlated with perceived health, with higher alcohol use related to more favorable perceived health. In regressions analyses, perceived stress was a significant contextual predictor of perceived health.\n\n\nCONCLUSION\nStress and social support had more consistent relationships to behavioral and psychosocial variables than race/ethnicity and income level.", "title": "" }, { "docid": "ac8a0b4ad3f2905bc4e37fa4b0fcbe0a", "text": "In this work we present a NIDS cluster as a scalable solution for realizing high-performance, stateful network intrusion detection on commodity hardware. 
The design addresses three challenges: (i) distributing traffic evenly across an extensible set of analysis nodes in a fashion that minimizes the communication required for coordination, (ii) adapting the NIDS’s operation to support coordinating its low-level analysis rather than just aggregating alerts; and (iii) validating that the cluster produces sound results. Prototypes of our NIDS cluster now operate at the Lawrence Berkeley National Laboratory and the University of California at Berkeley. In both environments the clusters greatly enhance the power of the network security monitoring.", "title": "" }, { "docid": "90dcd18ccaa1bddbcce8f540a655abe7", "text": "Medical organizations find it challenging to adopt cloud-based electronic medical records services, due to the risk of data breaches and the resulting compromise of patient data. Existing authorization models follow a patient centric approach for EHR management where the responsibility of authorizing data access is handled at the patients' end. This however creates a significant overhead for the patient who has to authorize every access of their health record. This is not practical given the multiple personnel involved in providing care and that at times the patient may not be in a state to provide this authorization. Hence there is a need of developing a proper authorization delegation mechanism for safe, secure and easy cloud-based EHR management. We have developed a novel, centralized, attribute based authorization mechanism that uses Attribute Based Encryption (ABE) and allows for delegated secure access of patient records. This mechanism transfers the service management overhead from the patient to the medical organization and allows easy delegation of cloud-based EHR's access authority to the medical providers. In this paper, we describe this novel ABE approach as well as the prototype system that we have created to illustrate it.", "title": "" }, { "docid": "fe38b44457f89bcb63aabe65babccd03", "text": "Single sample face recognition have become an important problem because of the limitations on the availability of gallery images. In many real-world applications such as passport or driver license identification, there is only a single facial image per subject available. The variations between the single gallery face image and the probe face images, captured in unconstrained environments, make the single sample face recognition even more difficult. In this paper, we present a fully automatic face recognition system robust to most common face variations in unconstrained environments. Our proposed system is capable of recognizing faces from non-frontal views and under different illumination conditions using only a single gallery sample for each subject. It normalizes the face images for both in-plane and out-of-plane pose variations using an enhanced technique based on active appearance models (AAMs). We improve the performance of AAM fitting, not only by training it with in-the-wild images and using a powerful optimization technique, but also by initializing the AAM with estimates of the locations of the facial landmarks obtained by a method based on flexible mixture of parts. The proposed initialization technique results in significant improvement of AAM fitting to non-frontal poses and makes the normalization process robust, fast and reliable. Owing to the proper alignment of the face images, made possible by this approach, we can use local feature descriptors, such as Histograms of Oriented Gradients (HOG), for matching. 
The use of HOG features makes the system robust against illumination variations. In order to improve the discriminating information content of the feature vectors, we also extract Gabor features from the normalized face images and fuse them with HOG features using Canonical Correlation Analysis (CCA). Experimental results performed on various databases outperform the state-of-the-art methods and show the effectiveness of our proposed method in normalization and recognition of face images obtained in unconstrained environments.", "title": "" } ]
scidocsrr
3c1981773694d9995f150c8cd93ec9bd
Decoding of Polar Code by Using Deep Feed-Forward Neural Networks
[ { "docid": "545adbeb802c7f8a70390ecf424e7f58", "text": "We describe a successive-cancellation list decoder for polar codes, which is a generalization of the classic successive-cancellation decoder of Arikan. In the proposed list decoder, up to L decoding paths are considered concurrently at each decoding stage. Simulation results show that the resulting performance is very close to that of a maximum-likelihood decoder, even for moderate values of L. Thus it appears that the proposed list decoder bridges the gap between successive-cancellation and maximum-likelihood decoding of polar codes. The specific list-decoding algorithm that achieves this performance doubles the number of decoding paths at each decoding step, and then uses a pruning procedure to discard all but the L “best” paths. In order to implement this algorithm, we introduce a natural pruning criterion that can be easily evaluated. Nevertheless, straightforward implementation still requires O(L · n2) time, which is in stark contrast with the O(n log n) complexity of the original successive-cancellation decoder. We utilize the structure of polar codes to overcome this problem. Specifically, we devise an efficient, numerically stable, implementation taking only O(L · n log n) time and O(L · n) space.", "title": "" } ]
[ { "docid": "d8d1bffe934e0c4ab104df71e69e4b0e", "text": "A validation study of the English version of the 28-item Life Regard Index-Revised was undertaken with a sample of 91 participants from the general population. All previous studies of the Index have examined the Dutch version. The test-retest reliabilities at 8 wk. for the total Index (r =.87), Framework (r =.82), and Fulfillment (r =.81) subscales were very high. Cronbach alphas were .92, .83, and .87, respectively. A significant restriction of range was observed at the high-meaning end of the scale. Factor analysis only weakly supported the theorized two-factor structure. A very high disattenuated correlation between the Framework and Fulfillment subscales was observed (r=.94). The Index appeared to have adequate evidence supporting its concurrent and discriminant validity when compared with measures of hopelessness, spiritual well-being, and other measures of personal meaning. A significant positive association was found between the index and the Marlowe-Crowne Social Desirability Scale (r=.38). The Index was also significantly associated with sex (women scoring higher) and marital status (divorced people scoring lower). Revisions of the English version may address the restriction of range problem by employing a 5-point rating scale, instead of the current 3-point scale, or by adding more discriminating items. Further factor-analytic studies with larger samples are needed before conclusions can be drawn regarding this scale's factor structure.", "title": "" }, { "docid": "69e98d180d82d559372612013b7bc6a2", "text": "Intelligent fault diagnosis of bearings has been a heated research topic in the prognosis and health management of rotary machinery systems, due to the increasing amount of available data collected by sensors. This has given rise to more and more business desire to apply data-driven methods for health monitoring of machines. In recent years, various deep learning algorithms have been adapted to this field, including multi-layer perceptrons, autoencoders, convolutional neural networks, and so on. Among these methods, autoencoder is of particular interest for us because of its simple structure and its ability to learn useful features from data in an unsupervised fashion. Previous studies have exploited the use of autoencoders, such as denoising autoencoder, sparsity aotoencoder, and so on, either with one layer or with several layers stacked together, and they have achieved success to certain extent. In this paper, a bearing fault diagnosis method based on fully-connected winner-take-all autoencoder is proposed. The model explicitly imposes lifetime sparsity on the encoded features by keeping only $k$ % largest activations of each neuron across all samples in a mini-batch. A soft voting method is implemented to aggregate prediction results of signal segments sliced by a sliding window to increase accuracy and stability. A simulated data set is generated by adding white Gaussian noise to original signals to test the diagnosis performance under noisy environment. To evaluate the performance of the proposed method, we compare our methods with some state-of-the-art bearing fault diagnosis methods. 
The experiments result show that, with a simple two-layer network, the proposed method is not only capable of diagnosing with high precision under normal conditions, but also has better robustness to noise than some deeper and more complex models.", "title": "" }, { "docid": "24957794ed251c2e970d787df6d87064", "text": "Glyph as a powerful multivariate visualization technique is used to visualize data through its visual channels. To visualize 3D volumetric dataset, glyphs are usually placed on 2D surface, such as the slicing plane or the feature surface, to avoid occluding each other. However, the 3D spatial structure of some features may be missing. On the other hand, placing large number of glyphs over the entire 3D space results in occlusion and visual clutter that make the visualization ineffective. To avoid the occlusion, we propose a view-dependent interactive 3D lens that removes the occluding glyphs by pulling the glyphs aside through the animation. We provide two space deformation models and two lens shape models to displace the glyphs based on their spatial distributions. After the displacement, the glyphs around the user-interested region are still visible as the context information, and their spatial structures are preserved. Besides, we attenuate the brightness of the glyphs inside the lens based on their depths to provide more depth cue. Furthermore, we developed an interactive glyph visualization system to explore different glyph-based visualization applications. In the system, we provide a few lens utilities that allows users to pick a glyph or a feature and look at it from different view directions. We compare different display/interaction techniques to visualize/manipulate our lens and glyphs.", "title": "" }, { "docid": "a078ace7b4093d10e4998667156c68bf", "text": "In this study we develop a method which improves a credit card fraud detection solution currently being used in a bank. With this solution each transaction is scored and based on these scores the transactions are classified as fraudulent or legitimate. In fraud detection solutions the typical objective is to minimize the wrongly classified number of transactions. However, in reality, wrong classification of each transaction do not have the same effect in that if a card is in the hand of fraudsters its whole available limit is used up. Thus, the misclassification cost should be taken as the available limit of the card. This is what we aim at minimizing in this study. As for the solution method, we suggest a novel combination of the two well known meta-heuristic approaches, namely the genetic algorithms and the scatter search. The method is applied to real data and very successful results are obtained compared to current practice. 2011 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "028cdddc5d61865d0ea288180cef91c0", "text": "This paper investigates the use of Convolutional Neural Networks for classification of painted symbolic road markings. Previous work on road marking recognition is mostly based on either template matching or on classical feature extraction followed by classifier training which is not always effective and based on feature engineering. However, with the rise of deep neural networks and their success in ADAS systems, it is natural to investigate the suitability of CNN for road marking recognition. 
Unlike others, our focus is solely on road marking recognition and not detection; which has been extensively explored and conventionally based on MSER feature extraction of the IPM images. We train five different CNN architectures with variable number of convolution/max-pooling and fully connected layers, and different resolution of road mark patches. We use a publicly available road marking data set and incorporate data augmentation to enhance the size of this data set which is required for training deep nets. The augmented data set is randomly partitioned in 70% and 30% for training and testing. The best CNN network results in an average recognition rate of 99.05% for 10 classes of road markings on the test set.", "title": "" }, { "docid": "8117b4daeac4cca15a4be1ee84b0e65f", "text": "Multi-Attribute Trade-Off Analysis (MATA) provides decision-makers with an analytical tool to identify Pareto Superior options for solving a problem with conflicting objectives or attributes. This technique is ideally suited to electric distribution systems, where decision-makers must choose investments that will ensure reliable service at reasonable cost. This paper describes the application of MATA to an electric distribution system facing dramatic growth, the Abu Dhabi Distribution Company (ADDC) in the United Arab Emirates. ADDC has a range of distribution system design options from which to choose in order to meet this growth. The distribution system design options have different levels of service quality (i.e., reliability) and service cost. Management can use MATA to calculate, summarize and compare the service quality and service cost attributes of the various design options. The Pareto frontier diagrams present management with clear, simple pictures of the trade-offs between service cost and service quality.", "title": "" }, { "docid": "33e88cb3ce4b17d3540b4dfc6d9ef08a", "text": "We propose MAD-GAN, an intuitive generalization to the Generative Adversarial Networks (GANs) and its conditional variants to address the well known problem of mode collapse. First, MAD-GAN is a multi-agent GAN architecture incorporating multiple generators and one discriminator. Second, to enforce that different generators capture diverse high probability modes, the discriminator of MAD-GAN is designed such that along with finding the real and fake samples, it is also required to identify the generator that generated the given fake sample. Intuitively, to succeed in this task, the discriminator must learn to push different generators towards different identifiable modes. We perform extensive experiments on synthetic and real datasets and compare MAD-GAN with different variants of GAN. We show high quality diverse sample generations for challenging tasks such as image-to-image translation and face generation. In addition, we also show that MAD-GAN is able to disentangle different modalities when trained using highly challenging diverse-class dataset (e.g. dataset with images of forests, icebergs, and bedrooms). In the end, we show its efficacy on the unsupervised feature representation task.", "title": "" }, { "docid": "377210d62d3d3cd36f312c1812080a31", "text": "Nowadays air quality data can be easily accumulated by sensors around the world. Analysis on air quality data is very useful for society decision. Among five major air pollutants which are calculated for AQI (Air Quality Index), PM2.5 data is the most concerned by the people. 
PM2.5 data is also cross-impacted with the other factors in the air and which has properties of non-linear nonstationary including high noise level and outlier. Traditional methods cannot solve the problem of PM2.5 data clustering very well because of their inherent characteristics. In this paper, a novel model-based feature extraction method is proposed to address this issue. The EPLS model includes 1) Mode Decomposition, in which EEMD algorithm is applied to the aggregation dataset; 2) Dimension Reduction, which is carried out for a more significant set of vectors; 3) Least Squares Projection, in which all testing data are projected to the obtained vectors. Synthetic dataset and air quality dataset are applied to different clustering methods and similarity measures. Experimental results demonstrate IFully documented templates are available in the elsarticle package on CTAN. ∗Corresponding author at: Department of Computer Science, China University of Geosciences, Wuhan, China. ∗∗Corresponding author at: Department of Computer Science, China University of Geosciences, Wuhan, China. Email addresses: [email protected] (Lizhe Wang), [email protected] (Fangyuan Li) 1Email address: Cyl [email protected]. Preprint submitted to Journal of LTEX Templates December 1, 2016", "title": "" }, { "docid": "05ddc7e7819e5f9ac777f80e578f63ef", "text": "This paper introduces adaptive reinforcement learning (ARL) as the basis for a fully automated trading system application. The system is designed to trade foreign exchange (FX) markets and relies on a layered structure consisting of a machine learning algorithm, a risk management overlay and a dynamic utility optimization layer. An existing machine-learning method called recurrent reinforcement learning (RRL) was chosen as the underlying algorithm for ARL. One of the strengths of our approach is that the dynamic optimization layer makes a fixed choice of model tuning parameters unnecessary. It also allows for a risk-return trade-off to be made by the user within the system. The trading system is able to make consistent gains out-of-sample while avoiding large draw-downs. q 2005 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "2653554c6dec7e9cfa0f5a4080d251e2", "text": "Clustering is a key technique within the KDD process, with k-means, and the more general k-medoids, being well-known incremental partition-based clustering algorithms. A fundamental issue within this class of algorithms is to find an initial set of medians (or medoids) that improves the efficiency of the algorithms (e.g., accelerating its convergence to a solution), at the same time that it improves its effectiveness (e.g., finding more meaningful clusters). Thus, in this article we aim at providing a technique that, given a set of elements, quickly finds a very small number of elements as medoid candidates for this set, allowing to improve both the efficiency and effectiveness of existing clustering algorithms. We target the class of k-medoids algorithms in general, and propose a technique that selects a well-positioned subset of central elements to serve as the initial set of medoids for the clustering process. Our technique leads to a substantially smaller amount of distance calculations, thus improving the algorithm’s efficiency when compared to existing methods, without sacrificing effectiveness. 
A salient feature of our proposed technique is that it is not a new k-medoid clustering algorithm per se, rather, it can be used in conjunction with any existing clustering algorithm that is based on the k-medoid paradigm. Experimental results, using both synthetic and real datasets, confirm the efficiency, effectiveness and scalability of the proposed technique.", "title": "" }, { "docid": "5c95665b5608a40d1dc2499c6fd6d21e", "text": "Camber is one of the most significant defects in the first stages of hot rolling of steel plates. This kind of defect may cause the clogging of finishing mills, but it is also the visible effect of alterations in the process. In this paper we describe the design and implementation of a computer vision system for real-time measurement of camber in a hot rolling mill. Our goal is to provide valuable feedback information to improve AGC operation. As ground truth values are almost impossible to obtain, we have analyzed the relationship among measured camber and other process variables in order to validate our results. The system has proved to be robust, and at the same time there is a strong relationship between known problems in the mill and system readings.", "title": "" }, { "docid": "da7cc08e5fd7275d2f4194f83f1e7365", "text": "Recursive neural networks (RNN) and their recently proposed extension recursive long short term memory networks (RLSTM) are models that compute representations for sentences, by recursively combining word embeddings according to an externally provided parse tree. Both models thus, unlike recurrent networks, explicitly make use of the hierarchical structure of a sentence. In this paper, we demonstrate that RNNs nevertheless suffer from the vanishing gradient and long distance dependency problem, and that RLSTMs greatly improve over RNN’s on these problems. We present an artificial learning task that allows us to quantify the severity of these problems for both models. We further show that a ratio of gradients (at the root node and a focal leaf node) is highly indicative of the success of backpropagation at optimizing the relevant weights low in the tree. This paper thus provides an explanation for existing, superior results of RLSTMs on tasks such as sentiment analysis, and suggests that the benefits of including hierarchical structure and of including LSTM-style gating are complementary.", "title": "" }, { "docid": "9006ecc6ff087d6bdaf90bdb73860133", "text": "Next-generation datacenters (DCs) built on virtualization technologies are pivotal to the effective implementation of the cloud computing paradigm. To deliver the necessary services and quality of service, cloud DCs face major reliability and robustness challenges.", "title": "" }, { "docid": "2ff15076533d1065209e0e62776eaa69", "text": "In less than a decade, Cubesats have evolved from purely educational tools to a standard platform for technology demonstration and scientific instrumentation. The use of COTS (Commercial-Off-The-Shelf) components and the ongoing miniaturization of several technologies have already led to scattered instances of missions with promising scientific value. Furthermore, advantages in terms of development cost and development time with respect to larger satellites, as well as the possibility of launching several dozens of Cubesats with a single rocket launch, have brought forth the potential for radically new mission architectures consisting of very large constellations or clusters of Cubesats. 
These architectures promise to combine the temporal resolution of GEO missions with the spatial resolution of LEO missions, thus breaking a traditional tradeoff in Earth observation mission design. This paper assesses the current capabilities of Cubesats with respect to potential employment in Earth observation missions. A thorough review of Cubesat bus technology capabilities is performed, identifying potential limitations and their implications on 17 different Earth observation payload technologies. These results are matched to an exhaustive review of scientific requirements in the field of Earth observation, assessing the possibilities of Cubesats to cope with the requirements set for each one of 21 measurement categories. Based on this review, several Earth observation measurements are identified that can potentially be compatible with the current state-of-the-art of Cubesat technology although some of them have actually never been addressed by any Cubesat mission. Simultaneously, other measurements are identified which are unlikely to be performed by Cubesats in the next few years due to insuperable constraints. Ultimately, this paper is intended to supply a box of ideas for universities to design future Cubesat missions with high", "title": "" }, { "docid": "197dfd6fdcb600c2dec6aefcbf8dfd1f", "text": "In this paper, We propose a formalized method to improve the performance of Contextual Anomaly Detection (CAD) for detecting stock market manipulation using Big Data techniques. The method aims to improve the CAD algorithm by capturing the expected behaviour of stocks through sentiment analysis of tweets about stocks. The extracted insights are aggregated per day for each stock and transformed to a time series. The time series is used to eliminate false positives from anomalies that are detected by CAD. We present a case study and explore developing sentiment analysis models to improve anomaly detection in the stock market. The experimental results confirm the proposed method is effective in improving CAD through removing irrelevant anomalies by correctly identifying 28% of false positives.", "title": "" }, { "docid": "d6b221435bb3953b087e7aaca1e3be6a", "text": "This paper reports on AnnieWAY, an autonomous vehicle that is capable of driving through urban scenarios and that has successfully entered the finals of the DARPA Urban Challenge 2007 competition. After describing the main challenges imposed and the major hardware components, we outline the underlying software structure and focus on selected algorithms. A recent laser scanner plays the prominent role in the perception of the environment. It measures range and reflectivity for each pixel. While the former is used to provide 3D scene geometry, the latter allows robust lane marker detection. Mission and maneuver selection is conducted via a concurrent hierarchical state machine that specifically ascertains behavior in accordance with California traffic rules. We conclude with a report of the results achieved during the competition.", "title": "" }, { "docid": "c17e6363762e0e9683b51c0704d43fa7", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. 
JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.", "title": "" }, { "docid": "8cff1a60fd0eeb60924333be5641ca83", "text": "Since Wireless Sensor Networks (WSNs) are composed of a set of sensor nodes that limit resource constraints such as energy constraints, energy consumption in WSNs is one of the challenges of these networks. One of the solutions to reduce energy consumption in WSNs is to use clustering. In clustering, cluster members send their data to their Cluster Head (CH), and the CH after collecting the data, sends them to the Base Station (BS). In clustering, choosing CHs is very important; so many methods have proposed to choose the CH. In this study, a hesitant fuzzy method with three input parameters namely, remaining energy, distance to the BS, distance to the center of cluster is proposed for efficient cluster head selection in WSNs. We define different scenarios and simulate them, then investigate the results of simulation.", "title": "" }, { "docid": "f8d06c65acdbec0a41fe49fc4e7aef09", "text": "We present an exhaustive review of research on automatic classification of sounds from musical instruments. Two different but complementary approaches are examined, the perceptual approach and the taxonomic approach. The former is targeted to derive perceptual similarity functions in order to use them for timbre clustering and for searching and retrieving sounds by timbral similarity. The latter is targeted to derive indexes for labeling sounds after cultureor user-biased taxonomies. We review the relevant features that have been used in the two areas and then we present and discuss different techniques for similarity-based clustering of sounds and for classification into pre-defined instrumental categories.", "title": "" }, { "docid": "61e9b6cc62373ae7a882fbad0f2aa97c", "text": "BACKGROUND\nDespite many progresses in the improvement of care status and the management of acute coronary syndrome, cares quality is far from the desirable conditions. Today, due to the great emphasis on resources management, costs control, the effectiveness of patient care, improving quality and responsibility, the good patient care is necessary. Two dimensions are referred for improving the quality: process (standard- based and safe services) and resultant (client satisfaction). The present study, aimed at determining the impact of Synergy Model on nurses' performance and the satisfaction of the patients with acute coronary syndrome.\n\n\nMATERIALS AND METHODS\nIn a quasi- experimental study in a two-group and two-step form, a sample of 22 nurses and 64 patients with acute coronary syndrome in cardiac intensive care units of some university hospitals in 2010-2011 were recruited. Synergy Model was explained and carried out for the studied groups in a workshop and its impact on nurses performance in different areas and patients' satisfaction was examined by using two checklists: examining the nurses' performance quality and examining the patients satisfaction.\n\n\nFINDINGS\nDifferences between the mean scores of the nurses in communicative, supportive, care and educational domains and total performance were statistically significant before and after the intervention (p < 0.001). However, in therapeutic domain, changes were not significant. 
There was a statistically significant difference between the average satisfaction score of the two groups (p < 0.001).\n\n\nCONCLUSIONS\nApplying Synergy Model as a basis for receiving nursing cares was effective in increasing patient satisfaction and in the performance of nurses of cardiac intensive care units.", "title": "" } ]
scidocsrr
1521d0592da89ec6ac685808262e2f09
Three-Dimensional Bipedal Walking Control Based on Divergent Component of Motion
[ { "docid": "7d014f64578943f8ec8e5e27d313e148", "text": "In this paper, we extend the Divergent Component of Motion (DCM, also called `Capture Point') to 3D. We introduce the “Enhanced Centroidal Moment Pivot point” (eCMP) and the “Virtual Repellent Point” (VRP), which allow for the encoding of both direction and magnitude of the external (e.g. leg) forces and the total force (i.e. external forces plus gravity) acting on the robot. Based on eCMP, VRP and DCM, we present a method for real-time planning and control of DCM trajectories in 3D. We address the problem of underactuation and propose methods to guarantee feasibility of the finally commanded forces. The capabilities of the proposed control framework are verified in simulations.", "title": "" } ]
[ { "docid": "4c5700a65040c08534d6d8cbac449073", "text": "The proliferation of social media in the recent past has provided end users a powerful platform to voice their opinions. Businesses (or similar entities) need to identify the polarity of these opinions in order to understand user orientation and thereby make smarter decisions. One such application is in the field of politics, where political entities need to understand public opinion and thus determine their campaigning strategy. Sentiment analysis on social media data has been seen by many as an effective tool to monitor user preferences and inclination. Popular text classification algorithms like Naive Bayes and SVM are Supervised Learning Algorithms which require a training data set to perform Sentiment analysis. The accuracy of these algorithms is contingent upon the quantity as well as the quality (features and contextual relevance) of the labeled training data. Since most applications suffer from lack of training data, they resort to cross domain sentiment analysis which misses out on features relevant to the target data. This, in turn, takes a toll on the overall accuracy of text classification. In this paper, we propose a two stage framework which can be used to create a training data from the mined Twitter data without compromising on features and contextual relevance. Finally, we propose a scalable machine learning model to predict the election results using our two stage framework.", "title": "" }, { "docid": "cb011c7e0d4d5f6d05e28c07ff02e18b", "text": "The legendary wealth in gold of ancient Egypt seems to correspond with an unexpected high number of gold production sites in the Eastern Desert of Egypt and Nubia. This contribution introduces briefly the general geology of these vast regions and discusses the geology of the different varieties of the primary gold occurrences (always related to auriferous quartz mineralization in veins or shear zones) as well as the variable physico-chemical genesis of the gold concentrations. The development of gold mining over time, from Predynastic (ca. 3000 BC) until the end of Arab gold production times (about 1350 AD), including the spectacular Pharaonic periods is outlined, with examples of its remaining artefacts, settlements and mining sites in remote regions of the Eastern Desert of Egypt and Nubia. Finally, some estimates on the scale of gold production are presented. 2002 Published by Elsevier Science Ltd.", "title": "" }, { "docid": "fdbf20917751369d7ffed07ecedc9722", "text": "In order to evaluate the effect of static magnetic field (SMF) on morphological and physiological responses of soybean to water stress, plants were grown under well-watered (WW) and water-stress (WS) conditions. The adverse effects of WS given at different growth stages was found on growth, yield, and various physiological attributes, but WS at the flowering stage severely decreased all of above parameters in soybean. The result indicated that SMF pretreatment to the seeds significantly increased the plant growth attributes, biomass accumulation, and photosynthetic performance under both WW and WS conditions. Chlorophyll a fluorescence transient from SMF-treated plants gave a higher fluorescence yield at J–I–P phase. Photosynthetic pigments, efficiency of PSII, performance index based on absorption of light energy, photosynthesis, and nitrate reductase activity were also higher in plants emerged from SMF-pretreated seeds which resulted in an improved yield of soybean. 
Thus SMF pretreatment mitigated the adverse effects of water stress in soybean.", "title": "" }, { "docid": "272ea79af6af89977a2d58a3014b5067", "text": "The development of cloud computing and virtualization techniques enables mobile devices to overcome the severity of scarce resource constrained by allowing them to offload computation and migrate several computation parts of an application to powerful cloud servers. A mobile device should judiciously determine whether to offload computation as well as what portion of an application should be offloaded to the cloud. This paper considers a mobile computation offloading problem where multiple mobile services in workflows can be invoked to fulfill their complex requirements and makes decision on whether the services of a workflow should be offloaded. Due to the mobility of portable devices, unstable connectivity of mobile networks can impact the offloading decision. To address this issue, we propose a novel offloading system to design robust offloading decisions for mobile services. Our approach considers the dependency relations among component services and aims to optimize execution time and energy consumption of executing mobile services. To this end, we also introduce a mobility model and a trade-off fault-tolerance mechanism for the offloading system. A genetic algorithm (GA) based offloading method is then designed and implemented after carefully modifying parts of a generic GA to match our special needs for the stated problem. Experimental results are promising and show nearoptimal solutions for all of our studied cases with almost linear algorithmic complexity with respect to the problem size.", "title": "" }, { "docid": "75f4945b1631c60608808c4977cede7f", "text": "The validity of nine neoclassical formulas of facial proportions was tested in a group of 153 young adult North American Caucasians. Age-related qualities were investigated in six of the nine canons in 100 six-year-old, 105 twelve-year-old, and 103 eighteen-year-old healthy subjects divided equally between the sexes. The two canons found to be valid most often in young adults were both horizontal proportions (interorbital width equals nose width in 40 percent and nose width equals 1/4 face width in 37 percent). The poorest correspondences are found in the vertical profile proportions, showing equality of no more than two parts of the head and face. Sex does not influence the findings significantly, but age-related differences were observed. Twenty-four variations derived from three vertical profile, four horizontal facial, and two nasoaural neoclassical canons were identified in the group of young adults. For each of the new proportions, the mean absolute and relative differences were calculated. The absolute differences were greater between the facial profile sections (vertical canons) and smaller between the horizontally oriented facial proportions. This study shows a large variability in size of facial features in a normal face. While some of the neoclassical canons may fit a few cases, they do not represent the average facial proportions and their interpretation as a prescription for ideal facial proportions must be tested.", "title": "" }, { "docid": "438a1fd8b90c3cd663aaf122a1e2c35d", "text": "Analysis of social content for understanding people's sentiments towards topics of interest that change over time has become an attractive and challenging research area. Natural Language Processing (NLP) techniques are being adapted to deal with streams of social content. 
New visualization approaches need also to be proposed to express, in a user friendly and reactive manner, individual as well as collective sentiments. In this paper, we present Expression, an integrated framework that allows users to express their opinions through a social platform and to see others' comments. We introduce the Sentiment Card concept: a live representation of a topic of interest. The Sentiment Card is a space that allows users to express their comments and to understand the trend of selected topics of interest expressed by other users. The design of Expression is presented, describing in particular, the sentiment classification module as well as the sentiment card visualization component. Results of the evaluation of our prototype by a usability study are also discussed and considered for motivating future research.", "title": "" }, { "docid": "53a7aff5f5409e3c2187a5d561ff342e", "text": "We present a study focused on constructing models of players for the major commercial title Tomb Raider: Underworld (TRU). Emergent self-organizing maps are trained on high-level playing behavior data obtained from 1365 players that completed the TRU game. The unsupervised learning approach utilized reveals four types of players which are analyzed within the context of the game. The proposed approach automates, in part, the traditional user and play testing procedures followed in the game industry since it can inform game developers, in detail, if the players play the game as intended by the game design. Subsequently, player models can assist the tailoring of game mechanics in real-time for the needs of the player type identified.", "title": "" }, { "docid": "1dbb34265c9b01f69262b3270fa24e97", "text": "Binary content-addressable memory (BiCAM) is a popular high speed search engine in hardware, which provides output typically in one clock cycle. But speed of CAM comes at the cost of various disadvantages, such as high latency, low storage density, and low architectural scalability. In addition, field-programmable gate arrays (FPGAs), which are used in many applications because of its advantages, do not have hard IPs for CAM. Since FPGAs have embedded IPs for random-access memories (RAMs), several RAM-based CAM architectures on FPGAs are available in the literature. However, these architectures are especially targeted for ternary CAMs, not for BiCAMs; thus, the available RAM-based CAMs may not be fully beneficial for BiCAMs in terms of architectural design. Since modern FPGAs are enriched with logical resources, why not to configure them to design BiCAM on FPGA? This letter presents a logic-based high performance BiCAM architecture (LH-CAM) using Xilinx FPGA. The proposed CAM is composed of CAM words and associated comparators. A sample of LH-CAM of size ${64\\times 36}$ is implemented on Xilinx Virtex-6 FPGA. Compared with the latest prior work, the proposed CAM is much simpler in architecture, storage efficient, reduces power consumption by 40.92%, and improves speed by 27.34%.", "title": "" }, { "docid": "8a5ae40bc5921d7614ca34ddf53cebbc", "text": "In natural language processing community, sentiment classification based on insufficient labeled data is a well-known challenging problem. In this paper, a novel semi-supervised learning algorithm called active deep network (ADN) is proposed to address this problem. First, we propose the semi-supervised learning framework of ADN. 
ADN is constructed from restricted Boltzmann machines (RBM) with unsupervised learning, and is then fine-tuned by gradient-descent based supervised learning with an exponential loss function. Second, in the semi-supervised learning framework, we apply active learning to identify reviews that should be labeled as training data, and then use the selected labeled reviews and all unlabeled reviews to train the ADN architecture. Moreover, we combine information density with ADN and propose the information ADN (IADN) method, which applies the information density of all unlabeled reviews when choosing the manually labeled reviews. Experiments on five sentiment classification datasets show that ADN and IADN outperform classical semi-supervised learning algorithms and deep learning techniques applied to sentiment classification.", "title": "" }, { "docid": "e790824ac08ceb82000c3cda024dc329", "text": "Cellulolytic bacteria were isolated from manure wastes (cow dung) and degrading soil (municipal solid waste). Nine bacterial strains were screened for cellulolytic activity. Six strains showed clear zone formation on Berg’s medium. CMC (carboxymethyl cellulose) and cellulose were used as substrates for cellulase activities. Among the six strains, cd3 and mw7 were examined by quantitative measurement using the dinitrosalicylic acid (DNS) method. Maximum enzyme-producing activity was 1.702 mg/ml and 1.677 mg/ml for cd3 and mw7 on 1% CMC substrate, whereas it was 0.563 mg/ml and 0.415 mg/ml on 1% cellulose substrate, respectively. Cellulase production was also studied by optimizing kinetic growth parameters such as carbon source (including various concentrations of cellulose), incubation time, temperature, and pH. Starch substrate showed 0.909 mg/ml and 0.851 mg/ml in enzyme-producing activity. The optimum substrate concentration of cellulose was 0.25% for cd3 but 1% for mw7, with reducing sugar formation of 0.628 mg/ml and 0.669 mg/ml. The optimum incubation parameters were 84 hours, 40°C and pH 6 for cd3, and 60 hours, 40°C and pH 6 for mw7.", "title": "" }, { "docid": "e2060b183968f81342df4f636a141a3b", "text": "This paper presents automatic parallel parking for a passenger vehicle, with highlights on a path-planning method and on experimental results. The path-planning method consists of two parts. First, the kinematic model of the vehicle, with corresponding geometry, is used to create a path to park the vehicle in one or more maneuvers if the spot is very narrow. This path is constituted of circle arcs. Second, this path is transformed into a continuous-curvature path using clothoid curves. To execute the generated path, control inputs for steering angle and longitudinal velocity depending on the traveled distance are generated. Therefore, the traveled distance and the vehicle pose during a parking maneuver are estimated. Finally, the parking performance is tested on a prototype vehicle.", "title": "" }, { "docid": "db2b1fe1cc8e6c267a058a747f8dab03", "text": "Conventional program analyses have made great strides by leveraging logical reasoning. However, they cannot handle uncertain knowledge, and they lack the ability to learn and adapt. This in turn hinders the accuracy, scalability, and usability of program analysis tools in practice.
We seek to address these limitations by proposing a methodology and framework for incorporating probabilistic reasoning directly into existing program analyses that are based on logical reasoning. We demonstrate that the combined approach can benefit a number of important applications of program analysis and thereby facilitate more widespread adoption of this technology.", "title": "" }, { "docid": "7aa1df89f94fe1f653f1680fbf33e838", "text": "Several modes of vaccine delivery have been developed in the last 25 years, which induce strong immune responses in pre-clinical models and in human clinical trials. Some modes of delivery include, adjuvants (aluminum hydroxide, Ribi formulation, QS21), liposomes, nanoparticles, virus like particles, immunostimulatory complexes (ISCOMs), dendrimers, viral vectors, DNA delivery via gene gun, electroporation or Biojector 2000, cell penetrating peptides, dendritic cell receptor targeting, toll-like receptors, chemokine receptors and bacterial toxins. There is an enormous amount of information and vaccine delivery methods available for guiding vaccine and immunotherapeutics development against diseases.", "title": "" }, { "docid": "14d77d118aad5ee75b82331dc3db8afd", "text": "Graphical passwords are an alternative to alphanumeric passwords in which users click on images to authenticate themselves rather than type alphanumeric strings. We have developed one such system, called PassPoints, and evaluated it with human users. The results of the evaluation were promising with respect to rmemorability of the graphical password. In this study we expand our human factors testing by studying two issues: the effect of tolerance, or margin of error, in clicking on the password points and the effect of the image used in the password system. In our tolerance study, results show that accurate memory for the password is strongly reduced when using a small tolerance (10 x 10 pixels) around the user's password points. This may occur because users fail to encode the password points in memory in the precise manner that is necessary to remember the password over a lapse of time. In our image study we compared user performance on four everyday images. The results indicate that there were few significant differences in performance of the images. This preliminary result suggests that many images may support memorability in graphical password systems.", "title": "" }, { "docid": "797c9e6319a375a179e9ab182ef23e8d", "text": "We describe an offset-canceling low-noise lock-in architecture for capacitive sensing. We take advantage of the properties of modulation and demodulation to separate the signal from the dc offset and use nonlinear multiplicative feedback to cancel the offset. The feedback also attenuates out-of-band noise and further enhances the power of a lock-in technique. Experimentally, in a 1.5m BiCMOS chip, a fabrication dc offset of 2 mV and an intentional offset of 100 mV were attenuated to 9 V. Our offsetcanceling technique could also be useful for practical multipliers that need tolerance to fabrication errors. We present a detailed theoretical noise analysis of our architecture that is confirmed by experiment. As an example application, we demonstrate the use of our architecture in a simple capacitive surface-microelectromechanical-system vibration sensor where the performance is limited by mechanical Brownian noise. 
However, we show that our electronics limits us to 30 g Hz, which is at least six times lower than the noise floor of commercial state-of-the-art surface-micromachined inertial sensors. Our architecture could, thus, be useful in high-performance inertial sensors with low mechanical noise. In a 1–100-Hz bandwidth, our electronic detection threshold corresponds to a one-part-per-eight-million change in capacitance.", "title": "" }, { "docid": "a8bd9e8470ad414c38f5616fb14d433d", "text": "Detecting hidden communities from observed interactions is a classical problem. Theoretical analysis of community detection has so far been mostly limited to models with non-overlapping communities such as the stochastic block model. In this paper, we provide guaranteed community detection for a family of probabilistic network models with overlapping communities, termed as the mixed membership Dirichlet model, first introduced in Airoldi et al. (2008). This model allows for nodes to have fractional memberships in multiple communities and assumes that the community memberships are drawn from a Dirichlet distribution. Moreover, it contains the stochastic block model as a special case. We propose a unified approach to learning communities in these models via a tensor spectral decomposition approach. Our estimator uses low-order moment tensor of the observed network, consisting of 3-star counts. Our learning method is based on simple linear algebraic operations such as singular value decomposition and tensor power iterations. We provide guaranteed recovery of community memberships and model parameters, and present a careful finite sample analysis of our learning method. Additionally, our results match the best known scaling requirements for the special case of the (homogeneous) stochastic block model.", "title": "" }, { "docid": "b8bb4d195738e815430d146ac110df49", "text": "Software testing is an effective way to find software errors. Generating a good test suite is the key. A program invariant is a property that is true at a particular program point or points. The property could reflect the program’s execution over a test suite. Based on this point, we integrate the random test case generation technique and the invariant extraction technique, achieving automatic test case generation and selection. With the same invariants, compared with the traditional random test case generation technique, the experimental results show that the approach this paper describes can generate a smaller test suite. Keywords-software testing; random testing; test case; program invariant", "title": "" }, { "docid": "a6e95047159a203e00487a12f1dc85b7", "text": "During life, many personal changes occur. These include changing house, school, work, and even friends and partners. However, the daily experience shows clearly that, in some situations, subjects are unable to change even if they want to. 
The recent advances in psychology and neuroscience are now providing a better view of personal change, the change affecting our assumptive world: (a) the focus of personal change is reducing the distance between self and reality (conflict); (b) this reduction is achieved through (1) an intense focus on the particular experience creating the conflict or (2) an internal or external reorganization of this experience; (c) personal change requires a progression through a series of different stages that however happen in discontinuous and non-linear ways; and (d) clinical psychology is often used to facilitate personal change when subjects are unable to move forward. Starting from these premises, the aim of this paper is to review the potential of virtuality for enhancing the processes of personal and clinical change. First, the paper focuses on the two leading virtual technologies - augmented reality (AR) and virtual reality (VR) - exploring their current uses in behavioral health and the outcomes of the 28 available systematic reviews and meta-analyses. Then the paper discusses the added value provided by VR and AR in transforming our external experience by focusing on the high level of personal efficacy and self-reflectiveness generated by their sense of presence and emotional engagement. Finally, it outlines the potential future use of virtuality for transforming our inner experience by structuring, altering, and/or replacing our bodily self-consciousness. The final outcome may be a new generation of transformative experiences that provide knowledge that is epistemically inaccessible to the individual until he or she has that experience, while at the same time transforming the individual's worldview.", "title": "" }, { "docid": "ed23845ded235d204914bd1140f034c3", "text": "We propose a general framework to learn deep generative models via Variational Gradient Flow (VGrow) on probability spaces. The evolving distribution that asymptotically converges to the target distribution is governed by a vector field, which is the negative gradient of the first variation of the f -divergence between them. We prove that the evolving distribution coincides with the pushforward distribution through the infinitesimal time composition of residual maps that are perturbations of the identity map along the vector field. The vector field depends on the density ratio of the pushforward distribution and the target distribution, which can be consistently learned from a binary classification problem. Connections of our proposed VGrow method with other popular methods, such as VAE, GAN and flow-based methods, have been established in this framework, gaining new insights of deep generative learning. We also evaluated several commonly used divergences, including KullbackLeibler, Jensen-Shannon, Jeffrey divergences as well as our newly discovered “logD” divergence which serves as the objective function of the logD-trick GAN. Experimental results on benchmark datasets demonstrate that VGrow can generate high-fidelity images in a stable and efficient manner, achieving competitive performance with stateof-the-art GANs. ∗Yuling Jiao ([email protected]) †Can Yang ([email protected]) 1 ar X iv :1 90 1. 08 46 9v 2 [ cs .L G ] 7 F eb 2 01 9", "title": "" } ]
scidocsrr
6d6c312f60e1d5718a0ecd55a741afea
Comparison of low- and high-level visual features for audio-visual continuous automatic speech recognition
[ { "docid": "eebc97e1de5545b6f33b1d483cde19c1", "text": "This paper describes a speech recognition system that uses both acoustic and visual speech information to improve the recognition performance in noisy environments. The system consists of three components: 1) a visual module; 2) an acoustic module; and 3) a sensor fusion module. The visual module locates and tracks the lip movements of a given speaker and extracts relevant speech features. This task is performed with an appearance-based lip model that is learned from example images. Visual speech features are represented by contour information of the lips and grey-level information of the mouth area. The acoustic module extracts noise-robust features from the audio signal. Finally, the sensor fusion module is responsible for the joint temporal modeling of the acoustic and visual feature streams and is realized using multistream hidden Markov models (HMMs). The multistream method allows the definition of different temporal topologies and levels of stream integration and hence enables the modeling of temporal dependencies more accurately than traditional approaches. We present two different methods to learn the asynchrony between the two modalities and how to incorporate them in the multistream models. The superior performance for the proposed system is demonstrated on a large multispeaker database of continuously spoken digits. On a recognition task at 15 dB acoustic signal-to-noise ratio (SNR), acoustic perceptual linear prediction (PLP) features lead to 56% error rate, noise robust RASTA-PLP (Relative Spectra) acoustic features to 7.2% error rate and combined noise robust acoustic features and visual features to 2.5% error rate.", "title": "" }, { "docid": "34627572a319dfdfcea7277d2650d0f5", "text": "Visual speech information from the speaker’s mouth region has been successfully shown to improve noise robustness of automatic speech recognizers, thus promising to extend their usability in the human computer interface. In this paper, we review the main components of audio-visual automatic speech recognition and present novel contributions in two main areas: First, the visual front end design, based on a cascade of linear image transforms of an appropriate video region-of-interest, and subsequently, audio-visual speech integration. On the latter topic, we discuss new work on feature and decision fusion combination, the modeling of audio-visual speech asynchrony, and incorporating modality reliability estimates to the bimodal recognition process. We also briefly touch upon the issue of audio-visual adaptation. We apply our algorithms to three multi-subject bimodal databases, ranging from smallto largevocabulary recognition tasks, recorded in both visually controlled and challenging environments. Our experiments demonstrate that the visual modality improves automatic speech recognition over all conditions and data considered, though less so for visually challenging environments and large vocabulary tasks.", "title": "" } ]
[ { "docid": "e1958dc823feee7f88ab5bf256655bee", "text": "We describe an approach for testing a software system for possible securi ty flaws. Traditionally, security testing is done using penetration analysis and formal methods. Based on the observation that most security flaws are triggered due to a flawed interaction with the envi ronment, we view the security testing problem as the problem of testing for the fault-tolerance prop erties of a software system. We consider each environment perturbation as a fault and the resulting security ompromise a failure in the toleration of such faults. Our approach is based on the well known techn ique of fault-injection. Environment faults are injected into the system under test and system beha vior observed. The failure to tolerate faults is an indicator of a potential security flaw in the syst em. An Environment-Application Interaction (EAI) fault model is proposed. EAI allows us to decide what f aults to inject. Based on EAI, we present a security-flaw classification scheme. This scheme was used to classif y 142 security flaws in a vulnerability database. This classification revealed that 91% of the security flaws in the database are covered by the EAI model.", "title": "" }, { "docid": "39debcb0aa41eec73ff63a4e774f36fd", "text": "Automatically segmenting unstructured text strings into structured records is necessary for importing the information contained in legacy sources and text collections into a data warehouse for subsequent querying, analysis, mining and integration. In this paper, we mine tables present in data warehouses and relational databases to develop an automatic segmentation system. Thus, we overcome limitations of existing supervised text segmentation approaches, which require comprehensive manually labeled training data. Our segmentation system is robust, accurate, and efficient, and requires no additional manual effort. Thorough evaluation on real datasets demonstrates the robustness and accuracy of our system, with segmentation accuracy exceeding state of the art supervised approaches.", "title": "" }, { "docid": "c1ca3f495400a898da846bdf20d23833", "text": "It is very useful to integrate human knowledge and experience into traditional neural networks for faster learning speed, fewer training samples and better interpretability. However, due to the obscured and indescribable black box model of neural networks, it is very difficult to design its architecture, interpret its features and predict its performance. Inspired by human visual cognition process, we propose a knowledge-guided semantic computing network which includes two modules: a knowledge-guided semantic tree and a data-driven neural network. The semantic tree is pre-defined to describe the spatial structural relations of different semantics, which just corresponds to the tree-like description of objects based on human knowledge. The object recognition process through the semantic tree only needs simple forward computing without training. Besides, to enhance the recognition ability of the semantic tree in aspects of the diversity, randomicity and variability, we use the traditional neural network to aid the semantic tree to learn some indescribable features. Only in this case, the training process is needed. The experimental results on MNIST and GTSRB datasets show that compared with the traditional data-driven network, our proposed semantic computing network can achieve better performance with fewer training samples and lower computational complexity. 
Especially, Our model also has better adversarial robustness than traditional neural network with the help of human knowledge.", "title": "" }, { "docid": "a5f80f6f36f8db1673ccc57de9044b5e", "text": "Nowadays, many modern applications, e.g. autonomous system, and cloud data services need to capture and process a big amount of raw data at runtime that ultimately necessitates a high-performance computing model. Deep Neural Network (DNN) has already revealed its learning capabilities in runtime data processing for modern applications. However, DNNs are becoming more deep sophisticated models for gaining higher accuracy which require a remarkable computing capacity. Considering high-performance cloud infrastructure as a supplier of required computational throughput is often not feasible. Instead, we intend to find a near-sensor processing solution which will lower the need for network bandwidth and increase privacy and power efficiency, as well as guaranteeing worst-case response-times. Toward this goal, we introduce ADONN framework, which aims to automatically design a highly robust DNN architecture for embedded devices as the closest processing unit to the sensors. ADONN adroitly searches the design space to find improved neural architectures. Our proposed framework takes advantage of a multi-objective evolutionary approach, which exploits a pruned design space inspired by a dense architecture. Unlike recent works that mainly have tried to generate highly accurate networks, ADONN also considers the network size factor as the second objective to build a highly optimized network fitting with limited computational resource budgets while delivers comparable accuracy level. In comparison with the best result on CIFAR-10 dataset, a generated network by ADONN presents up to 26.4 compression rate while loses only 4% accuracy. In addition, ADONN maps the generated DNN on the commodity programmable devices including ARM Processor, High-Performance CPU, GPU, and FPGA.", "title": "" }, { "docid": "6be44677f42b5a6aaaea352e11024cfa", "text": "In this paper, we intend to discuss if and in what sense semiosis (meaning process, cf. C.S. Peirce) can be regarded as an “emergent” process in semiotic systems. It is not our problem here to answer when or how semiosis emerged in nature. As a prerequisite for the very formulation of these problems, we are rather interested in discussing the conditions which should be fulfilled for semiosis to be characterized as an emergent process. The first step in this work is to summarize a systematic analysis of the variety of emergence theories and concepts, elaborated by Achim Stephan. Along the summary of this analysis, we pose fundamental questions that have to be answered in order to ascribe a precise meaning to the term “emergence” in the context of an understanding of semiosis. After discussing a model for explaining emergence based on Salthe’s hierarchical structuralism, which considers three levels at a time in a semiotic system, we present some tentative answers to those questions.", "title": "" }, { "docid": "ac0b562db18fac38663b210f599c2deb", "text": "This paper proposes a fast and stable image-based modeling method which generates 3D models with high-quality face textures in a semi-automatic way. The modeler guides untrained users to quickly obtain 3D model data via several steps of simple user interface operations using predefined 3D primitives. 
The proposed method contains an iterative non-linear error minimization technique in the model estimation step with an error function based on finite line segments instead of infinite lines. The error corresponds to the difference between the observed structure and the predicted structure from current model parameters. Experimental results on real images validate the robustness and the accuracy of the algorithm. 2005 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "d86d7f10c386969e0aef2c9a5eaf2845", "text": "E-government services require certain service levels to be achieved as they replace traditional channels. E-government also increases the dependence of government agencies on information technology based services. High quality services entail high performance, availability and scalability among other service characteristics. Strict measures are required to help e-governments evaluate the service level and assess the quality of the service. In this paper we introduce the IT Infrastructure Library (ITIL) framework - a set of best practices to achieve quality service and overcome difficulties associated with the growth of IT systems [17][21]. We conducted an in depth assessment and gap analysis for both of the service support and service delivery processes [16], in a government institution, which allowed us to assess its maturity level within the context of ITIL. We then proposed and modeled these processes in accordance to ITIL best practices and based upon agency aspirations and environment constraints.", "title": "" }, { "docid": "4567c899b8c06394397c8fc7cbd8c347", "text": "Many real-world systems, such as social networks, rely on mining efficiently large graphs, with hundreds of millions of vertices and edges. This volume of information requires partitioning the graph across multiple nodes in a distributed system. This has a deep effect on performance, as traversing edges cut between partitions incurs a significant performance penalty due to the cost of communication. Thus, several systems in the literature have attempted to improve computational performance by enhancing graph partitioning, but they do not support another characteristic of realworld graphs: graphs are inherently dynamic, their topology evolves continuously, and subsequently the optimum partitioning also changes over time. In this work, we present the first system that dynamically repartitions massive graphs to adapt to structural changes. The system optimises graph partitioning to prevent performance degradation without using data replication. The system adopts an iterative vertex migration algorithm that relies on local information only, making complex coordination unnecessary. We show how the improvement in graph partitioning reduces execution time by over 50%, while adapting the partitioning to a large number of changes to the graph in three real-world scenarios.", "title": "" }, { "docid": "1c4e4f0ffeae8b03746ca7de184989ef", "text": "Applications written in low-level languages without type or memory safety are prone to memory corruption. Attackers gain code execution capabilities through memory corruption despite all currently deployed defenses. Control-Flow Integrity (CFI) is a promising security property that restricts indirect control-flow transfers to a static set of well-known locations. We present Lockdown, a modular, fine-grained CFI policy that protects binary-only applications and libraries without requiring sourcecode. 
Lockdown adaptively discovers the control-flow graph of a running process based on the executed code. The sandbox component of Lockdown restricts interactions between different shared objects to imported and exported functions by enforcing fine-grained CFI checks using information from a trusted dynamic loader. A shadow stack enforces precise integrity for function returns. Our prototype implementation shows that Lockdown results in low performance overhead and a security analysis discusses any remaining gadgets.", "title": "" }, { "docid": "a53c16d1fb3882441977d353665cffa1", "text": "[1] The time evolution of rip currents in the nearshore is studied by numerical experiments. The generation of rip currents is due to waves propagating and breaking over alongshore variable topography. Our main focus is to examine the significance of wave-current interaction as it affects the subsequent development of the currents, in particular when the currents are weak compared to the wave speed. We describe the dynamics of currents using the shallow water equations with linear bottom friction and wave forcing parameterized utilizing the radiation stress concept. The slow variations of the wave field, in terms of local wave number, frequency, and energy (wave amplitude), are described using the ray theory with the inclusion of energy dissipation due to breaking. The results show that the offshore directed rip currents interact with the incident waves to produce a negative feedback on the wave forcing, hence to reduce the strength and offshore extent of the currents. In particular, this feedback effect supersedes the bottom friction such that the circulation patterns become less sensitive to a change of the bottom friction parameterization. The two physical processes arising from refraction by currents, bending of wave rays and changes of wave energy, are both found to be important. The onset of instabilities of circulations occurs at the nearshore region where rips are ‘‘fed,’’ rather than offshore at rip heads as predicted with no wave-current interaction. The unsteady flows are characterized by vortex shedding, pairing, and offshore migration. Instabilities are sensitive to the angle of wave incidence and the spacing of rip channels.", "title": "" }, { "docid": "5a898d79de6cedebae4ff7acc4fabc34", "text": "Education-job mismatches are reported to have serious effects on wages and other labour market outcomes. Such results are often cited in support of assignment theory, but can also be explained by institutional and human capital models. To test the assignment explanation, we examine the relation between educational mismatches and skill mismatches. In line with earlier research, educational mismatches affect wages strongly. Contrary to the assumptions of assignment theory, this effect is not explained by skill mismatches. Conversely, skill mismatches are much better predictors of job satisfaction and on-the-job search than are educational mismatches.", "title": "" }, { "docid": "3caa8fc1ea07fcf8442705c3b0f775c5", "text": "Recent research in the field of computational social science have shown how data resulting from the widespread adoption and use of social media channels such as twitter can be used to predict outcomes such as movie revenues, election winners, localized moods, and epidemic outbreaks. 
Underlying assumptions for this research stream on predictive analytics are that social media actions such as tweeting, liking, commenting and rating are proxies for user/consumer's attention to a particular object/product and that the shared digital artefact that is persistent can create social influence. In this paper, we demonstrate how social media data from Twitter can be used to predict the sales of iPhones. Based on a conceptual model of social data consisting of social graph (actors, actions, activities, and artefacts) and social text (topics, keywords, pronouns, and sentiments), we develop and evaluate a linear regression model that transforms iPhone tweets into a prediction of the quarterly iPhone sales with an average error close to the established prediction models from investment banks. This strong correlation between iPhone tweets and iPhone sales becomes marginally stronger after incorporating sentiments of tweets. We discuss the findings and conclude with implications for predictive analytics with big social data.", "title": "" }, { "docid": "e66f2052a2e9a7e870f8c1b4f2bfb56d", "text": "We present two new algorithms for solving the problem of discovering regularities in the form of association rules. Earlier work gave an algorithm for finding such rules; the fast discovery of association rules here builds on our ideas in [33, 35]. We compare the new algorithms with previous approaches, with gains of over an order of magnitude.", "title": "" }, { "docid": "8588a3317d4b594d8e19cb005c3d35c7", "text": "Histograms of Oriented Gradients (HOG) is one of the well-known features for object recognition. HOG features are calculated by taking orientation histograms of edge intensity in a local region. N. Dalal et al. proposed an object detection algorithm in which HOG features were extracted from all locations of a dense grid on an image region and the combined features are classified by using a linear Support Vector Machine (SVM). In this paper, we employ HOG features extracted from all locations of a grid on the image as candidates of the feature vectors. Principal Component Analysis (PCA) is applied to these HOG feature vectors to obtain the score (PCA-HOG) vectors. Then a proper subset of PCA-HOG feature vectors is selected by using the Stepwise Forward Selection (SFS) algorithm or the Stepwise Backward Selection (SBS) algorithm to improve the generalization performance. The selected PCA-HOG feature vectors are used as an input of a linear SVM to classify the given input into pedestrian/non-pedestrian. The improvement of the recognition rates is confirmed through experiments using the MIT pedestrian dataset.", "title": "" }, { "docid": "5218f1ddf65b9bc1db335bb98d7e71b4", "text": "The popular biometric used to authenticate a person is the fingerprint, which is unique and permanent throughout a person’s life. Minutia matching is widely used for fingerprint recognition, and minutiae can be classified as ridge endings and ridge bifurcations. In this paper we propose Fingerprint Recognition using the Minutia Score Matching method (FRMSM). For fingerprint thinning, the Block Filter is used, which scans the image at the boundary to preserve the quality of the image and extract the minutiae from the thinned image. The false matching ratio is better compared to the existing algorithm.
Keywords: Fingerprint Recognition, Binarization, Block Filter Method, Matching score and Minutia.", "title": "" }, { "docid": "6fcaea5228ea964854ab92cca69859d7", "text": "The well-characterized cellular and structural components of the kidney show distinct regional compositions and distribution of lipids. In order to more fully analyze the renal lipidome we developed a matrix-assisted laser desorption/ionization mass spectrometry approach for imaging that may be used to pinpoint sites of changes from normal in pathological conditions. This was accomplished by implanting sagittal cryostat rat kidney sections with a stable, quantifiable and reproducible uniform layer of silver using a magnetron sputtering source to form silver nanoparticles. Thirty-eight lipid species including seven ceramides, eight diacylglycerols, 22 triacylglycerols, and cholesterol were detected and imaged in positive ion mode. Thirty-six lipid species consisting of seven sphingomyelins, 10 phosphatidylethanolamines, one phosphatidylglycerol, seven phosphatidylinositols, and 11 sulfatides were imaged in negative ion mode for a total of seventy-four high-resolution lipidome maps of the normal kidney. Thus, our approach is a powerful tool not only for studying structural changes in animal models of disease, but also for diagnosing and tracking stages of disease in human kidney tissue biopsies.", "title": "" }, { "docid": "ef8b5fde7d4a941b7f16fb92218f0527", "text": "Network security is of primary concern nowadays for large organizations. The intrusion detection systems (IDS) are becoming indispensable for effective protection against attacks that are constantly changing in magnitude and complexity. With data integrity, confidentiality and availability, they must be reliable, easy to manage and with low maintenance cost. Various modifications are being applied to IDS regularly to detect new attacks and handle them. This paper proposes a fuzzy genetic algorithm (FGA) for intrusion detection. The FGA system is a fuzzy classifier, whose knowledge base is modeled as fuzzy \"if-then\" rules and improved by a genetic algorithm. The method is tested on the benchmark KDD'99 intrusion dataset and compared with other existing techniques available in the literature. The results are encouraging and demonstrate the benefits of the proposed approach. Keywords: genetic algorithm, fuzzy logic, classification, intrusion detection, DARPA data set", "title": "" }, { "docid": "0b5f0cd5b8d49d57324a0199b4925490", "text": "Deep brain stimulation (DBS) has an increasing role in the treatment of idiopathic Parkinson's disease. Although the subthalamic nucleus (STN) is the commonly chosen target, a number of groups have reported that the most effective contact lies dorsal/dorsomedial to the STN (region of the pallidofugal fibres and the rostral zona incerta) or at the junction between the dorsal border of the STN and the latter. We analysed our outcome data from Parkinson's disease patients treated with DBS between April 2002 and June 2004. During this period we moved our target from the STN to the region dorsomedial/medial to it and subsequently targeted the caudal part of the zona incerta nucleus (cZI). We present a comparison of the motor outcomes between these three groups of patients with optimal contacts within the STN (group 1), dorsomedial/medial to the STN (group 2) and in the cZI nucleus (group 3).
Thirty-five patients with Parkinson's disease underwent MRI directed implantation of 64 DBS leads into the STN (17), dorsomedial/medial to STN (20) and cZI (27). The primary outcome measure was the contralateral Unified Parkinson's Disease Rating Scale (UPDRS) motor score (off medication/off stimulation versus off medication/on stimulation) measured at follow-up (median time 6 months). The secondary outcome measures were the UPDRS III subscores of tremor, bradykinesia and rigidity. Dyskinesia score, L-dopa medication reduction and stimulation parameters were also recorded. The mean adjusted contralateral UPDRS III score with cZI stimulation was 3.1 (76% reduction) compared to 4.9 (61% reduction) in group 2 and 5.7 (55% reduction) in the STN (P-value for trend <0.001). There was a 93% improvement in tremor with cZI stimulation versus 86% in group 2 versus 61% in group 1 (P-value = 0.01). Adjusted 'off-on' rigidity scores were 1.0 for the cZI group (76% reduction), 2.0 for group 2 (52% reduction) and 2.1 for group 1 (50% reduction) (P-value for trend = 0.002). Bradykinesia was more markedly improved in the cZI group (65%) compared to group 2 (56%) or STN group (59%) (P-value for trend = 0.17). There were no statistically significant differences in the dyskinesia scores, L-dopa medication reduction and stimulation parameters between the three groups. Stimulation related complications were seen in some group 2 patients. High frequency stimulation of the cZI results in greater improvement in contralateral motor scores in Parkinson's disease patients than stimulation of the STN. We discuss the implications of this finding and the potential role played by the ZI in Parkinson's disease.", "title": "" }, { "docid": "6284f941fde73bdcd07687f731fbea16", "text": "The article describes the students' experiences of taking a blended learning postgraduate programme in a school of nursing and midwifery. The indications to date are that blended learning as a pedagogical tool has the potential to contribute and improve nursing and midwifery practice and enhance student learning. Little is reported about the students' experiences to date. Focus groups were conducted with students in the first year of introducing blended learning. The two main themes that were identified from the data were (1) the benefits of blended learning and (2) the challenges to blended learning. The blended learning experience was received positively by the students. A significant finding that was not reported in previous research was that the online component meant little time away from study for the students suggesting that it was more invasive on their everyday life. It is envisaged that the outcomes of the study will assist educators who are considering delivering programmes through blended learning. It should provide guidance for further developments and improvements in using Virtual Learning Environment (VLE) and blended learning in nurse education.", "title": "" }, { "docid": "1ff317c5514dfc1179ee7c474187d4e5", "text": "The emergence and spread of antibiotic resistance among pathogenic bacteria has been a rising problem for public health in recent decades. 
It is becoming increasingly recognized that not only antibiotic resistance genes (ARGs) encountered in clinical pathogens are of relevance, but rather, all pathogenic, commensal as well as environmental bacteria-and also mobile genetic elements and bacteriophages-form a reservoir of ARGs (the resistome) from which pathogenic bacteria can acquire resistance via horizontal gene transfer (HGT). HGT has caused antibiotic resistance to spread from commensal and environmental species to pathogenic ones, as has been shown for some clinically important ARGs. Of the three canonical mechanisms of HGT, conjugation is thought to have the greatest influence on the dissemination of ARGs. While transformation and transduction are deemed less important, recent discoveries suggest their role may be larger than previously thought. Understanding the extent of the resistome and how its mobilization to pathogenic bacteria takes place is essential for efforts to control the dissemination of these genes. Here, we will discuss the concept of the resistome, provide examples of HGT of clinically relevant ARGs and present an overview of the current knowledge of the contributions the various HGT mechanisms make to the spread of antibiotic resistance.", "title": "" } ]
scidocsrr
45761c8848e1e46ce9d9f595a1b83ff9
Unmanned aircraft systems in maritime operations: Challenges addressed in the scope of the SEAGULL project
[ { "docid": "238b49907eb577647354e4145f4b1e7e", "text": "The work here presented contributes to the development of ground target tracking control systems for fixed wing unmanned aerial vehicles (UAVs). The control laws are derived at the kinematic level, relying on a commercial inner loop controller onboard that accepts commands in indicated air speed and bank, and appropriately sets the control surface deflections and thrust in order to follow those references in the presence of unknown wind. Position and velocity of the target on the ground is assumed to be known. The algorithm proposed derives from a path following control law that enables the UAV to converge to a circumference centered at the target and moving with it, thus keeping the UAV in the vicinity of the target even if the target moves at a velocity lower than the UAV stall speed. If the target speed is close to the UAV speed, the control law behaves similar to a controller that tracks a particular T. Oliveira Science Laboratory, Portuguese Air Force Academy, Sintra, 2715-021, Portugal e-mail: [email protected] P. Encarnação (B) Faculty of Engineering, Catholic University of Portugal, Rio de Mouro, 2635-631, Portugal e-mail: [email protected] point on the circumference centered at the target position. Real flight tests results show the good performance of the control scheme presented.", "title": "" } ]
[ { "docid": "a5aa074c27add29fd038a83f02582fd1", "text": "We develop an efficient general-purpose blind/no-reference image quality assessment (IQA) algorithm using a natural scene statistics (NSS) model of discrete cosine transform (DCT) coefficients. The algorithm is computationally appealing, given the availability of platforms optimized for DCT computation. The approach relies on a simple Bayesian inference model to predict image quality scores given certain extracted features. The features are based on an NSS model of the image DCT coefficients. The estimated parameters of the model are utilized to form features that are indicative of perceptual quality. These features are used in a simple Bayesian inference approach to predict quality scores. The resulting algorithm, which we name BLIINDS-II, requires minimal training and adopts a simple probabilistic model for score prediction. Given the extracted features from a test image, the quality score that maximizes the probability of the empirically determined inference model is chosen as the predicted quality score of that image. When tested on the LIVE IQA database, BLIINDS-II is shown to correlate highly with human judgments of quality, at a level that is competitive with the popular SSIM index.", "title": "" }, { "docid": "0421752e1a7790a61da486cdd2db8012", "text": "This article describes the development of a new instrument, the Life Experiences Survey, for the measurement of life changes. It was designed to eliminate certain shortcomings of previous life stress measures and allows for separate assessment of positive and negative life experiences as well as individualized ratings of the impact of events. Several studies bearing on the usefulness of the Life Experiences Survey are presented, and the implications of the findings are discussed.", "title": "" }, { "docid": "d29f2b03b3ebe488a935e19d87c37226", "text": "Log analysis shows that PubMed users frequently use author names in queries for retrieving scientific literature. However, author name ambiguity may lead to irrelevant retrieval results. To improve the PubMed user experience with author name queries, we designed an author name disambiguation system consisting of similarity estimation and agglomerative clustering. A machine-learning method was employed to score the features for disambiguating a pair of papers with ambiguous names. These features enable the computation of pairwise similarity scores to estimate the probability of a pair of papers belonging to the same author, which drives an agglomerative clustering algorithm regulated by 2 factors: name compatibility and probability level. With transitivity violation correction, high precision author clustering is achieved by focusing on minimizing false-positive pairing. Disambiguation performance is evaluated with manual verification of random samples of pairs from clustering results. When compared with a state-of-the-art system, our evaluation shows that among all the pairs the lumping error rate drops from 10.1% to 2.2% for our system, while the splitting error rises from 1.8% to 7.7%. This results in an overall error rate of 9.9%, compared with 11.9% for the state-of-the-art method. Other evaluations based on gold standard data also show the increase in accuracy of our clustering. We attribute the performance improvement to the machine-learning method driven by a large-scale training set and the clustering algorithm regulated by a name compatibility scheme preferring precision. 
With integration of the author name disambiguation system into the PubMed search engine, the overall click-through-rate of PubMed users on author name query results improved from 34.9% to 36.9%.", "title": "" }, { "docid": "69789348f1e05c5de8b68ba7dbaa2c73", "text": "For hands-free man-machine audio interfaces with multi-channel sound reproduction and automatic speech recognition (ASR), both a multi-channel acoustic echo canceller (M-C AEC) and a beamforming microphone array are necessary for sufficient recognition rates. Based on known strategies for combining single-channel AEC and adaptive beamforming microphone arrays, we discuss special aspects for the extension to multi-channel AEC and propose an efficient system that can be implemented on a regular PC.", "title": "" }, { "docid": "cd35c6e2763b634d23de1903a3261c59", "text": "We investigate the Belousov-Zhabotinsky (BZ) reaction in an attempt to establish a basis for computation using chemical oscillators coupled via inhibition. The system consists of BZ droplets suspended in oil. Interdrop coupling is governed by the non-polar communicator of inhibition, Br2. We consider a linear arrangement of three droplets to be a NOR gate, where the center droplet is the output and the other two are inputs. Oxidation spikes in the inputs, which we define to be TRUE, cause a delay in the next spike of the output, which we read to be FALSE. Conversely, when the inputs do not spike (FALSE) there is no delay in the output (TRUE), thus producing the behavior of a NOR gate. We are able to reliably produce NOR gates with this behavior in microfluidic experiment.", "title": "" }, { "docid": "ec2d9c12a906eb999e7a178d0f672073", "text": "Passive-dynamic walkers are simple mechanical devices, composed of solid parts connected by joints, that walk stably down a slope. They have no motors or controllers, yet can have remarkably humanlike motions. This suggests that these machines are useful models of human locomotion; however, they cannot walk on level ground. Here we present three robots based on passive-dynamics, with small active power sources substituted for gravity, which can walk on level ground. These robots use less control and less energy than other powered robots, yet walk more naturally, further suggesting the importance of passive-dynamics in human locomotion.", "title": "" }, { "docid": "7f1ad50ce66c855776aaacd0d53279aa", "text": "A method to synchronize and control a system of parallel single-phase inverters without communication is presented. Inspired by the phenomenon of synchronization in networks of coupled oscillators, we propose that each inverter be controlled to emulate the dynamics of a nonlinear dead-zone oscillator. As a consequence of the electrical coupling between inverters, they synchronize and share the load in proportion to their ratings. We outline a sufficient condition for global asymptotic synchronization and formulate a methodology for controller design such that the inverter terminal voltages oscillate at the desired frequency, and the load voltage is maintained within prescribed bounds. We also introduce a technique to facilitate the seamless addition of inverters controlled with the proposed approach into an energized system. 
Experimental results for a system of three inverters demonstrate power sharing in proportion to power ratings for both linear and nonlinear loads.", "title": "" }, { "docid": "f2aff84f10b59cbc127dab6266cee11c", "text": "This paper extends the Argument Interchange Format to enable it to represent dialogic argumentation. One of the challenges is to tie together the rules expressed in dialogue protocols with the inferential relations between premises and conclusions. The extensions are founded upon two important analogies which minimise the extra ontological machinery required. First, locutions in a dialogue are analogous to AIF Inodes which capture propositional data. Second, steps between locutions are analogous to AIF S-nodes which capture inferential movement. This paper shows how these two analogies combine to allow both dialogue protocols and dialogue histories to be represented alongside monologic arguments in a single coherent system.", "title": "" }, { "docid": "244360e0815243d6a04d64a974da1b89", "text": "The life-history of Haplorchoides mehrai Pande & Shukla, 1976 is elucidated. The cercariae occurred in the thiarid snail Melanoides tuberculatus (Muller) collected from Chilka Lake, Orissa State. Metacercariae were found beneath the scales of Puntius sophore (Hamilton). Several species of catfishes in the lake served as definitive hosts. All stages in the life-cycle were successfully established under experimental conditions in the laboratory. The cercariae are of opisthorchioid type with a large globular and highly granular excretory bladder and seven pairs of pre-vesicular penetration glands. The adult flukes are redescribed to include details of the ventro-genital complex. Only three Indian species of the genus, i.e. H. attenuatus (Srivastava, 1935), H. pearsoni Pande & Shukla, 1976 and H. mehrai Pande & Shukla, 1976, are considered valid, and the remaining Indian species of the genus are considered as species inquirendae. The generic diagnosis of Haplorchoides is amended and the genus is included in the subfamily Haplorchiinae and the family Heterophyidae.", "title": "" }, { "docid": "c57c69fd1858b50998ec9706e34f6c46", "text": "Hashing has recently attracted considerable attention for large scale similarity search. However, learning compact codes with good performance is still a challenge. In many cases, the real-world data lies on a low-dimensional manifold embedded in high-dimensional ambient space. To capture meaningful neighbors, a compact hashing representation should be able to uncover the intrinsic geometric structure of the manifold, e.g., the neighborhood relationships between subregions. Most existing hashing methods only consider this issue during mapping data points into certain projected dimensions. When getting the binary codes, they either directly quantize the projected values with a threshold, or use an orthogonal matrix to refine the initial projection matrix, which both consider projection and quantization separately, and will not well preserve the locality structure in the whole learning process. In this paper, we propose a novel hashing algorithm called Locality Preserving Hashing to effectively solve the above problems. Specifically, we learn a set of locality preserving projections with a joint optimization framework, which minimizes the average projection distance and quantization loss simultaneously. 
Experimental comparisons with other state-of-the-art methods on two large scale datasets demonstrate the effectiveness and efficiency of our method.", "title": "" }, { "docid": "ee9bccbfecd58151569449911c624221", "text": "Hand motion capture is a popular research field, recently gaining more attention due to the ubiquity of RGB-D sensors. However, even most recent approaches focus on the case of a single isolated hand. In this work, we focus on hands that interact with other hands or objects and present a framework that successfully captures motion in such interaction scenarios for both rigid and articulated objects. Our framework combines a generative model with discriminatively trained salient points to achieve a low tracking error and with collision detection and physics simulation to achieve physically plausible estimates even in case of occlusions and missing visual data. Since all components are unified in a single objective function which is almost everywhere differentiable, it can be optimized with standard optimization techniques. Our approach works for monocular RGB-D sequences as well as setups with multiple synchronized RGB cameras. For a qualitative and quantitative evaluation, we captured 29 sequences with a large variety of interactions and up to 150 degrees of freedom.", "title": "" }, { "docid": "c49dbeeeb1ce4d0d5a528caf8fd595ff", "text": "Interpretation of medical images for diagnosis and treatment of complex disease from highdimensional and heterogeneous data remains a key challenge in transforming healthcare. In the last few years, both supervised and unsupervised deep learning achieved promising results in the area of medical imaging and image analysis. Unlike supervised learning which is biased towards how it is being supervised and manual efforts to create class label for the algorithm, unsupervised learning derive insights directly from the data itself, group the data and help to make data driven decisions without any external bias. This review systematically presents various unsupervised models applied to medical image analysis, including autoencoders and its several variants, Restricted Boltzmann machines, Deep belief networks, Deep Boltzmann machine and Generative adversarial network. Future research opportunities and challenges of unsupervised techniques for medical image analysis have also been discussed.", "title": "" }, { "docid": "f0d5a4bb917a8dd40f0f38fcc9460d3b", "text": "Simple decisions arise from the evaluation of sensory evidence. But decisions are determined by more than just evidence. Individuals establish internal decision criteria that influence how they respond. Where or how decision criteria are established in the brain remains poorly understood. Here, we show that neuronal activity in the superior colliculus (SC) predicts changes in decision criteria. Using a novel \"Yes-No\" task that isolates changes in decision criterion from changes in decision sensitivity, and computing neuronal measures of sensitivity and criterion, we find that SC neuronal activity correlates with the decision criterion regardless of the location of the choice report. We also show that electrical manipulation of activity within the SC produces changes in decisions consistent with changes in decision criteria and are largely independent of the choice report location. Our correlational and causal results together provide strong evidence that SC activity signals the position of a decision criterion. 
VIDEO ABSTRACT.", "title": "" }, { "docid": "da8a41e844c519842de524d791527ace", "text": "Advances in NLP techniques have led to a great demand for tagging and analysis of the sentiments from unstructured natural language data over the last few years. A typical approach to sentiment analysis is to start with a lexicon of positive and negative words and phrases. In these lexicons, entries are tagged with their prior out of context polarity. Unfortunately all efforts found in literature deal mostly with English texts. In this squib, we propose a computational technique of generating an equivalent SentiWordNet (Bengali) from publicly available English Sentiment lexicons and English-Bengali bilingual dictionary. The target language for the present task is Bengali, though the methodology could be replicated for any new language. There are two main lexical resources widely used in English for Sentiment analysis: SentiWordNet (Esuli et. al., 2006) and Subjectivity Word List (Wilson et. al., 2005). SentiWordNet is an automatically constructed lexical resource for English which assigns a positivity score and a negativity score to each WordNet synset. The subjectivity lexicon was compiled from manually developed resources augmented with entries learned from corpora. The entries in the Subjectivity lexicon have been labelled for part of speech (POS) as well as either strong or weak subjective tag depending on reliability of the subjective nature of the entry.", "title": "" }, { "docid": "ee37a743edd1b87d600dcf2d0050ca18", "text": "Recommender systems play a crucial role in mitigating the problem of information overload by suggesting users' personalized items or services. The vast majority of traditional recommender systems consider the recommendation procedure as a static process and make recommendations following a fixed strategy. In this paper, we propose a novel recommender system with the capability of continuously improving its strategies during the interactions with users. We model the sequential interactions between users and a recommender system as a Markov Decision Process (MDP) and leverage Reinforcement Learning (RL) to automatically learn the optimal strategies via recommending trial-and-error items and receiving reinforcements of these items from users' feedback. Users' feedback can be positive and negative and both types of feedback have great potentials to boost recommendations. However, the number of negative feedback is much larger than that of positive one; thus incorporating them simultaneously is challenging since positive feedback could be buried by negative one. In this paper, we develop a novel approach to incorporate them into the proposed deep recommender system (DEERS) framework. The experimental results based on real-world e-commerce data demonstrate the effectiveness of the proposed framework. Further experiments have been conducted to understand the importance of both positive and negative feedback in recommendations.", "title": "" }, { "docid": "3cd7523afa1b648516b86c5221a630e7", "text": "MOTIVATION\nAdvances in Next-Generation Sequencing technologies and sample preparation recently enabled generation of high-quality jumping libraries that have a potential to significantly improve short read assemblies. 
However, assembly algorithms have to catch up with experimental innovations to benefit from them and to produce high-quality assemblies.\n\n\nRESULTS\nWe present a new algorithm that extends recently described exSPAnder universal repeat resolution approach to enable its applications to several challenging data types, including jumping libraries generated by the recently developed Illumina Nextera Mate Pair protocol. We demonstrate that, with these improvements, bacterial genomes often can be assembled in a few contigs using only a single Nextera Mate Pair library of short reads.\n\n\nAVAILABILITY AND IMPLEMENTATION\nDescribed algorithms are implemented in C++ as a part of SPAdes genome assembler, which is freely available at bioinf.spbau.ru/en/spades.\n\n\nCONTACT\[email protected]\n\n\nSUPPLEMENTARY INFORMATION\nSupplementary data are available at Bioinformatics online.", "title": "" }, { "docid": "ff27912cfef17e66266bfcd013a874ee", "text": "The purpose of this note is to describe a useful lesson we learned on authentication protocol design. In a recent article [9], we presented a simple authentication protocol to illustrate the concept of a trusted server. The protocol has a flaw, which was brought to our attention by Mart~n Abadi of DEC. In what follows, we first describe the protocol and its flaw, and how the flaw-was introduced in the process of deriving the protocol from its correct full information version. We then introduce a principle, called the Principle of Full Information, and explain how its use could have prevented the protocol flaw. We believe the Principle of Full Information is a useful authentication protocol design principle, and advocate its use. Lastly, we present several heuristics for simplifying full information protocols and illustrate their application to a mutual authentication protocol.", "title": "" }, { "docid": "d71c8d9f5fed873937d6a645f17c9b47", "text": "Yang, C.-C., Prasher, S.O., Landry, J.-A., Perret, J. and Ramaswamy, H.S. 2000. Recognition of weeds with image processing and their use with fuzzy logic for precision farming. Can. Agric. Eng. 42:195200. Herbicide use can be reduced if the spatial distribution of weeds in the field is taken into account. This paper reports the initial stages of development of an image capture/processing system to detect weeds, as well as a fuzzy logic decision-making system to determine where and how much herbicide to apply in an agricultural field. The system used a commercially available digital camera and a personal computer. In the image processing stage, green objects in each image were identified using a greenness method that compared the red, green, and blue (RGB) intensities. The RGB matrix was reduced to a binary form by applying the following criterion: if the green intensity of a pixel was greater than the red and the blue intensities, then the pixel was assigned a value of one; otherwise the pixel was given a value of zero. The resulting binary matrix was used to compute greenness area for weed coverage, and greenness distribution of weeds (weed patch). The values of weed coverage and weed patch were inputs to the fuzzy logic decision-making system, which used the membership functions to control the herbicide application rate at each location. 
Simulations showed that a graduated fuzzy strategy could potentially reduce herbicide application by 5 to 24%, and that an on/off strategy resulted in an even greater reduction of 15 to 64%.", "title": "" }, { "docid": "54c6e02234ce1c0f188dcd0d5ee4f04c", "text": "The World Wide Web is a vast resource for information. At the same time it is extremely distributed. A particular type of data such as restaurant lists may be scattered across thousands of independent information sources in many di erent formats. In this paper, we consider the problem of extracting a relation for such a data type from all of these sources automatically. We present a technique which exploits the duality between sets of patterns and relations to grow the target relation starting from a small sample. To test our technique we use it to extract a relation of (author,title) pairs from the World Wide Web.", "title": "" }, { "docid": "46ad3ffba69ccf8f41fa598891e571d8", "text": "Spectral band selection is a fundamental problem in hyperspectral data processing. In this letter, a new band-selection method based on mutual information (MI) is proposed. MI measures the statistical dependence between two random variables and can therefore be used to evaluate the relative utility of each band to classification. A new strategy is described to estimate the MI using a priori knowledge of the scene, reducing reliance on a \"ground truth\" reference map, by retaining bands with high associated MI values (subject to the so-called \"complementary\" conditions). Simulations of classification performance on 16 classes of vegetation from the AVIRIS 92AV3C data set show the effectiveness of the method, which outperforms an MI-based method using the associated reference map, an entropy-based method, and a correlation-based method. It is also competitive with the steepest ascent algorithm at much lower computational cost", "title": "" } ]
scidocsrr
08ea010038540781b99e6b6cdf27111c
Role of hope in academic and sport achievement.
[ { "docid": "e49e74c4104116b54d49147028c3392d", "text": "Defining hope as a cognitive set comprising agency (belief in one's capacity to initiate and sustain actions) and pathways (belief in one's capacity to generate routes) to reach goals, the Hope Scale was developed and validated previously as a dispositional self-report measure of hope (Snyder et al., 1991). The present 4 studies were designed to develop and validate a measure of state hope. The 6-item State Hope Scale is internally consistent and reflects the theorized agency and pathways components. The relationships of the State Hope Scale to other measures demonstrate concurrent and discriminant validity; moreover, the scale is responsive to events in the lives of people as evidenced by data gathered through both correlational and causal designs. The State Hope Scale offers a brief, internally consistent, and valid self-report measure of ongoing goal-directed thinking that may be useful to researchers and applied professionals.", "title": "" } ]
[ { "docid": "8994470e355b5db188090be731ee4fe9", "text": "A system that allows museums to build and manage Virtual and Augmented Reality exhibitions based on 3D models of artifacts is presented. Dynamic content creation based on pre-designed visualization templates allows content designers to create virtual exhibitions very efficiently. Virtual Reality exhibitions can be presented both inside museums, e.g. on touch-screen displays installed inside galleries and, at the same time, on the Internet. Additionally, the presentation based on Augmented Reality technologies allows museum visitors to interact with the content in an intuitive and exciting manner.", "title": "" }, { "docid": "f7aceafa35aaacb5b2b854a8b7e275b6", "text": "In this paper, the study and implementation of a high frequency pulse LED driver with self-oscillating circuit is presented. The self-oscillating half-bridge series resonant inverter is adopted in this LED driver and the circuit characteristics of LED with high frequency pulse driving voltage is also discussed. LED module is connected with full bridge diode rectifier but without low pass filter and this LED module is driven with high frequency pulse. In additional, the self-oscillating resonant circuit with saturable core is used to achieve zero voltage switching and to control the LED current. The LED equivalent circuit of resonant circuit and the operating principle of the self-oscillating half-bridge inverter are discussed in detail. Finally, an 18 W high frequency pulse LED driver is implemented to verify the feasibility. Experimental results show that the circuit efficiency is over 86.5% when input voltage operating within AC 110 ± 10 Vrms and the maximum circuit efficiency is up to 89.2%.", "title": "" }, { "docid": "14f727c752ad03f52e9811c17a3b0302", "text": "In this paper, we describe the development of a modular semiconductor switch, based on the compact IGBT switch, presented at the 14th EML. Using a discrete 18-kV IGBT switching module, we tested first two and then three of these modules connected in series. The goal was to handle 30 kV with two switches and 50 kV with three switches. In order to have safe operating conditions during the experiments, the current was limited by the load resistor and should not exceed 500 A. All the tests were carried out as single-shot tests. To get a safe synchronous switching, it was necessary to have a trigger unit that is capable of creating a signal triggering all the modules at the same time. For this purpose, we used a trigger unit, which was inductively coupled by means of a ferrite core system to the modules, to create the gate signals for all the discrete IGBTs on the board simultaneously. With that experimental setup, we have shown that, by using the modular IGBT-based semiconductor switch, it is possible to handle 50 kV. Depending on the available load resistor, the current which could be handled amounted to 450 A at the maximum.", "title": "" }, { "docid": "8a564e77710c118e4de86be643b061a6", "text": "SOAR is a cognitive architecture named from state, operator and result, which is adopted to portray the drivers’ guidance compliance behavior on variable message sign VMS in this paper. VMS represents traffic conditions to drivers by three colors: red, yellow, and green. Based on the multiagent platform, SOAR is introduced to design the agent with the detailed description of the working memory, long-term memory, decision cycle, and learning mechanism. 
With the fixed decision cycle, agent transforms state through four kinds of operators, including choosing route directly, changing the driving goal, changing the temper of driver, and changing the road condition of prediction. The agent learns from the process of state transformation by chunking and reinforcement learning. Finally, computerized simulation program is used to study the guidance compliance behavior. Experiments are simulated many times under given simulation network and conditions. The result, including the comparison between guidance and no guidance, the state transition times, and average chunking times are analyzed to further study the laws of guidance compliance and learning mechanism.", "title": "" }, { "docid": "65d60131b1ceba50399ceffa52de7e8a", "text": "Cox, Matthew L. Miller, and Jeffrey A. Bloom. San Diego, CA: Academic Press, 2002, 576 pp. $69.96 (hardbound). A key ingredient to copyright protection, digital watermarking provides a solution to the illegal copying of material. It also has broader uses in recording and electronic transaction tracking. This book explains “the principles underlying digital watermarking technologies, describes the requirements that have given rise to them, and discusses the diverse ends to which these technologies are being applied.” [book notes] The authors are extensively experienced in digital watermarking technologies. Cox recently joined the NEC Research Institute after a five-year stint at AT&T Bell Labs. Miller’s interest began at AT&T Bell Labs in 1979. He also is employed at NEC. Bloom is a researcher in digital watermarking at the Sarnoff Corporation. His acquaintance with the field began at Signafy, Inc. and continued through his employment at NEC Research Institute. The book features the following: Review of the underlying principles of watermarking relevant for image, video, and audio; Discussion of a wide variety of applications, theoretical principles, detection and embedding concepts, and key properties; Examination of copyright protection and other applications; Presentation of a series of detailed examples that illustrate watermarking concepts and practices; Appendix, in print and on the Web, containing the source code for the examples; Comprehensive glossary of terms. “The authors provide a comprehensive overview of digital watermarking, rife with detailed examples and grounded within strong theoretical framework. Digital Watermarking will serve as a valuable introduction as well as a useful reference for those engaged in the field.”—Walter Bender, Director, M.I.T. Media Lab", "title": "" }, { "docid": "fb655a622c2e299b8d7f8b85769575b4", "text": "With the substantial development of digital technologies in multimedia, network communication and user interfaces, we are seeing an increasing number of applications of these technologies, in particular in the entertainment domain. They include computer gaming, elearning, high-definition and interactive TVs, and virtual environments. The development of these applications typically involves the integration of existing technologies as well as the development of new technologies. This Introduction summarizes latest interactive entertainment technologies and applications, and briefly highlights some potential research directions. It also introduces the seven papers that are accepted to the special issue. 
Hopefully, this will provide the readers some insights into future research topics in interactive entertainment technologies and applications.", "title": "" }, { "docid": "3f0c5833a1637e1795a8316334d1cf9b", "text": "If human social anxiety is not predominately about the fear of physical injury or attack, as it is in other animals, then, to understand human social anxiety (i.e., fear of evaluation), it is necessary to consider why certain types of relationships are so important. Why do humans need to court the good feelings of others and fear not doing so? And why, when people wish to appear attractive to others (e.g., to make friends, date a desired sexual partner, or give a good presentation), do some people become so overwhelmed with anxiety that they behave submissively and fearfully (which can be seen as unattractive) or are avoidant? This article has suggested that humans have evolved to compete for attractiveness to make good impressions because these are related to eliciting important social resources and investments from others. These, in turn, have been linked to inclusive fitness and have physiological regulating effects. Being allocated a low social rank or ostracized carries many negative consequences for controlling social resources and physiological regulation. Social anxiety, like shame, can be adaptive to the extent that it helps people to \"stay on track\" with what is socially acceptable and what is not and could result in social sanction and exclusion. However, dysfunctional social anxiety is the result of activation of basic defensive mechanisms (and modules for) for threat detection and response (e.g., inhibition, eye-gaze avoidance, flight, or submission) that can be recruited rapidly for dealing with immediate threats, override conscious wishes, and interfere with being seen as a \"useful associate.\" Second, this article has suggested that socially anxious people are highly attuned to the competitive dynamics of trying to elicit approval and investment from others but that they perceive themselves to start from an inferior (i.e., low-rank) position and, because of this, activate submissive defensives when attempting to present themselves as confident, able, and attractive to others. These submissive defenses (which evolved to inhibit animals in low-rank positions from making claims on resources or up-rank bids) interfere with confident performance, leading to a failure cycle. While psychological therapies may target specific modules, cognitions, and behaviors (e.g., damage limitation behaviors, eyes gaze avoidance, theory of mind beliefs) that underpin social anxiety, drugs may work by having a more generalized effect on the threat-safety balance such that there is a different \"weighting\" given to various social threats and opportunities. If social anxiety (and disorders associated with it) are increasing in the modern age, one reason may be invigorated competition for social prestige, attractiveness, and resources.", "title": "" }, { "docid": "65aed4d07ba558da05d3458884d8b67b", "text": "This paper proposes an input voltage sensorless control algorithm for three-phase active boost rectifiers. Using this approach, the input ac-phase voltages can be accurately estimated from the fluctuations of other measured state variables and preceding switching state information from converter dynamics. 
Furthermore, the proposed control strategy reduces the input current harmonics of an ac–dc three-phase boost power factor correction (PFC) converter by injecting an additional common-mode duty ratio term to the feedback controllers’ outputs. This additional duty compensation term cancels the unwanted input harmonics, caused by the floating potential between ac source neutral and dc link negative, without requiring any access to the neutral point. A 6-kW (continuous power)/10-kW (peak power) three-phase boost PFC prototype using SiC-based semiconductor switching devices is designed and developed to validate the proposed control algorithm. The experimental results show that an input power factor of 0.999 with a conversion efficiency of 98.3%, total harmonic distortion as low as 4%, and a tightly regulated dc-link voltage with 1% ripple can be achieved.", "title": "" }, { "docid": "c347f649a6a183d7ee3f5abddfcbc2a1", "text": "Concern has grown regarding possible harm to the social and psychological development of children and adolescents exposed to Internet pornography. Parents, academics and researchers have documented pornography from the supply side, assuming that its availability explains consumption satisfactorily. The current paper explored the user's dimension, probing whether pornography consumers differed from other Internet users, as well as the social characteristics of adolescent frequent pornography consumers. Data from a 2004 survey of a national representative sample of the adolescent population in Israel were used (n=998). Adolescent frequent users of the Internet for pornography were found to differ in many social characteristics from the group that used the Internet for information, social communication and entertainment. Weak ties to mainstream social institutions were characteristic of the former group but not of the latter. X-rated material consumers proved to be a distinct sub-group at risk of deviant behaviour.", "title": "" }, { "docid": "86d725fa86098d90e5e252c6f0aaab3c", "text": "This paper illustrates the manner in which UML can be used to study mappings to different types of database systems. After introducing UML through a comparison to the EER model, UML diagrams are used to teach different approaches for mapping conceptual designs to the relational model. As we cover object-oriented and object-relational database systems, different features of UML are used over the same enterprise example to help students understand mapping alternatives for each model. Students are required to compare and contrast the mappings in each model as part of the learning process. For object-oriented and object-relational database systems, we address mappings to the ODMG and SQL99 standards in addition to specific commercial implementations.", "title": "" }, { "docid": "3323feaddbdf0937cef4ecf7dcedc263", "text": "Cloud storage services have become increasingly popular. Because of the importance of privacy, many cloud storage encryption schemes have been proposed to protect data from those who do not have access. All such schemes assumed that cloud storage providers are safe and cannot be hacked; however, in practice, some authorities (i.e., coercers) may force cloud storage providers to reveal user secrets or confidential data on the cloud, thus altogether circumventing storage encryption schemes. In this paper, we present our design for a new cloud storage encryption scheme that enables cloud storage providers to create convincing fake user secrets to protect user privacy. 
Since coercers cannot tell if obtained secrets are true or not, the cloud storage providers ensure that user privacy is still securely protected.", "title": "" }, { "docid": "18d8fe3f77ab8878ae2eb72b04fa8a48", "text": "A new magneto-electric dipole antenna with a unidirectional radiation pattern is proposed. A novel differential feeding structure is designed to provide an ultra-wideband impedance matching. A stable gain of 8.25±1.05 dBi is realized by introducing two slots in the magneto-electric dipole and using a rectangular box-shaped reflector, instead of a planar reflector. The antenna can achieve an impedance bandwidth of 114% for SWR ≤ 2 from 2.95 to 10.73 GHz. Radiation patterns with low cross polarization, low back radiation, fixing broadside direction mainbeam and symmetrical E- and H -plane patterns are obtained over the operating frequency range. Moreover, the correlation factor between the transmitting antenna input signal and the receiving antenna output signal is calculated for evaluating the time-domain characteristic. The proposed antenna, which is small in size, can be constructed easily by using PCB fabrication technique.", "title": "" }, { "docid": "0892815a2c9fb257faad12ca4c64a47d", "text": "Evidence indicates that, despite some critical successes, current conservation approaches are not slowing the overall rate of biodiversity loss. The field of synthetic biology, which is capable of altering natural genomes with extremely precise editing, might offer the potential to resolve some intractable conservation problems (e.g., invasive species or pathogens). However, it is our opinion that there has been insufficient engagement by the conservation community with practitioners of synthetic biology. We contend that rapid, large-scale engagement of these two communities is urgently needed to avoid unintended and deleterious ecological consequences. To this point we describe case studies where synthetic biology is currently being applied to conservation, and we highlight the benefits to conservation biologists from engaging with this emerging technology.", "title": "" }, { "docid": "faca51b6762e4d7c3306208ad800abd3", "text": "Two images of a single scene/object are related by the epipolar geometry, which can be described by a 3×3 singular matrix called the essential matrix if images' internal parameters are known, or the fundamental matrix otherwise. It captures all geometric information contained in two images, and its determination is very important in many applications such as scene modeling and vehicle navigation. This paper gives an introduction to the epipolar geometry, and provides a complete review of the current techniques for estimating the fundamental matrix and its uncertainty. A well-founded measure is proposed to compare these techniques. Projective reconstruction is also reviewed. The software which we have developed for this review is available on the Internet.", "title": "" }, { "docid": "ffa325ffa524b529e0b0af2776c19ab0", "text": "The proliferation of the Internet of Things has increased reliance on voice-controlled devices to perform everyday tasks. Although these devices rely on accurate speechrecognition for correct functionality, many users experience frequent misinterpretations in normal use. In this work, we conduct an empirical analysis of interpretation errors made by Amazon Alexa, the speech-recognition engine that powers the Amazon Echo family of devices. 
We leverage a dataset of 11,460 speech samples containing English words spoken by American speakers and identify where Alexa misinterprets the audio inputs, how often, and why. We find that certain misinterpretations appear consistently in repeated trials and are systematic. Next, we present and validate a new attack, called skill squatting. In skill squatting, an attacker leverages systematic errors to route a user to malicious application without their knowledge. In a variant of the attack we call spear skill squatting, we further demonstrate that this attack can be targeted at specific demographic groups. We conclude with a discussion of the security implications of speech interpretation errors, countermeasures, and future work.", "title": "" }, { "docid": "2e5ce96ba3c503704a9152ae667c24ec", "text": "We use methods of classical and quantum mechanics for mathematical modeling of price dynamics at the financial market. The Hamiltonian formalism on the price/price-change phase space is used to describe the classical-like evolution of prices. This classical dynamics of prices is determined by ”hard” conditions (natural resources, industrial production, services and so on). These conditions as well as ”hard” relations between traders at the financial market are mathematically described by the classical financial potential. At the real financial market ”hard” conditions are not the only source of price changes. The information exchange and market psychology play important (and sometimes determining) role in price dynamics. We propose to describe this ”soft” financial factors by using the pilot wave (Bohmian) model of quantum mechanics. The theory of financial mental (or psychological) waves is used to take into account market psychology. The real trajectories of prices are determined (by the financial analogue of the second Newton law) by two financial potentials: classical-like (”hard” market conditions) and quantum-like (”soft” market conditions).", "title": "" }, { "docid": "58640b446a3c03ab8296302498e859a5", "text": "With Islands of Music we present a system which facilitates exploration of music libraries without requiring manual genre classification. Given pieces of music in raw audio format we estimate their perceived sound similarities based on psychoacoustic models. Subsequently, the pieces are organized on a 2-dimensional map so that similar pieces are located close to each other. A visualization using a metaphor of geographic maps provides an intuitive interface where islands resemble genres or styles of music. We demonstrate the approach using a collection of 359 pieces of music.", "title": "" }, { "docid": "1aa39f265d476fca4c54af341b6f2bde", "text": "Explaining the output of a complicated machine learning model like a deep neural network (DNN) is a central challenge in machine learning. Several proposed local explanation methods address this issue by identifying what dimensions of a single input are most responsible for a DNN’s output. The goal of this work is to assess the sensitivity of local explanations to DNN parameter values. Somewhat surprisingly, we find that DNNs with randomly-initialized weights produce explanations that are both visually and quantitatively similar to those produced by DNNs with learned weights. 
Our conjecture is that this phenomenon occurs because these explanations are dominated by the lower level features of a DNN, and that a DNN’s architecture provides a strong prior which significantly affects the representations learned at these lower layers.", "title": "" }, { "docid": "a74964a6b7818a454841c29a4d4332ec", "text": "While the main trend of 3D object recognition has been to infer object detection from single views of the scene - i.e., 2.5D data - this work explores the direction on performing object recognition on 3D data that is reconstructed from multiple viewpoints, under the conjecture that such data can improve the robustness of an object recognition system. To achieve this goal, we propose a framework whichreal-time segmentation is able (i) to carry out incremental real-time segmentation of a 3D scene while being reconstructed via Simultaneous Localization And Mapping (SLAM), and (ii) to simultaneously and incrementally carry out 3D object recognition and pose estimation on the reconstructed and segmented 3D representations. Experimental results demonstrate the advantages of our approach with respect to traditional single view-based object recognition and pose estimation approaches, as well as its usefulness in robotic perception and augmented reality applications.", "title": "" } ]
scidocsrr
60bc47418b5db1817047279549229a84
Sales Promotion and Purchasing Intention : Applying the Technology Acceptance Model in Consumer-to-Consumer
[ { "docid": "86e0f8065b473b9fac8b869bdb126685", "text": "The technology acceptance model (Davis 1989) is one of the most widely used models of IT adoption. According to TAM, IT adoption is influenced by two perceptions: usefulness and ease-of-use. Research has shown that perceived usefulness (PU) affects intended adoption of IT, but has mostly failed to do so regarding perceived ease of use (PEOU). The basic proposition of this study is that this varying importance of PEOU may be related to the nature of the task. PEOU relates to assessments of the intrinsic characteristics of IT, such as the ease of use, ease of learning, flexibility, and clarity of its interface. PU, on the other hand, is a response to user assessment of its extrinsic, i.e., task-oriented, outcomes: how IT", "title": "" } ]
[ { "docid": "7247eb6b90d23e2421c0d2500359d247", "text": "The large-scale collection and exploitation of personal information to drive targeted online advertisements has raised privacy concerns. As a step towards understanding these concerns, we study the relationship between how much information is collected and how valuable it is for advertising. We use HTTP traces consisting of millions of users to aid our study and also present the first comparative study between aggregators. We develop a simple model that captures the various parameters of today's advertising revenues, whose values are estimated via the traces. Our results show that per aggregator revenue is skewed (5% accounting for 90% of revenues), while the contribution of users to advertising revenue is much less skewed (20% accounting for 80% of revenue). Google is dominant in terms of revenue and reach (presence on 80% of publishers). We also show that if all 5% of the top users in terms of revenue were to install privacy protection, with no corresponding reaction from the publishers, then the revenue can drop by 30%.", "title": "" }, { "docid": "1b92575dd7c34c3d89fe3b1629731c40", "text": "In this paper we give an overview of our research on nonphotorealistic rendering methods for computer-generated pencil drawing. Our approach to the problem of simulating pencil drawings was to break it down into the subproblems of (1) simulating first the drawing materials (graphite pencil and drawing paper, blenders and kneaded eraser), (2) developing drawing primitives (individual pencil strokes and mark-making to create tones and textures), (3) simulating the basic rendering techniques (outlining and shading of 3D models) used by artists and illustrators familiar with pencil rendering, and (4) implementing the control of drawing steps from preparatory sketches to finished rendering results. We demonstrate the capabilities of our approach with a variety of images generated from reference images and 3D models. CR Categories: I.3.3 [Computer Graphics]: Picture/Image Generation—Display algorithms; I.6.3 [Simulation and Modeling]: Applications—.", "title": "" }, { "docid": "3e66d3e2674bdaa00787259ac99c3f68", "text": "Dempster-Shafer theory offers an alternative to traditional probabilistic theory for the mathematical representation of uncertainty. The significant innovation of this framework is that it allows for the allocation of a probability mass to sets or intervals. DempsterShafer theory does not require an assumption regarding the probability of the individual constituents of the set or interval. This is a potentially valuable tool for the evaluation of risk and reliability in engineering applications when it is not possible to obtain a precise measurement from experiments, or when knowledge is obtained from expert elicitation. An important aspect of this theory is the combination of evidence obtained from multiple sources and the modeling of conflict between them. This report surveys a number of possible combination rules for Dempster-Shafer structures and provides examples of the implementation of these rules for discrete and interval-valued data.", "title": "" }, { "docid": "c5ae1d66d31128691e7e7d8e2ccd2ba8", "text": "The scope of this paper is two-fold: firstly it proposes the application of a 1-2-3 Zones approach to Internet of Things (IoT)-related Digital Forensics (DF) investigations. Secondly, it introduces a Next-Best-Thing Triage (NBT) Model for use in conjunction with the 1-2-3 Zones approach where necessary and vice versa. 
These two `approaches' are essential for the DF process from an IoT perspective: the atypical nature of IoT sources of evidence (i.e. Objects of Forensic Interest - OOFI), the pervasiveness of the IoT environment and its other unique attributes - and the combination of these attributes - dictate the necessity for a systematic DF approach to incidents. The two approaches proposed are designed to serve as a beacon to incident responders, increasing the efficiency and effectiveness of their IoT-related investigations by maximizing the use of the available time and ensuring relevant evidence identification and acquisition. The approaches can also be applied in conjunction with existing, recognised DF models, methodologies and frameworks.", "title": "" }, { "docid": "2701f46ac9a473cb809773df5ae1d612", "text": "Testing and measuring the security of software system architectures is a difficult task. An attempt is made in this paper to analyze the issues of architecture security of object-oriented software’s using common security concepts to evaluate the security of a system under design. Object oriented systems are based on various architectures like COM, DCOM, CORBA, MVC and Broker. In object oriented technology the basic system component is an object. Individual system component is posing it own risk in the system. Security policies and the associated risk in these software architectures can be calculated for the individual component. Overall risk can be calculated based on the context and risk factors in the architecture. Small risk factors get accumulated together and form a major risk in the systems and can damage the systems.", "title": "" }, { "docid": "7f0a6e9a1bcdf8b12ac4273138eb7523", "text": "The graph-search algorithms developed between 60s and 80s were widely used in many fields, from robotics to video games. The A* algorithm shall be mentioned between some of the most important solutions explicitly oriented to motion-robotics, improving the logic of graph search with heuristic principles inside the loop. Nevertheless, one of the most important drawbacks of the A* algorithm resides in the heading constraints connected with the grid characteristics. Different solutions were developed in the last years to cope with this problem, based on postprocessing algorithms or on improvements of the graph-search algorithm itself. A very important one is Theta* that refines the graph search allowing to obtain paths with “any” heading. In the last two years, the Flight Mechanics Research Group of Politecnico di Torino studied and implemented different path planning algorithms. A L. De Filippis (B) · G. Guglieri · F. Quagliotti Dipartimento di Ingegneria Aeronautica e Spaziale, Politecnico di Torino, Corso Duca Degli Abruzzi 24, 10129 Turin, Italy e-mail: [email protected] G. Guglieri e-mail: [email protected] F. Quagliotti e-mail: [email protected] Matlab based planning tool was developed, collecting four separate approaches: geometric predefined trajectories, manual waypoint definition, automatic waypoint distribution (i.e. optimizing camera payload capabilities) and a comprehensive A*-based algorithm used to generate paths, minimizing risk of collision with orographic obstacles. The tool named PCube exploits Digital Elevation Maps (DEMs) to assess the risk maps and it can be used to generate waypoint sequences for UAVs autopilots. In order to improve the A*-based algorithm, the solution is extended to tri-dimensional environments implementing a more effective graph search (based on Theta*). 
In this paper the application of basic Theta* to tridimensional path planning will be presented. Particularly, the algorithm is applied to orographic obstacles and in urban environments, to evaluate the solution for different kinds of obstacles. Finally, a comparison with the A* algorithm will be introduced as a metric of the algorithm", "title": "" }, { "docid": "2615de62d2b2fa8a15e79ca2a3a57a3b", "text": "Recent evidence has shown that entrants into self-employment are disproportionately drawn from the tails of the earnings and ability distributions. This observation is explained by a multi-task model of occupational choice in which frictions in the labor market induces mismatches between firms and workers, and mis-assignment of workers to tasks. The model also yields distinctive predictions relating prior work histories to earnings and to the probability of entry into self-employment. These predictions are tested with the Korean Labor and Income Panel Study, from which we find considerable support for the model.", "title": "" }, { "docid": "9dfef5bc76b78e7577b9eb377b830a9e", "text": "Patients with Parkinson's disease may have difficulties in speaking because of the reduced coordination of the muscles that control breathing, phonation, articulation and prosody. Symptoms that may occur because of changes are weakening of the volume of the voice, voice monotony, changes in the quality of the voice, speed of speech, uncontrolled repetition of words. The evaluation of some of the disorders mentioned can be achieved through measuring the variation of parameters in an objective manner. It may be done to evaluate the response to the treatments with intra-daily frequency pre / post-treatment, as well as in the long term. Software systems allow these measurements also by recording the patient's voice. This allows to carry out a large number of tests by means of a larger number of patients and a higher frequency of the measurements. The main goal of our work was to design and realize Voxtester, an effective and simple to use software system useful to measure whether changes in voice emission are sensitive to pharmacologic treatments. Doctors and speech therapists can easily use it without going into the technical details, and we think that this goal is reached only by Voxtester, up to date.", "title": "" }, { "docid": "72c0cef98023dd5b6c78e9c347798545", "text": "Several works have shown that Convolutional Neural Networks (CNNs) can be easily adapted to different datasets and tasks. However, for extracting the deep features from these pre-trained deep CNNs a fixedsize (e.g., 227×227) input image is mandatory. Now the state-of-the-art datasets like MIT-67 and SUN-397 come with images of different sizes. Usage of CNNs for these datasets enforces the user to bring different sized images to a fixed size either by reducing or enlarging the images. The curiosity is obvious that “Isn’t the conversion to fixed size image is lossy ?”. In this work, we provide a mechanism to keep these lossy fixed size images aloof and process the images in its original form to get set of varying size deep feature maps, hence being lossless. We also propose deep spatial pyramid match kernel (DSPMK) which amalgamates set of varying size deep feature maps and computes a matching score between the samples. Proposed DSPMK act as a dynamic kernel in the classification framework of scene dataset using support vector machine. 
We demonstrated the effectiveness of combining the power of varying size CNN-based set of deep feature maps with dynamic kernel by achieving state-of-the-art results for high-level visual recognition tasks such as scene classification on standard datasets like MIT67 and SUN397.", "title": "" }, { "docid": "ffc2401f4f6b22edf48814f2388620a1", "text": "QR factorization is a ubiquitous operation in many engineering and scientific applications. In this paper, we present efficient realization of Householder Transform (HT) based QR factorization through algorithm-architecture co-design where we achieve performance improvement of 3-90x in-terms of Gflops/watt over state-of-the-art multicore, General Purpose Graphics Processing Units (GPGPUs), Field Programmable Gate Arrays (FPGAs), and ClearSpeed CSX700. Theoretical and experimental analysis of classical HT is performed for opportunities to exhibit higher degree of parallelism where parallelism is quantified as a number of parallel operations per level in the Directed Acyclic Graph (DAG) of the transform. Based on theoretical analysis of classical HT, an opportunity to re-arrange computations in the classical HT is identified that results in Modified HT (MHT) where it is shown that MHT exhibits 1.33x times higher parallelism than classical HT. Experiments in off-the-shelf multicore and General Purpose Graphics Processing Units (GPGPUs) for HT and MHT suggest that MHT is capable of achieving slightly better or equal performance compared to classical HT based QR factorization realizations in the optimized software packages for Dense Linear Algebra (DLA). We implement MHT on a customized platform for Dense Linear Algebra (DLA) and show that MHT achieves 1.3x better performance than native implementation of classical HT on the same accelerator. For custom realization of HT and MHT based QR factorization, we also identify macro operations in the DAGs of HT and MHT that are realized on a Reconfigurable Data-path (RDP). We also observe that due to re-arrangement in the computations in MHT, custom realization of MHT is capable of achieving 12 percent better performance improvement over multicore and GPGPUs than the performance improvement reported by General Matrix Multiplication (GEMM) over highly tuned DLA software packages for multicore and GPGPUs which is counter-intuitive.", "title": "" }, { "docid": "392352757f21eaf00e0f909b13452181", "text": "While hotels come up with various discount strategies to attract consumers, especially during a recession, both hotels and consumers seem to favor dynamic pricing. Previous studies also indicated that price discounts give consumers not only monetary benefits but also positive emotional responses. The purpose of this study was to investigate how uniform pricing and dynamic pricing influence consumers’ emotion and behavior, identifying the role of their involvement. Results of study suggested that high involvement consumers responded more positively to dynamic pricing than uniform pricing. 
Moreover, younger and female consumers were more likely to be involved in obtaining a discount, and high involvement consumers showed more positive feelings, and were more likely to tell others, and make repeat purchases from a discount as compared to low involvement consumers.", "title": "" }, { "docid": "742f115d2ba9b9ee8862fe5a0c5497f6", "text": "This paper targets the design of a high dynamic range low-power, low-noise pixel readout integrated circuit (ROIC) that handles the infrared (IR) detector’s output signal of the uncooled thermal IR camera. Throughout the paper, both the optics and the IR detector modules of the IR camera are modeled using the analogue hardware description language (AHDL) to enable extracting the proper input signal required for the ROIC design. A capacitive trans-impedance amplifier (CTIA) is selected for design as a column level ROIC. The core of the CTIA is designed for minimum power consumption by operation in the sub-threshold region. In addition, a design of correlated double sampling (CDS) technique is applied to the CTIA to minimize the noise and the offset levels. The presented CTIA design achieves a power consumption of 5.2μW and root mean square (RMS) output noise of 6.9μV. All the circuits were implemented in 0.13μm CMOS process technology. The design rule check (DRC), layout versus schematic (LVS), parasitic extraction (PE), Process-voltage-temperature (PVT) analysis and post-layout simulation are performed for all designed circuits. The post-layout simulation results illustrate enhancement of the power consumption and noise performance compared to other published ROIC designs. Finally, a new widening dynamic range (WDR) technique is applied to the CTIA with the CDS circuit designs to increase the dynamic range (DR).", "title": "" }, { "docid": "581ba39d86678aa23cd9348bbd997c72", "text": "We present a system to track the positions of multiple persons in a scene from overlapping cameras. The distinguishing aspect of our method is a novel, two-step approach that jointly estimates person position and track assignment. The proposed approach keeps solving the assignment problem tractable, while taking into account how different assignments influence feature measurement. In a hypothesis generation stage, the similarity between a person at a particular position and an active track is based on a subset of cues (appearance, motion) that are guaranteed observable in the camera views. This allows for efficient computation of the K-best joint estimates for person position and track assignment under an approximation of the likelihood function. In a subsequent hypothesis verification stage, the known person positions associated with these K-best solutions are used to define a larger set of actually visible cues, which enables a re-ranking of the found assignments using the full likelihood function. We demonstrate that our system outperforms the state-of-the-art on four challenging multi-person datasets (indoor and outdoor), involving 3–5 overlapping cameras and up to 23 persons simultaneously. Two of these datasets are novel: we make the associated images and annotations public to facilitate", "title": "" }, { "docid": "763b8982d13b0637a17347b2c557f1f8", "text": "This paper describes an application of Case-Based Reasoning to the problem of reducing the number of final-line fraud investigations in the credit approval process. 
The performance of a suite of algorithms which are applied in combination to determine a diagnosis from a set of retrieved cases is reported. An adaptive diagnosis algorithm combining several neighbourhood-based and probabilistic algorithms was found to have the best performance, and these results indicate that an adaptive solution can provide fraud filtering and case ordering functions for reducing the number of final-line fraud investigations necessary.", "title": "" }, { "docid": "45be297c4363996ae0c4fa7930dd8e12", "text": "AIMS\nTo examine exposure to workplace bullying as a risk factor for cardiovascular disease and depression in employees.\n\n\nMETHODS\nLogistic regression models were related to prospective data from two surveys in a cohort of 5432 hospital employees (601 men and 4831 women), aged 18-63 years. Outcomes were new reports of doctor diagnosed cardiovascular disease and depression during the two year follow up among those who were free from these diseases at baseline.\n\n\nRESULTS\nThe prevalence of bullying was 5% in the first survey and 6% in the second survey. Two per cent reported bullying experiences in both surveys, an indication of prolonged bullying. After adjustment for sex, age, and income, the odds ratio of incident cardiovascular disease for victims of prolonged bullying compared to non-bullied employees was 2.3 (95% CI 1.2 to 4.6). A further adjustment for overweight at baseline attenuated the odds ratio to 1.6 (95% CI 0.8 to 3.5). The association between prolonged bullying and incident depression was significant, even after these adjustments (odds ratio 4.2, 95% CI 2.0 to 8.6).\n\n\nCONCLUSIONS\nA strong association between workplace bullying and subsequent depression suggests that bullying is an aetiological factor for mental health problems. The victims of bullying also seem to be at greater risk of cardiovascular disease, but this risk may partly be attributable to overweight.", "title": "" }, { "docid": "e13dcab3abbd1abf159ed87ba67dc490", "text": "A virtual keyboard takes a large portion of precious screen real estate. We have investigated whether an invisible keyboard is a feasible design option, how to support it, and how well it performs. Our study showed users could correctly recall relative key positions even when keys were invisible, although with greater absolute errors and overlaps between neighboring keys. Our research also showed adapting the spatial model in decoding improved the invisible keyboard performance. This method increased the input speed by 11.5% over simply hiding the keyboard and using the default spatial model. Our 3-day multi-session user study showed typing on an invisible keyboard could reach a practical level of performance after only a few sessions of practice: the input speed increased from 31.3 WPM to 37.9 WPM after 20 - 25 minutes practice on each day in 3 days, approaching that of a regular visible keyboard (41.6 WPM). Overall, our investigation shows an invisible keyboard with adapted spatial model is a practical and promising interface option for the mobile text entry systems.", "title": "" }, { "docid": "553dc62182acef2b7ef226d6c951229b", "text": "The key intent of this work is to present a comprehensive comparative literature survey of the state-of-art in software agent-based computing technology and its incorporation within the modelling and simulation domain. 
The original contribution of this survey is two-fold: (1) Present a concise characterization of almost the entire spectrum of agent-based modelling and simulation tools, thereby highlighting the salient features, merits, and shortcomings of such multi-faceted application software; this article covers eighty five agent-based toolkits that may assist the system designers and developers with common tasks, such as constructing agent-based models and portraying the real-time simulation outputs in tabular/graphical formats and visual recordings. (2) Provide a usable reference that aids engineers, researchers, learners and academicians in readily selecting an appropriate agent-based modelling and simulation toolkit for designing and developing their system models and prototypes, cognizant of both their expertise and those requirements of their application domain. In a nutshell, a significant synthesis of Agent Based Modelling and Simulation (ABMS) resources has been performed in this review that stimulates further investigation into this topic. © 2017 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "69ddedba98e93523f698529716cf2569", "text": "A fast and scalable graph processing method becomes increasingly important as graphs become popular in a wide range of applications and their sizes are growing rapidly. Most of distributed graph processing methods require a lot of machines equipped with a total of thousands of CPU cores and a few terabyte main memory for handling billion-scale graphs. Meanwhile, GPUs could be a promising direction toward fast processing of large-scale graphs by exploiting thousands of GPU cores. All of the existing methods using GPUs, however, fail to process large-scale graphs that do not fit in main memory of a single machine. Here, we propose a fast and scalable graph processing method GTS that handles even RMAT32 (64 billion edges) very efficiently only by using a single machine. The proposed method stores graphs in PCI-E SSDs and executes a graph algorithm using thousands of GPU cores while streaming topology data of graphs to GPUs via PCI-E interface. GTS is fast due to no communication overhead and scalable due to no data duplication from graph partitioning among machines. Through extensive experiments, we show that GTS consistently and significantly outperforms the major distributed graph processing methods, GraphX, Giraph, and PowerGraph, and the state-of-the-art GPU-based method TOTEM.", "title": "" }, { "docid": "b03b34dc9708693f06ee4786c48ce9b5", "text": "Mobile Cloud Computing (MCC) enables smartphones to offload compute-intensive codes and data to clouds or cloudlets for energy conservation. Thus, MCC liberates smartphones from battery shortage and embraces more versatile mobile applications. Most pioneering MCC research work requires a consistent network performance for offloading. However, such consistency is challenged by frequent mobile user movements and unstable network quality, thereby resulting in a suboptimal offloading decision. To embrace network inconsistency, we propose ENDA, a three-tier architecture that leverages user track prediction, realtime network performance and server loads to optimize offloading decisions. On cloud tier, we first design a greedy searching algorithm to predict user track using historical user traces stored in database servers. We then design a cloud-enabled Wi-Fi access point (AP) selection scheme to find the most energy efficient AP for smartphone offloading. 
We evaluate the performance of ENDA through simulations under a real-world scenario. The results demonstrate that ENDA can generate offloading decisions with optimized energy efficiency, desirable response time, and potential adaptability to a variety of scenarios. ENDA outperforms existing offloading techniques that do not consider user mobility and server workload balance management.", "title": "" }, { "docid": "03a6656158a24606ee4ad6be0592e850", "text": "It is well known that earthquakes are a regional event, strongly controlled by local geological structures and circumstances. Reducing the research area can reduce the influence of other irrelevant seismotectonics. A new sub regiondividing scheme, considering the seismotectonics influence, was applied for the artificial neural network (ANN) earthquake prediction model in the northeast seismic region of China (NSRC). The improved set of input parameters and prediction time duration are also discussed in this work. The new dividing scheme improved the prediction accuracy for different prediction time frames. Three different research regions were analyzed as an earthquake data source for the ANN model under different prediction time duration frames. The results show: (1) dividing the research region into smaller subregions can improve the prediction accuracies in NSRC, (2) larger research regions need shorter prediction durations to obtain better performance, (3) different areas have different sets of input parameters in NSRC, and (4) the dividing scheme, considering the seismotectonics frame of the region, yields better results.", "title": "" } ]
scidocsrr
5505552a7dfff96dc09c95ab89a50749
Malware classification using self organising feature maps and machine activity data
[ { "docid": "0952701dd63326f8a78eb5bc9a62223f", "text": "The self-organizing map (SOM) is an automatic data-analysis method. It is widely applied to clustering problems and data exploration in industry, finance, natural sciences, and linguistics. The most extensive applications, exemplified in this paper, can be found in the management of massive textual databases and in bioinformatics. The SOM is related to the classical vector quantization (VQ), which is used extensively in digital signal processing and transmission. Like in VQ, the SOM represents a distribution of input data items using a finite set of models. In the SOM, however, these models are automatically associated with the nodes of a regular (usually two-dimensional) grid in an orderly fashion such that more similar models become automatically associated with nodes that are adjacent in the grid, whereas less similar models are situated farther away from each other in the grid. This organization, a kind of similarity diagram of the models, makes it possible to obtain an insight into the topographic relationships of data, especially of high-dimensional data items. If the data items belong to certain predetermined classes, the models (and the nodes) can be calibrated according to these classes. An unknown input item is then classified according to that node, the model of which is most similar with it in some metric used in the construction of the SOM. A new finding introduced in this paper is that an input item can even more accurately be represented by a linear mixture of a few best-matching models. This becomes possible by a least-squares fitting procedure where the coefficients in the linear mixture of models are constrained to nonnegative values.", "title": "" }, { "docid": "cb2d42347e676950bef013b19c8eef70", "text": "One of the major and serious threats on the Internet today is malicious software, often referred to as a malware. The malwares being designed by attackers are polymorphic and metamorphic which have the ability to change their code as they propagate. Moreover, the diversity and volume of their variants severely undermine the effectiveness of traditional defenses which typically use signature based techniques and are unable to detect the previously unknown malicious executables. The variants of malware families share typical behavioral patterns reflecting their origin and purpose. The behavioral patterns obtained either statically or dynamically can be exploited to detect and classify unknown malwares into their known families using machine learning techniques. This survey paper provides an overview of techniques for analyzing and classifying the malwares.", "title": "" }, { "docid": "f1ce50e0b787c1d10af44252b3a7e656", "text": "This paper proposes a scalable approach for distinguishing malicious files from clean files by investigating the behavioural features using logs of various API calls. We also propose, as an alternative to the traditional method of manually identifying malware files, an automated classification system using runtime features of malware files. For both projects, we use an automated tool running in a virtual environment to extract API call features from executables and apply pattern recognition algorithms and statistical methods to differentiate between files. Our experimental results, based on a dataset of 1368 malware and 456 cleanware files, provide an accuracy of over 97% in distinguishing malware from cleanware. 
Our techniques provide a similar accuracy for classifying malware into families. In both cases, our results outperform comparable previously published techniques.", "title": "" } ]
[ { "docid": "bd44d77e255837497d5026e87a46548d", "text": "Social media technologies let people connect by creating and sharing content. We examine the use of Twitter by famous people to conceptualize celebrity as a practice. On Twitter, celebrity is practiced through the appearance and performance of ‘backstage’ access. Celebrity practitioners reveal what appears to be personal information to create a sense of intimacy between participant and follower, publicly acknowledge fans, and use language and cultural references to create affiliations with followers. Interactions with other celebrity practitioners and personalities give the impression of candid, uncensored looks at the people behind the personas. But the indeterminate ‘authenticity’ of these performances appeals to some audiences, who enjoy the game playing intrinsic to gossip consumption. While celebrity practice is theoretically open to all, it is not an equalizer or democratizing discourse. Indeed, in order to successfully practice celebrity, fans must recognize the power differentials intrinsic to the relationship.", "title": "" }, { "docid": "be009b972c794d01061c4ebdb38cc720", "text": "The existing efforts in computer assisted semen analysis have been focused on high speed imaging and automated image analysis of sperm motility. This results in a large amount of data, and it is extremely challenging for both clinical scientists and researchers to interpret, compare and correlate the multidimensional and time-varying measurements captured from video data. In this work, we use glyphs to encode a collection of numerical measurements taken at a regular interval and to summarize spatio-temporal motion characteristics using static visual representations. The design of the glyphs addresses the needs for (a) encoding some 20 variables using separable visual channels, (b) supporting scientific observation of the interrelationships between different measurements and comparison between different sperm cells and their flagella, and (c) facilitating the learning of the encoding scheme by making use of appropriate visual abstractions and metaphors. As a case study, we focus this work on video visualization for computer-aided semen analysis, which has a broad impact on both biological sciences and medical healthcare. We demonstrate that glyph-based visualization can serve as a means of external memorization of video data as well as an overview of a large set of spatiotemporal measurements. It enables domain scientists to make scientific observation in a cost-effective manner by reducing the burden of viewing videos repeatedly, while providing them with a new visual representation for conveying semen statistics.", "title": "" }, { "docid": "ec0bc46d3ea048bb6f6ff44b135ca914", "text": "In what ways should we include future humanoid robots, and other kinds of artificial agents, in our moral universe? We consider the Organic view, which maintains that artificial humanoid agents, based on current computational technologies, could not count as full-blooded moral agents, nor as appropriate targets of intrinsic moral concern. On this view, artificial humanoids lack certain key properties of biological organisms, which preclude them from having full moral status. Computationally controlled systems, however advanced in their cognitive or informational capacities, are, it is proposed, unlikely to possess sentience and hence will fail to be able to exercise the kind of empathic rationality that is a prerequisite for being a moral agent. 
The organic view also argues that sentience and teleology require biologically based forms of self-organization and autonomous self-maintenance. The organic view may not be correct, but at least it needs to be taken seriously in the future development of the field of Machine Ethics.", "title": "" }, { "docid": "5207b424fcaab6ed130ccf85008f1d46", "text": "We describe a component of a document analysis system for constructing ontologies for domain-specific web tables imported into Excel. This component automates extraction of the Wang Notation for the column header of a table. Using column-header specific rules for XY cutting we convert the geometric structure of the column header to a linear string denoting cell attributes and directions of cuts. The string representation is parsed by a context-free grammar and the parse tree is further processed to produce an abstract data-type representation (the Wang notation tree) of each column category. Experiments were carried out to evaluate this scheme on the original and edited column headers of Excel tables drawn from a collection of 200 used in our earlier work. The transformed headers were obtained by editing the original column headers to conform to the format targeted by our grammar. Forty-four original headers and their reformatted versions were submitted as input to our software system. Our grammar was able to parse and the extract Wang notation tree for all the edited headers, but for only four of the original headers. We suggest extensions to our table grammar that would enable processing a larger fraction of headers without manual editing.", "title": "" }, { "docid": "ea1b0f4e82ac9ad8593c5e4ba1567a59", "text": "This paper describes an emerging shared repository of large-text resources for creating word vectors, including pre-processed corpora and pre-trained vectors for a range of frameworks and configurations. This will facilitate reuse, rapid experimentation, and replicability of results.", "title": "" }, { "docid": "ae221a05368d54bbfefc2f471161962a", "text": "This paper reports on the evaluation of adaptive cruise control (ACC) from a psychological perspective. It was anticipated that ACC would have an effect upon the psychology of driving, i.e. make the driver feel like they have less control, reduce the level of trust in the vehicle, make drivers less situationally aware, but workload might be reduced and driving might be less stressful. Drivers were asked to drive in a driving simulator under manual and ACC conditions. Analysis of variance techniques were used to determine the effects of workload (i.e. amount of traffic) and feedback (i.e. degree of information from the ACC system) on the psychological variables measured (i.e. locus of control, trust, workload, stress, mental models and situation awareness). The results showed that: locus of control and trust were unaffected by ACC, whereas situation awareness, workload and stress were reduced by ACC. Ways of improving situation awareness could include cues to help the driver predict vehicle trajectory and identify conflicts.", "title": "" }, { "docid": "7cd13c840dbdf96951e74294f740553c", "text": "Interactions such as double negation in sentences and scene interactions in images are common forms of complex dependencies captured by state-of-the-art machine learning models. 
We propose Mahé, a novel approach to provide Model-agnostic hierarchical éxplanations of how powerful machine learning models, such as deep neural networks, capture these interactions as either dependent on or free of the context of data instances. Specifically, Mahé provides context-dependent explanations by a novel local interpretation algorithm that effectively captures any-order interactions, and obtains context-free explanations through generalizing contextdependent interactions to explain global behaviors. Experimental results show that Mahé obtains improved local interaction interpretations over state-of-the-art methods and successfully provides explanations of interactions that are contextfree.", "title": "" }, { "docid": "8510bcbee74c99c39a5220d54ebf4d97", "text": "We propose a novel algorithm to detect visual saliency from video signals by combining both spatial and temporal information and statistical uncertainty measures. The main novelty of the proposed method is twofold. First, separate spatial and temporal saliency maps are generated, where the computation of temporal saliency incorporates a recent psychological study of human visual speed perception. Second, the spatial and temporal saliency maps are merged into one using a spatiotemporally adaptive entropy-based uncertainty weighting approach. The spatial uncertainty weighing incorporates the characteristics of proximity and continuity of spatial saliency, while the temporal uncertainty weighting takes into account the variations of background motion and local contrast. Experimental results show that the proposed spatiotemporal uncertainty weighting algorithm significantly outperforms state-of-the-art video saliency detection models.", "title": "" }, { "docid": "74d6c2fff4b67d05871ca0debbc4ec15", "text": "There is great interest in developing rechargeable lithium batteries with higher energy capacity and longer cycle life for applications in portable electronic devices, electric vehicles and implantable medical devices. Silicon is an attractive anode material for lithium batteries because it has a low discharge potential and the highest known theoretical charge capacity (4,200 mAh g(-1); ref. 2). Although this is more than ten times higher than existing graphite anodes and much larger than various nitride and oxide materials, silicon anodes have limited applications because silicon's volume changes by 400% upon insertion and extraction of lithium which results in pulverization and capacity fading. Here, we show that silicon nanowire battery electrodes circumvent these issues as they can accommodate large strain without pulverization, provide good electronic contact and conduction, and display short lithium insertion distances. We achieved the theoretical charge capacity for silicon anodes and maintained a discharge capacity close to 75% of this maximum, with little fading during cycling.", "title": "" }, { "docid": "0966aa29291705b44a338692fed9fffc", "text": "Code-Mixing (CM) is defined as the embedding of linguistic units such as phrases, words, and morphemes of one language into an utterance of another language. CM is a natural phenomenon observed in many multilingual societies. It helps in speeding-up communication and allows wider variety of expression due to which it has become a popular mode of communication in social media forums like Facebook and Twitter. 
However, current Question Answering (QA) research and systems only support expressing a question in a single language which is an unrealistic and hard proposition especially for certain domains like health and technology. In this paper, we take the first step towards the development of a full-fledged QA system in CM language which is building a Question Classification (QC) system. The QC system analyzes the user question and infers the expected Answer Type (AType). The AType helps in locating and verifying the answer as it imposes certain type-specific constraints. In this paper, we present our initial efforts towards building a full-fledged QA system for CM language. We learn a basic Support Vector Machine (SVM) based QC system for English-Hindi CM questions. Due to the inherent complexities involved in processing CM language and also the unavailability of language processing resources such POS taggers, Chunkers, Parsers, we design our current system using only word-level resources such as language identification, transliteration and lexical translation. To reduce data sparsity and leverage resources available in a resource-rich language, in stead of extracting features directly from the original CM words, we translate them commonly into English and then perform featurization. We created an evaluation dataset for this task and our system achieves an accuracy of 63% and 45% in coarse-grained and fine-grained categories of the question taxanomy. The idea of translating features into English indeed helps in improving accuracy over the unigram baseline.", "title": "" }, { "docid": "7b548e0e1e02e3a3150d0fac19d6f6fd", "text": "The paper presents a new torque-controlled lightweight robot for medical procedures developed at the Institute of Robotics and Mechatronics of the German Aerospace Center. Based on the experiences in lightweight robotics and anthropomorphic robotic hands, a small robot arm with 7 axis and torque-controlled joints tailored to surgical procedures has been designed. With an optimized anthropomorphic kinematics, integrated multi-modal sensors and flexible robot control architecture, the first prototype KINEMEDIC and the new generation MIRO, enhanced for endoscopic surgery, can easily be adapted to a wide range of different medical procedures and scenarios by the use of specialized instruments and compiling workflows within the robot control. With the options of both, Cartesian impedance and position control, MIRO is suited for tele-manipulation, shared autonomy and completely autonomous procedures. This paper focuses on system and hardware design of the robot, supplemented with a brief description on new specific control methods for the MIRO robot.", "title": "" }, { "docid": "b9c821ae8ba7f11d64aba9ff9f9cd960", "text": "From last 5 decades, we are scaling down the CMOS devices to achieve the better performance in terms of speed, power dissipation, size and reliability. Our focus is to make the general use device like computer more compact in terms of size, better speed, less power consumption and so we are moving towards the new technology. That can be done by making memories compact and faster and so the scaling CMOS is done to attain high speed and decrease size of memory i.e. SRAM. Due to scaling of device we are facing new challenges day by day like oxide thickness fluctuation, intrinsic parameter fluctuation. Working on low threshold voltage and leakage energy also became main concern. SRAM (Static Random Access Memory) is memory used to store data. 
The comparison of different SRAM cell on the basis of different parameter is done. 6T, 8T and 9T SRAM cell are compared on basis of followings:1) Read delay, 2)Write delay, 3)Power dissipation.The technology used to implement the 6T (T stands for transistor), 8T and 9T SRAM is 90 nm technology and the software used is ORCAD PSPICE. Schematics of these SRAM have been implemented using ORCAD CAPTURE and analysis is done using PSPICE A/D tool of ORCAD PSPICE.", "title": "" }, { "docid": "1a10e38cfc5cad20c64709c59053ffad", "text": "Corporate and product brands are increasingly accepted as valuable intangible assets of organisations, evidence of which is apparent in the reported fi nancial value that strong brands fetch when traded in the mergers and acquisitions markets. However, while much attention is paid to conceptualising brand equity, less is paid to how brands should be managed and delivered in order to create and safeguard brand equity. In this article we develop a conceptual model of corporate brand management for creating and safeguarding brand equity. We argue that while legal protection of the brand is important, by itself it is insuffi cient to protect brand equity in the long term. We suggest that brand management ought to play an important role in safeguarding brand equity and propose a three-stage conceptual model for building and sustaining brand equity comprising: (1) adopting a brandorientation mindset, (2) developing internal branding capabilities, and (3) consistent delivery of the brand. We put forward propositions, which, taken together, form a theory of brand management for building and safeguarding brand equity. We illustrate the theory using 14 cases of award-winning service companies. Their use serves as a demonstration of how our model applies to brand management", "title": "" }, { "docid": "f8c4fd23f163c0a604569b5ecf4bdefd", "text": "The goal of interactive machine learning is to help scientists and engineers exploit more specialized data from within their deployed environment in less time, with greater accuracy and fewer costs. A basic introduction to the main components is provided here, untangling the many ideas that must be combined to produce practical interactive learning systems. This article also describes recent developments in machine learning that have significantly advanced the theoretical and practical foundations for the next generation of interactive tools.", "title": "" }, { "docid": "331df0bd161470558dd5f5061d2b1743", "text": "The work on large-scale graph analytics to date has largely focused on the study of static properties of graph snapshots. However, a static view of interactions between entities is often an oversimplification of several complex phenomena like the spread of epidemics, information diffusion, formation of online communities, and so on. Being able to find temporal interaction patterns, visualize the evolution of graph properties, or even simply compare them across time, adds significant value in reasoning over graphs. However, because of lack of underlying data management support, an analyst today has to manually navigate the added temporal complexity of dealing with large evolving graphs. In this paper, we present a system, called Historical Graph Store, that enables users to store large volumes of historical graph data and to express and run complex temporal graph analytical tasks against that data. 
It consists of two key components: a Temporal Graph Index (TGI), that compactly stores large volumes of historical graph evolution data in a partitioned and distributed fashion; it provides support for retrieving snapshots of the graph as of any timepoint in the past or evolution histories of individual nodes or neighborhoods; and a Spark-based Temporal Graph Analysis Framework (TAF), for expressing complex temporal analytical tasks and for executing them in an efficient and scalable manner. Our experiments demonstrate our system’s efficient storage, retrieval and analytics across a wide variety of queries on large volumes of historical graph data.", "title": "" }, { "docid": "0a5f1c3e15e9547c79cb030b821b12e3", "text": "Illumination preprocessing is an effective and efficient approach in handling lighting variations for face recognition. Despite much attention to face illumination preprocessing, there is seldom systemic comparative study on existing approaches that presents fascinating insights and conclusions in how to design better illumination preprocessing methods. To fill this vacancy, we provide a comparative study of 12 representative illumination preprocessing methods (HE, LT, GIC, DGD, LoG, SSR, GHP, SQI, LDCT, LTV, LN and TT) from two novel perspectives: (1) localization for holistic approach and (2) integration of large-scale and small-scale feature bands. Experiments on public face databases (YaleBExt, CMU-PIE, CAS-PEAL and FRGC V2.0) with illumination variations suggest that localization for holistic illumination preprocessing methods (HE, GIC, LTV and TT) further improves the performance. Integration of large-scale and small-scale feature bands for reflectance field estimation based illumination preprocessing approaches (SSR, GHP, SQI, LDCT, LTV and TT) is also found helpful for illumination-insensitive face recognition. & 2012 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "5575b83290bec9b0c5da065f30762eb5", "text": "The resource-based view can be positioned relative to at least three theoretical traditions: SCPbased theories of industry determinants of firm performance, neo-classical microeconomics, and evolutionary economics. In the 1991 article, only the first of these ways of positioning the resourcebased view is explored. This article briefly discusses some of the implications of positioning the resource-based view relative to these other two literatures; it also discusses some of the empirical implications of each of these different resource-based theories. © 2001 Elsevier Science Inc. All rights reserved.", "title": "" }, { "docid": "5ac35b82792de409fbf76a709b912373", "text": "Extraction of bone contours from radiographs plays an important role in disease diagnosis, preoperative planning, and treatment analysis. We present a fully automatic method to accurately segment the proximal femur in anteroposterior pelvic radiographs. A number of candidate positions are produced by a global search with a detector. Each is then refined using a statistical shape model together with local detectors for each model point. Both global and local models use Random Forest regression to vote for the optimal positions, leading to robust and accurate results. The performance of the system is evaluated using a set of 839 images of mixed quality. We show that the local search significantly outperforms a range of alternative matching techniques, and that the fully automated system is able to achieve a mean point-to-curve error of less than 0.9 mm for 99% of all 839 images. 
To the best of our knowledge, this is the most accurate automatic method for segmenting the proximal femur in radiographs yet reported.", "title": "" }, { "docid": "6d6390e51589f5258deeb420547dd63c", "text": "Solar and wind energy systems are omnipresent, freely available, environmental friendly, and they are considered as promising power generating sources due to their availability and topological advantages for local power generations. Hybrid solar–wind energy systems, uses two renewable energy sources, allow improving the system efficiency and power reliability and reduce the energy storage requirements for stand-alone applications. The hybrid solar–wind systems are becoming popular in remote area power generation applications due to advancements in renewable energy technologies and substantial rise in prices of petroleum products. This paper is to review the current state of the simulation, optimization and control technologies for the stand-alone hybrid solar–wind energy systems with battery storage. It is found that continued research and development effort in this area is still needed for improving the systems’ performance, establishing techniques for accurately predicting their output and reliably integrating them with other renewable or conventional power generation sources. 2009 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "3faad0857fb0e355c7846d52bd2f5e8c", "text": "The issue of cultural universality of waist-to-hip ratio (WHR) attractiveness in women is currently under debate. We tested men's preferences for female WHR in traditional society of Tsimane'(Native Amazonians) of the Bolivian rainforest (N = 66). Previous studies showed preferences for high WHR in traditional populations, but they did not control for the women's body mass.We used a method of stimulus creation that enabled us to overcome this problem. We found that WHR lower than the average WHR in the population is preferred independent of cultural conditions. Our participants preferred the silhouettes of low WHR, but high body mass index (BMI), which might suggest that previous results could be an artifact related to employed stimuli. We found also that preferences for female BMI are changeable and depend on environmental conditions and probably acculturation (distance from the city). Interestingly, the Tsimane' men did not associate female WHR with age, health, physical strength or fertility. This suggests that men do not have to be aware of the benefits associated with certain body proportions - an issue that requires further investigation.", "title": "" } ]
scidocsrr
494da992d658dd3ffcc1528a55292256
Big data in tourism industry
[ { "docid": "d813c010b5c70b11912ada93f0e3b742", "text": "The rapid development of technologies introduces smartness to all organisations and communities. The Smart Tourism Destinations (STD) concept emerges from the development of Smart Cities. With technology being embedded on all organisations and entities, destinations will exploit synergies between ubiquitous sensing technology and their social components to support the enrichment of tourist experiences. By applying smartness concept to address travellers’ needs before, during and after their trip, destinations could increase their competitiveness level. This paper aims to take advantage from the development of Smart Cities by conceptualising framework for Smart Tourism Destinations through exploring tourism applications in destination and addressing both opportunities and challenges it possessed.", "title": "" } ]
[ { "docid": "8508162ac44f56aaaa9c521e6628b7b2", "text": "Pervasive or ubiquitous computing was developed thanks to the technological evolution of embedded systems and computer communication means. Ubiquitous computing has given birth to the concept of smart spaces that facilitate our daily life and increase our comfort where devices provide proactively adpated services. In spite of the significant previous works done in this domain, there still a lot of work and enhancement to do in particular the taking into account of current user's context when providing adaptable services. In this paper we propose an approach for context-aware services adaptation for a smart living room using two machine learning methods.", "title": "" }, { "docid": "ba7cb71cf07765f915d548f2a01e7b98", "text": "Existing data storage systems offer a wide range of functionalities to accommodate an equally diverse range of applications. However, new classes of applications have emerged, e.g., blockchain and collaborative analytics, featuring data versioning, fork semantics, tamper-evidence or any combination thereof. They present new opportunities for storage systems to efficiently support such applications by embedding the above requirements into the storage. In this paper, we present ForkBase, a storage engine designed for blockchain and forkable applications. By integrating core application properties into the storage, ForkBase not only delivers high performance but also reduces development effort. The storage manages multiversion data and supports two variants of fork semantics which enable different fork worklflows. ForkBase is fast and space efficient, due to a novel index class that supports efficient queries as well as effective detection of duplicate content across data objects, branches and versions. We demonstrate ForkBase’s performance using three applications: a blockchain platform, a wiki engine and a collaborative analytics application. We conduct extensive experimental evaluation against respective state-of-the-art solutions. The results show that ForkBase achieves superior performance while significantly lowering the development effort. PVLDB Reference Format: Sheng Wang, Tien Tuan Anh Dinh, Qian Lin, Zhongle Xie, Meihui Zhang, Qingchao Cai, Gang Chen, Beng Chin Ooi, Pingcheng Ruan. ForkBase: An Efficient Storage Engine for Blockchain and Forkable Applications. PVLDB, 11(10): 1137-1150, 2018. DOI: https://doi.org/10.14778/3231751.3231762", "title": "" }, { "docid": "d7ea7f669ada1ae6cb52ad33ab150837", "text": "Description Given an undirected graph G = ( V, E ), a clique S is a subset of V such that for any two elements u, v ∈ S, ( u, v ) ∈ E. Using the notation ES to represent the subset of edges which have both endpoints in clique S, the induced graph GS = ( S, ES ) is complete. Finding the largest clique in a graph is an NP-hard problem, called the maximum clique problem (MCP). Cliques are intimately related to vertex covers and independent sets. Given a graph G, and defining E* to be the complement of E, S is a maximum independent set in the complementary graph G* = ( V, E* ) if and only if S is a maximum clique in G. It follows that V – S is a minimum vertex cover in G*. 
There is a separate weighted form of MCP that we will not consider further here.", "title": "" }, { "docid": "9ba3fb8585c674003494c6c17abe9563", "text": "s grammatical structure from all irrelevant contexts, from its", "title": "" }, { "docid": "577bdd2d53ddac7d59b7e1f8655bcecb", "text": "Thoughtful leaders increasingly recognize that we are not only failing to solve the persistent problems we face, but are in fact causing them. System dynamics is designed to help avoid such policy resistance and identify high-leverage policies for sustained improvement. What does it take to be an effective systems thinker, and to teach system dynamics fruitfully? Understanding complex systems requires mastery of concepts such as feedback, stocks and flows, time delays, and nonlinearity. Research shows that these concepts are highly counterintuitive and poorly understood. It also shows how they can be taught and learned. Doing so requires the use of formal models and simulations to test our mental models and develop our intuition about complex systems. Yet, though essential, these concepts and tools are not sufficient. Becoming an effective systems thinker also requires the rigorous and disciplined use of scientific inquiry skills so that we can uncover our hidden assumptions and biases. It requires respect and empathy for others and other viewpoints. Most important, and most difficult to learn, systems thinking requires understanding that all models are wrong and humility about the limitations of our knowledge. Such humility is essential in creating an environment in which we can learn about the complex systems in which we are embedded and work effectively to create the world we truly desire. The paper is based on the talk the author delivered at the 2002 International System Dynamics Conference upon presentation of the Jay W. Forrester Award. Copyright  2002 John Wiley & Sons, Ltd.", "title": "" }, { "docid": "bc018ef7cbcf7fc032fe8556016d08b1", "text": "This paper presents a simple, efficient, yet robust approach, named joint-scale local binary pattern (JLBP), for texture classification. In the proposed approach, the joint-scale strategy is developed firstly, and the neighborhoods of different scales are fused together by a simple arithmetic operation. And then, the descriptor is extracted from the mutual integration of the local patches based on the conventional local binary pattern (LBP). The proposed scheme can not only describe the micro-textures of a local structure, but also the macro-textures of a larger area because of the joint of multiple scales. Further, motivated by the completed local binary pattern (CLBP) scheme, the completed JLBP (CJLBP) is presented to enhance its power. The proposed descriptor is evaluated in relation to other recent LBP-based patterns and non-LBP methods on popular benchmark texture databases, Outex, CURet and UIUC. Generally, the experimental results show that the new method performs better than the state-of-the-art techniques.", "title": "" }, { "docid": "60f9a34771b844228e1d8da363e89359", "text": "3-mercaptopyruvate sulfurtransferase (3-MST) was a novel hydrogen sulfide (H2S)-synthesizing enzyme that may be involved in cyanide degradation and in thiosulfate biosynthesis. Over recent years, considerable attention has been focused on the biochemistry and molecular biology of H2S-synthesizing enzyme. In contrast, there have been few concerted attempts to investigate the changes in the expression of the H2S-synthesizing enzymes with disease states. 
To investigate the changes of 3-MST after traumatic brain injury (TBI) and its possible role, mice TBI model was established by controlled cortical impact system, and the expression and cellular localization of 3-MST after TBI was investigated in the present study. Western blot analysis revealed that 3-MST was present in normal mice brain cortex. It gradually increased, reached a peak on the first day after TBI, and then reached a valley on the third day. Importantly, 3-MST was colocalized with neuron. In addition, Western blot detection showed that the first day post injury was also the autophagic peak indicated by the elevated expression of LC3. Importantly, immunohistochemistry analysis revealed that injury-induced expression of 3-MST was partly colabeled by LC3. However, there was no colocalization of 3-MST with propidium iodide (cell death marker) and LC3 positive cells were partly colocalized with propidium iodide. These data suggested that 3-MST was mainly located in living neurons and may be implicated in the autophagy of neuron and involved in the pathophysiology of brain after TBI.", "title": "" }, { "docid": "107436d5f38f3046ef28495a14cc5caf", "text": "There is a universal standard for facial beauty regardless of race, age, sex and other variables. Beautiful faces have ideal facial proportion. Ideal proportion is directly related to divine proportion, and that proportion is 1 to 1.618. All living organisms, including humans, are genetically encoded to develop to this proportion because there are extreme esthetic and physiologic benefits. The vast majority of us are not perfectly proportioned because of environmental factors. Establishment of a universal standard for facial beauty will significantly simplify the diagnosis and treatment of facial disharmonies and abnormalities. More important, treating to this standard will maximize facial esthetics, TMJ health, psychologic and physiologic health, fertility, and quality of life.", "title": "" }, { "docid": "43a24625e781e8cb6824f61d59e9333d", "text": "In this work, we present a new software environment for the comparative evaluation of algorithms for grasping and dexterous manipulation. The key aspect in its development is to provide a tool that allows the reproduction of well-defined experiments in real-life scenarios in every laboratory and, hence, benchmarks that pave the way for objective comparison and competition in the field of grasping. In order to achieve this, experiments are performed on a sound open-source software platform with an extendable structure in order to be able to include a wider range of benchmarks defined by robotics researchers. The environment is integrated into the OpenGRASP toolkit that is built upon the OpenRAVE project and includes grasp-specific extensions and a tool for the creation/integration of new robot models. Currently, benchmarks for grasp and motion planningare included as case studies, as well as a library of domestic everyday objects models, and a real-life scenario that features a humanoid robot acting in a kitchen.", "title": "" }, { "docid": "17c1d82f041ef2390063850e9facfbb0", "text": "Most of the recent progresses on visual question answering are based on recurrent neural networks (RNNs) with attention. Despite the success, these models are often timeconsuming and having difficulties in modeling long range dependencies due to the sequential nature of RNNs. 
We propose a new architecture, Positional Self-Attention with Coattention (PSAC), which does not require RNNs for video question answering. Specifically, inspired by the success of self-attention in machine translation task, we propose a Positional Self-Attention to calculate the response at each position by attending to all positions within the same sequence, and then add representations of absolute positions. Therefore, PSAC can exploit the global dependencies of question and temporal information in the video, and make the process of question and video encoding executed in parallel. Furthermore, in addition to attending to the video features relevant to the given questions (i.e., video attention), we utilize the co-attention mechanism by simultaneously modeling “what words to listen to” (question attention). To the best of our knowledge, this is the first work of replacing RNNs with selfattention for the task of visual question answering. Experimental results of four tasks on the benchmark dataset show that our model significantly outperforms the state-of-the-art on three tasks and attains comparable result on the Count task. Our model requires less computation time and achieves better performance compared with the RNNs-based methods. Additional ablation study demonstrates the effect of each component of our proposed model.", "title": "" }, { "docid": "86353e0272a3d6fed220eaa85f95e8de", "text": "Large volumes of electronic health records, including free-text documents, are extensively generated within various sectors of healthcare. Medical concept annotation systems are designed to enrich these documents with key concepts in the domain using reference terminologies. Although there is a wide range of annotation systems, there is a lack of comparative analysis that enables thorough understanding of the effectiveness of both the concept extraction and concept recognition components of these systems, especially within the clinical domain. This paper analyses and evaluates four annotation systems (i.e., MetaMap, NCBO annotator, Ontoserver, and QuickUMLS) for the task of extracting medical concepts from clinical free-text documents. Empirical findings have shown that each annotator exhibits various levels of strengths in terms of overall precision or recall. The concept recognition component of each system, however, was found to be highly sensitive to the quality of the text spans output by the concept extraction component of the annotation system. The effects of these components on each other are quantified in such way as to provide evidence for an informed choice of an annotation system as well as avenues for future research.", "title": "" }, { "docid": "41e04cbe2ca692cb65f2909a11a4eb5b", "text": "Bitcoin’s core innovation is its solution to double-spending, called Nakamoto consensus. This mechanism provides a probabilistic guarantee that transactions will not be reversed once they are sufficiently deep in the blockchain, assuming an attacker controls a bounded fraction of mining power in the network. We show, however, that when miners are rational this guarantee can be undermined by a whale attack in which an attacker issues an off-theblockchain whale transaction with an anomalously large transaction fee in an effort to convince miners to fork the current chain. 
We carry out a game-theoretic analysis and simulation of this attack, and show conditions under which it yields an expected positive payoff for the attacker.", "title": "" }, { "docid": "63a8d0acbfb51977410632941c8b203d", "text": "Paper Indicator: early detection and measurement of ground-breaking research. In: Jeffery, Keith G; Dvořák, Jan (eds.): EInfrastructures for Research and Innovation: Linking Information Systems to Improve Scientific Knowledge Production: Proceedings of the 11th International Conference on Current Research Information Systems (June 6-9, 2012, Prague, Czech Republic). Pp. 295-304. ISBN 978-80-86742-33-5. Available from: www.eurocris.org.", "title": "" }, { "docid": "466f4ed7a59f9b922a8b87685d8f3a77", "text": "Ten cases of oral hairy leukoplakia (OHL) in HIV- negative patients are presented. Eight of the 10 patients were on steroid treatment for chronic obstructive pulmonary disease, 1 patient was on prednisone as part of a therapeutic regimen for gastrointestinal stromal tumor, and 1 patient did not have any history of immunosuppression. There were 5 men and 5 women, ages 32-79, with mean age being 61.8 years. Nine out of 10 lesions were located unilaterally on the tongue, whereas 1 lesion was located at the junction of the hard and soft palate. All lesions were described as painless, corrugated, nonremovable white plaques (leukoplakias). Histologic features were consistent with Epstein-Barr virus-associated hyperkeratosis suggestive of OHL, and confirmatory in situ hybridization was performed in all cases. Candida hyphae and spores were present in 8 cases. Pathologists should be aware of OHL presenting not only in HIV-positive and HIV-negative organ transplant recipients but also in patients receiving steroid treatment, and more important, certain histologic features should raise suspicion for such diagnosis without prior knowledge of immunosuppression.", "title": "" }, { "docid": "a8858713a7040ce6dd25706c9b72b45c", "text": "A new type of wearable button antenna for wireless local area network (WLAN) applications is proposed. The antenna is composed of a button with a diameter of circa 16 mm incorporating a patch on top of a dielectric disc. The button is located on top of a textile substrate and a conductive textile ground that are to be incorporated in clothing. The main characteristic feature of this antenna is that it shows two different types of radiation patterns, a monopole type pattern in the 2.4 GHz band for on-body communications and a broadside type pattern in the 5 GHz band for off-body communications. A very high efficiency of about 90% is obtained, which is much higher than similar full textile solutions in the literature. A prototype has been fabricated and measured. The effect of several real-life situations such as a tilted button and bending of the textile ground have been studied. Measurements agree very well with simulations.", "title": "" }, { "docid": "c9e87ff548ae938c1dbab1528cb550ac", "text": "Due to their many advantages over their hardwarebased counterparts, Software Defined Radios are becoming the new paradigm for radio and radar applications. In particular, Automatic Dependent Surveillance-Broadcast (ADS-B) is an emerging software defined radar technology, which has been already deployed in Europe and Australia. Deployment in the US is underway as part of the Next Generation Transportation Systems (NextGen). 
In spite of its several benefits, this technology has been widely criticized for being designed without security in mind, making it vulnerable to numerous attacks. Most approaches addressing this issue fail to adopt a holistic viewpoint, focusing only on part of the problem. In this paper, we propose a methodology that uses semantic technologies to address the security requirements definition from a systemic perspective. More specifically, knowledge engineering focused on misuse scenarios is applied for building customized resilient software defined radar applications, as well as classifying cyber attack severity according to measurable security metrics. We showcase our ideas using an ADS-B-related scenario developed to evaluate", "title": "" }, { "docid": "a494d6d9c8919ade3590ed7f6cf44451", "text": "Most algorithms commonly exploited for radar imaging are based on linear models that describe only direct scattering events from the targets in the investigated scene. This assumption is rarely verified in practical scenarios where the objects to be imaged interact with each other and with surrounding environment producing undesired multipath signals. These signals manifest in radar images as “ghosts\" that usually impair the reliable identification of the targets. The recent literature in the field is attempting to provide suitable techniques for multipath suppression from one side and from the other side is focusing on the exploitation of the additional information conveyed by multipath to improve target detection and localization. This work addresses the first problem with a specific focus on multipath ghosts caused by target-to-target interactions. In particular, the study is performed with regard to metallic scatterers by means of the linearized inverse scattering approach based on the physical optics (PO) approximation. A simple model is proposed in the case of point-like targets to gain insight into the ghosts problem so as to devise possible measurement and processing strategies for their mitigation. Finally, the effectiveness of these methods is assessed by reconstruction results obtained from full-wave synthetic data.", "title": "" }, { "docid": "4fb6b884b22962c6884bd94f8b76f6f2", "text": "This paper describes a novel motion estimation algorithm for floating base manipulators that utilizes low-cost inertial measurement units (IMUs) containing a three-axis gyroscope and a three-axis accelerometer. Four strap-down microelectromechanical system (MEMS) IMUs are mounted on each link to form a virtual IMU whose body's fixed frame is located at the center of the joint rotation. An extended Kalman filter (EKF) and a complementary filter are used to develop a virtual IMU by fusing together the output of four IMUs. The novelty of the proposed algorithm is that no forward kinematic model that requires data flow from previous joints is needed. The measured results obtained from the planar motion of a hydraulic arm show that the accuracy of the estimation of the joint angle is within ± 1 degree and that the root mean square error is less than 0.5 degree.", "title": "" }, { "docid": "dc64fa6178f46a561ef096fd2990ad3d", "text": "Forest fires cost millions of dollars in damages and claim many human lives every year. Apart from preventive measures, early detection and suppression of fires is the only way to minimize the damages and casualties. We present the design and evaluation of a wireless sensor network for early detection of forest fires. 
We first present the key aspects in modeling forest fires. We do this by analyzing the Fire Weather Index (FWI) System, and show how its different components can be used in designing efficient fire detection systems. The FWI System is one of the most comprehensive forest fire danger rating systems in North America, and it is backed by several decades of forestry research. The analysis of the FWI System could be of interest in its own right to researchers working in the sensor network area and to sensor manufacturers who can optimize the communication and sensing modules of their products to better fit forest fire detection systems. Then, we model the forest fire detection problem as a coverage problem in wireless sensor networks, and we present a distributed algorithm to solve it. In addition, we show how our algorithm can achieve various coverage degrees at different subareas of the forest, which can be used to provide unequal monitoring quality of forest zones. Unequal monitoring is important to protect residential and industrial neighborhoods close to forests. Finally, we present a simple data aggregation scheme based on the FWI System. This data aggregation scheme significantly prolongs the network lifetime, because it only delivers the data that is of interest to the application. We validate several aspects of our design using simulation.", "title": "" }, { "docid": "c94e5133c083193227b26a9fb35a1fbd", "text": "Modern computer vision algorithms typically require expensive data acquisition and accurate manual labeling. In this work, we instead leverage the recent progress in computer graphics to generate fully labeled, dynamic, and photo-realistic proxy virtual worlds. We propose an efficient real-to-virtual world cloning method, and validate our approach by building and publicly releasing a new video dataset, called \"Virtual KITTI\", automatically labeled with accurate ground truth for object detection, tracking, scene and instance segmentation, depth, and optical flow. We provide quantitative experimental evidence suggesting that (i) modern deep learning algorithms pre-trained on real data behave similarly in real and virtual worlds, and (ii) pre-training on virtual data improves performance. As the gap between real and virtual worlds is small, virtual worlds enable measuring the impact of various weather and imaging conditions on recognition performance, all other things being equal. We show these factors may affect drastically otherwise high-performing deep models for tracking.", "title": "" } ]
scidocsrr
f886049accf824a5599bb53774780dc1
How green is multipath TCP for mobile devices?
[ { "docid": "6e8b6f8d0d69d7fcdec560a536c5cd57", "text": "Networks have become multipath: mobile devices have multiple radio interfaces, datacenters have redundant paths and multihoming is the norm for big server farms. Meanwhile, TCP is still only single-path. Is it possible to extend TCP to enable it to support multiple paths for current applications on today’s Internet? The answer is positive. We carefully review the constraints—partly due to various types of middleboxes— that influenced the design of Multipath TCP and show how we handled them to achieve its deployability goals. We report our experience in implementing Multipath TCP in the Linux kernel and we evaluate its performance. Our measurements focus on the algorithms needed to efficiently use paths with different characteristics, notably send and receive buffer tuning and segment reordering. We also compare the performance of our implementation with regular TCP on web servers. Finally, we discuss the lessons learned from designing MPTCP.", "title": "" } ]
[ { "docid": "26541f268cb26b040226d7087ac1b890", "text": "Context: Continuous Delivery and Deployment (CD) practices aim to deliver software features more frequently and reliably. While some efforts have been made to study different aspects of CD practices, a little empirical work has been reported on the impact of CD on team structures, collaboration and team members' responsibilities. Goal: Our goal is to empirically investigate how Development (Dev) and Operations (Ops) teams are organized in software industry for adopting CD practices. Furthermore, we explore the potential impact of practicing CD on collaboration and team members' responsibilities. Method:We conducted a mixed-method empirical study, which collected data from 21 in-depth, semi-structured interviews in 19 organizations and a survey with 93 software practitioners. Results: There are four common types of team structures (i.e., (1) separate Dev and Ops teams with higher collaboration; (2) separate Dev and Ops teams with facilitator(s) in the middle; (3) small Ops team with more responsibilities for Dev team; (4) no visible Ops team) for organizing Dev and Ops teams to effectively initiate and adopt CD practices. Our study also provides insights into how software organizations actually improve collaboration among teams and team members for practicing CD. Furthermore, we highlight new responsibilities and skills (e.g., monitoring and logging skills), which are needed in this regard.", "title": "" }, { "docid": "2d4a4ffe5218a4db968357fc90a3b696", "text": "This paper documents the main results of studies that have been carried out, during a period of more than a decade, at University of Pisa in co-operation with other technical Italian institutions, about models of electrochemical batteries suitable for the use of the electrical engineer, in particular for the analysis of electrical systems with batteries. The problem of simulating electrochemical batteries by means of equivalent electric circuits is defined in a general way; then particular attention is then devoted to the problem of modeling of Lead–Acid batteries. For this kind of batteries general model structure is defined from which specific models can be inferred, having different degrees of complexity and simulation quality. In particular, the implementation of the third-order model, that shows a good compromise between complexity and precision, is developed in detail. The behavior of the proposed models is compared with results obtained with extensive lab tests on different types of lead–acid batteries.", "title": "" }, { "docid": "862d6e15fcf6768c0cff5e4a8fb2227c", "text": "The number of immune cells, especially dendritic cells and cytotoxic tumor infiltrating lymphocytes (TIL), particularly Th1 cells, CD8 T cells, and NK cells is associated with increased survival of cancer patients. Such antitumor cellular immune responses can be greatly enhanced by adoptive transfer of activated type 1 lymphocytes. Recently, adoptive cell therapy based on infusion of ex vivo expanded TILs has achieved substantial clinical success. Cytokine-induced killer (CIK) cells are a heterogeneous population of effector CD8 T cells with diverse TCR specificities, possessing non-MHC-restricted cytolytic activities against tumor cells. Preclinical studies of CIK cells in murine tumor models demonstrate significant antitumor effects against a number of hematopoietic and solid tumors. 
Clinical studies have confirmed benefit and safety of CIK cell-based therapy for patients with comparable malignancies. Enhancing the potency and specificity of CIK therapy via immunological and genetic engineering approaches and identifying robust biomarkers of response will significantly improve this therapy.", "title": "" }, { "docid": "dc92e3feb9ea6a20d73962c0905f623b", "text": "Software maintenance consumes around 70% of the software life cycle. Improving software maintainability could save software developers significant time and money. This paper examines whether the pattern of dependency injection significantly reduces dependencies of modules in a piece of software, therefore making the software more maintainable. This hypothesis is tested with 20 sets of open source projects from sourceforge.net, where each set contains one project that uses the pattern of dependency injection and one similar project that does not use the pattern. The extent of the dependency injection use in each project is measured by a new Number of DIs metric created specifically for this analysis. Maintainability is measured using coupling and cohesion metrics on each project, then performing statistical analysis on the acquired results. After completing the analysis, no correlation was evident between the use of dependency injection and coupling and cohesion numbers. However, a trend towards lower coupling numbers in projects with a dependency injection count of 10% or more was observed.", "title": "" }, { "docid": "86e69038661866e609d0eaf57d1e209b", "text": "The relationship of facial nerve (FN) and its branches with the retromandibular vein (RMV) has been described in adults, whereas there is no data in the literature regarding this relationship in fetuses. The study was conducted to evaluate the anatomic relationships of these structures on 61 hemi-faces of fetuses with a mean age of 26.5 ± 4.9 weeks with no visible facial abnormalities. The FN trunk was identified at its emergence at the stylomastoid foramen. It was traced till its ramification within the parotid gland. In 46 sides, FN trunk ramified before crossing RMV and ran lateral to it, while in 8 sides FN trunk ramified on the lateral aspect of the RMV. In 3 sides, FN trunk ramified after crossing the RMV at its medial aspect. In only 1 side, FN trunk trifurcated as superior, middle, and inferior divisions and RMV lied anterior to FN trunk, lateral to superior division, medial to middle and inferior divisions. In 2 sides, FN trunk bifurcated as superior and inferior divisions. Retromandibular vein was located anterior to FN trunk, medial to superior division, lateral to inferior division in both of them. In 1 side, RMV ran medial to almost all branches, except the cervical branch of FN. Variability in the relationship of FN and RMV in fetuses as presented in this study is thought to be crucial in surgical procedures particularly in early childhood.", "title": "" }, { "docid": "d0ad2b6a36dce62f650323cb5dd40bc9", "text": "If two hospitals are providing identical services in all respects, except for the brand name, why are customers willing to pay more for one hospital than the other? That is, the brand name is not just a name, but a name that contains value (brand equity). Brand equity is the value that the brand name endows to the product, such that consumers are willing to pay a premium price for products with the particular brand name. Accordingly, a company needs to manage its brand carefully so that its brand equity does not depreciate. 
Although measuring brand equity is important, managers have no brand equity index that is psychometrically robust and parsimonious enough for practice. Indeed, index construction is quite different from conventional scale development. Moreover, researchers might still be unaware of the potential appropriateness of formative indicators for operationalizing particular constructs. Toward this end, drawing on the brand equity literature and following the index construction procedure, this study creates a brand equity index for a hospital. The results reveal a parsimonious five-indicator brand equity index that can adequately capture the full domain of brand equity. This study also illustrates the differences between index construction and scale development.", "title": "" }, { "docid": "0da2d2c044387539fc0452ff376e33c0", "text": "Essential oils are widely used in pharmaceutical, sanitary, cosmetic, agriculture and food industries for their bactericidal, virucidal, fungicidal, antiparasitical and insecticidal properties. Their anticancer activity is well documented. Over a hundred essential oils from more than twenty plant families have been tested on more than twenty types of cancers in last past ten years. This review is focused on the activity of essential oils and their components on various types of cancers. For some of them the mechanisms involved in their anticancer activities have been carried out.", "title": "" }, { "docid": "c29a2429d6dd7bef7761daf96a29daaf", "text": "In this meta-analysis, we synthesized data from published journal articles that investigated viewers’ enjoyment of fright and violence. Given the limited research on this topic, this analysis was primarily a way of summarizing the current state of knowledge and developing directions for future research. The studies selected (a) examined frightening or violent media content; (b) used self-report measures of enjoyment or preference for such content (the dependent variable); and (c) included independent variables that were given theoretical consideration in the literature. The independent variables examined were negative affect and arousal during viewing, empathy, sensation seeking, aggressiveness, and the respondents’ gender and age. The analysis confirmed that male viewers, individuals lower in empathy, and those higher in sensation seeking and aggressiveness reported more enjoyment of fright and violence. Some support emerged for Zillmann’s (1980, 1996) model of suspense enjoyment. Overall, the results demonstrate the importance of considering how viewers interpret or appraise their reactions to fright and violence. However, the studies were so diverse in design and measurement methods that it was difficult to identify the underlying processes. Suggestions are proposed for future research that will move toward the integration of separate lines of inquiry in a unified approach to understanding entertainment. MEDIA PSYCHOLOGY, 7, 207–237 Copyright © 2005, Lawrence Erlbaum Associates, Inc.", "title": "" }, { "docid": "416f9184ae6b0c04803794b1ab2b8f50", "text": "Although hydrophilic small molecule drugs are widely used in the clinic, their rapid clearance, suboptimal biodistribution, low intracellular absorption and toxicity can limit their therapeutic efficacy. These drawbacks can potentially be overcome by loading the drug into delivery systems, particularly liposomes; however, low encapsulation efficiency usually results. 
Many strategies are available to improve both the drug encapsulation efficiency and delivery to the target site to reduce side effects. For encapsulation, passive and active strategies are available. Passive strategies encompass the proper selection of the composition of the formulation, zeta potential, particle size and preparation method. Moreover, many weak acids and bases, such as doxorubicin, can be actively loaded with high efficiency. It is highly desirable that once the drug is encapsulated, it should be released preferentially at the target site, resulting in an optimal therapeutic effect devoid of side effects. For this purpose, targeted and triggered delivery approaches are available. The rapidly increasing knowledge of the many overexpressed biochemical makers in pathological sites, reviewed herein, has enabled the development of liposomes decorated with ligands for cell-surface receptors and active delivery. Furthermore, many liposomal formulations have been designed to actively release their content in response to specific stimuli, such as a pH decrease, heat, external alternating magnetic field, ultrasound or light. More than half a century after the discovery of liposomes, some hydrophilic small molecule drugs loaded in liposomes with high encapsulation efficiency are available on the market. However, targeted liposomes or formulations able to deliver the drug after a stimulus are not yet a reality in the clinic and are still awaited.", "title": "" }, { "docid": "1ba6f0efdac239fa2cb32064bb743d29", "text": "This paper presents a new method for determining efficient spatial distributions of police patrol areas. This method employs a traditional maximal covering formulation and an innovative backup covering formulation to provide alternative optimal solutions to police decision makers, and to address the lack of objective quantitative methods for police area design in the literature or in practice. This research demonstrates that operations research methods can be used in police decision making, presents a new backup coverage model that is appropriate for patrol area design, and encourages the integration of geographic information systems and optimal solution procedures. The models and methods are tested with the police geography of Dallas, TX. The optimal solutions are compared with the existing police geography, showing substantial improvement in number of incidents covered as well as total distance traveled.", "title": "" }, { "docid": "b7f53aa4b1e68f05bee2205dd55b975a", "text": "We study the problem of off-policy evaluation (OPE) in reinforcement learning (RL), where the goal is to estimate the performance of a policy from the data generated by another policy(ies). In particular, we focus on the doubly robust (DR) estimators that consist of an importance sampling (IS) component and a performance model, and utilize the low (or zero) bias of IS and low variance of the model at the same time. Although the accuracy of the model has a huge impact on the overall performance of DR, most of the work on using the DR estimators in OPE has been focused on improving the IS part, and not much on how to learn the model. In this paper, we propose alternative DR estimators, called more robust doubly robust (MRDR), that learn the model parameter by minimizing the variance of the DR estimator. We first present a formulation for learning the DR model in RL. We then derive formulas for the variance of the DR estimator in both contextual bandits and RL, such that their gradients w.r.t. 
the model parameters can be estimated from the samples, and propose methods to efficiently minimize the variance. We prove that the MRDR estimators are strongly consistent and asymptotically optimal. Finally, we evaluate MRDR in bandits and RL benchmark problems, and compare its performance with the existing methods.", "title": "" }, { "docid": "3e94875b3229fc621ec90915414b9b22", "text": "Inflammation, endothelial dysfunction, and mineral bone disease are critical factors contributing to morbidity and mortality in hemodialysis (HD) patients. Physical exercise alleviates inflammation and increases bone density. Here, we investigated the effects of intradialytic aerobic cycling exercise on HD patients. Forty end-stage renal disease patients undergoing HD were randomly assigned to either an exercise or control group. The patients in the exercise group performed a cycling program consisting of a 5-minute warm-up, 20 minutes of cycling at the desired workload, and a 5-minute cool down during 3 HD sessions per week for 3 months. Biochemical markers, inflammatory cytokines, nutritional status, the serum endothelial progenitor cell (EPC) count, bone mineral density, and functional capacity were analyzed. After 3 months of exercise, the patients in the exercise group showed significant improvements in serum albumin levels, the body mass index, inflammatory cytokine levels, and the number of cells positive for CD133, CD34, and kinase insert domain-conjugating receptor. Compared with the exercise group, the patients in the control group showed a loss of bone density at the femoral neck and no increases in EPCs. The patients in the exercise group also had a significantly greater 6-minute walk distance after completing the exercise program. Furthermore, the number of EPCs significantly correlated with the 6-minute walk distance both before and after the 3-month program. Intradialytic aerobic cycling exercise programs can effectively alleviate inflammation and improve nutrition, bone mineral density, and exercise tolerance in HD patients.", "title": "" }, { "docid": "8eb6c74d678235a6fd4df755a133115e", "text": "We have demonstrated a 70-nm n-channel tunneling field-effect transistor (TFET) which has a subthreshold swing (SS) of 52.8 mV/dec at room temperature. It is the first experimental result that shows a sub-60-mV/dec SS in the silicon-based TFETs. Based on simulation results, the gate oxide and silicon-on-insulator layer thicknesses were scaled down to 2 and 70 nm, respectively. However, the ON/ OFF current ratio of the TFET was still lower than that of the MOSFET. In order to increase the on current further, the following approaches can be considered: reduction of effective gate oxide thickness, increase in the steepness of the gradient of the source to channel doping profile, and utilization of a lower bandgap channel material", "title": "" }, { "docid": "dffe5305558e10a0ceba499f3a01f4d8", "text": "A simple framework Probabilistic Multi-view Graph Embedding (PMvGE) is proposed for multi-view feature learning with many-to-many associations so that it generalizes various existing multi-view methods. PMvGE is a probabilistic model for predicting new associations via graph embedding of the nodes of data vectors with links of their associations. Multi-view data vectors with many-to-many associations are transformed by neural networks to feature vectors in a shared space, and the probability of new association between two data vectors is modeled by the inner product of their feature vectors. 
While existing multi-view feature learning techniques can treat only either of many-to-many association or non-linear transformation, PMvGE can treat both simultaneously. By combining Mercer’s theorem and the universal approximation theorem, we prove that PMvGE learns a wide class of similarity measures across views. Our likelihood-based estimator enables efficient computation of non-linear transformations of data vectors in large-scale datasets by minibatch SGD, and numerical experiments illustrate that PMvGE outperforms existing multi-view methods.", "title": "" }, { "docid": "41cfa26891e28a76c1d4508ab7b60dfb", "text": "This paper analyses the digital simulation of a buck converter to emulate the photovoltaic (PV) system, with a focus on fuzzy logic control of the buck converter. A PV emulator is a DC-DC converter (a buck converter in the present case) having the same electrical characteristics as a PV panel. The emulator helps in the real analysis of a PV system in an environment where using actual PV systems can produce inconsistent results due to variation in weather conditions. The paper describes the application of fuzzy algorithms to the control of dynamic processes. The complete system is modelled in the MATLAB® Simulink SimPowerSystem software package. The results obtained from the simulation studies are presented, and the steady-state and dynamic stability of the PV emulator system is discussed.", "title": "" }, { "docid": "81d50714ba7a53d908f6b3e3030499c2", "text": "Bitcoin is widely regarded as the first broadly successful e-cash system. An oft-cited concern, though, is that mining Bitcoins wastes computational resources. Indeed, Bitcoin's underlying mining mechanism, which we call a scratch-off puzzle (SOP), involves continuously attempting to solve computational puzzles that have no intrinsic utility. We propose a modification to Bitcoin that repurposes its mining resources to achieve a more broadly useful goal: distributed storage of archival data. We call our new scheme Permacoin. Unlike Bitcoin and its proposed alternatives, Permacoin requires clients to invest not just computational resources, but also storage. Our scheme involves an alternative scratch-off puzzle for Bitcoin based on Proofs-of-Retrievability (PORs). Successfully minting money with this SOP requires local, random access to a copy of a file. Given the competition among mining clients in Bitcoin, this modified SOP gives rise to highly decentralized file storage, thus reducing the overall waste of Bitcoin. Using a model of rational economic agents, we show that our modified SOP preserves the essential properties of the original Bitcoin puzzle. We also provide parameterizations and calculations based on realistic hardware constraints to demonstrate the practicality of Permacoin as a whole.", "title": "" }, { "docid": "424fe4ffd8077d390ddee2a05ff5dcea", "text": "A re-emergence of research on EEG-neurofeedback followed controlled evidence of clinical benefits and validation of cognitive/affective gains in healthy participants, including correlations in support of feedback learning mediating outcome. Controlled studies with healthy and elderly participants, which have increased exponentially, are reviewed, including protocols from the clinic: sensory-motor rhythm, beta1 and alpha/theta ratios, down-training theta maxima, and from neuroscience: upper-alpha, theta, gamma, alpha desynchronisation. 
Outcome gains include sustained attention, orienting and executive attention, the P300b, memory, spatial rotation, RT, complex psychomotor skills, implicit procedural memory, recognition memory, perceptual binding, intelligence, mood and well-being. Twenty-three of the controlled studies report neurofeedback learning indices along with beneficial outcomes, of which eight report correlations in support of a meditation link, results which will be supplemented by further creativity and the performing arts evidence in Part II. Validity evidence from optimal performance studies represents an advance for the neurofeedback field demonstrating that cross fertilisation between clinical and optimal performance domains will be fruitful. Theoretical and methodological issues are outlined further in Part III.", "title": "" }, { "docid": "0745755e5347c370cdfbeca44dc6d288", "text": "For many decades correlation and power spectrum have been primary tools for digital signal processing applications in the biomedical area. The information contained in the power spectrum is essentially that of the autocorrelation sequence; which is sufficient for complete statistical descriptions of Gaussian signals of known means. However, there are practical situations where one needs to look beyond autocorrelation of a signal to extract information regarding deviation from Gaussianity and the presence of phase relations. Higher order spectra, also known as polyspectra, are spectral representations of higher order statistics, i.e. moments and cumulants of third order and beyond. HOS (higher order statistics or higher order spectra) can detect deviations from linearity, stationarity or Gaussianity in the signal. Most of the biomedical signals are non-linear, non-stationary and non-Gaussian in nature and therefore it can be more advantageous to analyze them with HOS compared to the use of second-order correlations and power spectra. In this paper we have discussed the application of HOS for different bio-signals. HOS methods of analysis are explained using a typical heart rate variability (HRV) signal and applications to other signals are reviewed.", "title": "" }, { "docid": "96e34b9e05860a2cbed2f7464d139c5b", "text": "BACKGROUND\nFindings from family and twin studies support a genetic contribution to the development of sexual orientation in men. However, previous studies have yielded conflicting evidence for linkage to chromosome Xq28.\n\n\nMETHOD\nWe conducted a genome-wide linkage scan on 409 independent pairs of homosexual brothers (908 analyzed individuals in 384 families), by far the largest study of its kind to date.\n\n\nRESULTS\nWe identified two regions of linkage: the pericentromeric region on chromosome 8 (maximum two-point LOD = 4.08, maximum multipoint LOD = 2.59), which overlaps with the second strongest region from a previous separate linkage scan of 155 brother pairs; and Xq28 (maximum two-point LOD = 2.99, maximum multipoint LOD = 2.76), which was also implicated in prior research.\n\n\nCONCLUSIONS\nResults, especially in the context of past studies, support the existence of genes on pericentromeric chromosome 8 and chromosome Xq28 influencing development of male sexual orientation.", "title": "" }, { "docid": "92993ce699e720568d2e1b12a605bc3e", "text": "Techniques for violent scene detection and affective impact prediction in videos can be deployed in many applications. In MediaEval 2015, we explore deep learning methods to tackle this challenging problem. 
Our system consists of several deep learning features. First, we train a Convolutional Neural Network (CNN) model with a subset of ImageNet classes selected particularly for violence detection. Second, we adopt a specially designed two-stream CNN framework [1] to extract features on both static frames and motion optical flows. Third, Long Short Term Memory (LSTM) models are applied on top of the two-stream CNN features, which can capture the longer-term temporal dynamics. In addition, several conventional motion and audio features are also extracted as complementary information to the deep learning features. By fusing all the advanced features, we achieve a mean average precision of 0.296 in the violence detection subtask, and an accuracy of 0.418 and 0.488 for arousal and valence respectively in the induced affect detection subtask. 1. SYSTEM DESCRIPTION Figure 1 gives an overview of our system. In this short paper, we briefly describe each of the key components. For more information about the task definitions, interested readers may refer to [2].", "title": "" } ]
scidocsrr
a65e9b7f2d44433acab485f1cf53cdba
Deep Learning for Single-View Instance Recognition
[ { "docid": "8c0f455b31187a30e0b98d30dcb3adeb", "text": "Dataset bias remains a significant barrier towards solving real world computer vision tasks. Though deep convolutional networks have proven to be a competitive approach for image classification, a question remains: have these models have solved the dataset bias problem? In general, training or fine-tuning a state-ofthe-art deep model on a new domain requires a significant amount of data, which for many applications is simply not available. Transfer of models directly to new domains without adaptation has historically led to poor recognition performance. In this paper, we pose the following question: is a single image dataset, much larger than previously explored for adaptation, comprehensive enough to learn general deep models that may be effectively applied to new image domains? In other words, are deep CNNs trained on large amounts of labeled data as susceptible to dataset bias as previous methods have been shown to be? We show that a generic supervised deep CNN model trained on a large dataset reduces, but does not remove, dataset bias. Furthermore, we propose several methods for adaptation with deep models that are able to operate with little (one example per category) or no labeled domain specific data. Our experiments show that adaptation of deep models on benchmark visual domain adaptation datasets can provide a significant performance boost.", "title": "" } ]
[ { "docid": "e3ae049bd1cecbde679acdefc4ad0758", "text": "Beneficial plant–microbe interactions in the rhizosphere are primary determinants of plant health and soil fertility. Arbuscular mycorrhizas are the most important microbial symbioses for the majority of plants and, under conditions of P-limitation, influence plant community development, nutrient uptake, water relations and above-ground productivity. They also act as bioprotectants against pathogens and toxic stresses. This review discusses the mechanism by which these benefits are conferred through abiotic and biotic interactions in the rhizosphere. Attention is paid to the conservation of biodiversity in arbuscular mycorrhizal fungi (AMF). Examples are provided in which the ecology of AMF has been taken into account and has had an impact in landscape regeneration, horticulture, alleviation of desertification and in the bioremediation of contaminated soils. It is vital that soil scientists and agriculturalists pay due attention to the management of AMF in any schemes to increase, restore or maintain soil fertility.", "title": "" }, { "docid": "33bf2bd02353ee33762393cf7cba025c", "text": "We are given a collection D of text documents d1,…,dk, with ∑i = n, which may be preprocessed. In the document listing problem, we are given an online query comprising of a pattern string p of length m and our goal is to return the set of all documents that contain one or more copies of p. In the closely related occurrence listing problem, we output the set of all positions within the documents where pattern p occurs. In 1973, Weiner [24] presented an algorithm with O(n) time and space preprocessing following which the occurrence listing problem can be solved in time O(m + output) where output is the number of positions where p occurs; this algorithm is clearly optimal. In contrast, no optimal algorithm is known for the closely related document listing problem, which is perhaps more natural and certainly well-motivated.We provide the first known optimal algorithm for the document listing problem. More generally, we initiate the study of pattern matching problems that require retrieving documents matched by the patterns; this contrasts with pattern matching problems that have been studied more frequently, namely, those that involve retrieving all occurrences of patterns. We consider document retrieval problems that are motivated by online query processing in databases, Information Retrieval systems and Computational Biology. We present very efficient (optimal) algorithms for our document retrieval problems. Our approach for solving such problems involve performing \"local\" encodings whereby they are reduced to range query problems on geometric objects --- points and lines --- that have color. We present improved algorithms for these colored range query problems that arise in our reductions using the structural properties of strings. This approach is quite general and yields simple, efficient, implementable algorithms for all the document retrieval problems in this paper.", "title": "" }, { "docid": "098e2523b2e9546552708d67d1f841fb", "text": "The performance of photovoltaic (PV) array is strongly dependent on the operating conditions especially the solar irradiation. The shading on PV array by the passing clouds or neighboring buildings causes not only power losses, but also non-linearity of V-I characteristics of PV array. Under partially shaded conditions, the characteristics have more non-linearity with multiple local maxima. 
Although several researchers have studied the PV characteristics under the partial shading condition, it is difficult to simulate the partial shading condition in simulation tools for power electronics. The C-language-based PV array simulation strategy proposed here is to simulate the effects of the partial shading condition for developing more advanced maximum power point tracking algorithms or power conversion systems. The proposed modeling strategy is easily connected to simulation tools. In this paper, the modeling of the photovoltaic array is based on numerical analysis, and the verification is performed using PSIM.", "title": "" }, { "docid": "4ac69ffb880cea60dac3b24b55c9c083", "text": "Patterns of Intelligent and Mobile Agents. Elizabeth A. Kendall, P.V. Murali Krishna, Chirag V. Pathak, C.B. Suresh. Computer Systems Engineering, Royal Melbourne Institute of Technology, City Campus, GPO Box 2476V, Melbourne, VIC 3001, AUSTRALIA. email: [email protected] 1. ABSTRACT Agent systems must have a strong foundation; one approach that has been successfully applied to other kinds of software is patterns. This paper presents a collection of patterns for agents. 2. MOTIVATION Almost all agent development to date", "title": "" }, { "docid": "bce79146a0316fd10c6ee492ff0b5686", "text": "Recent advances in deep learning for object recognition in natural images have prompted a surge of interest in applying a similar set of techniques to medical images. Most of the initial attempts largely focused on replacing the input to such a deep convolutional neural network from a natural image to a medical image. This, however, does not take into consideration the fundamental differences between these two types of data. More specifically, detection or recognition of an anomaly in medical images depends significantly on fine details, unlike object recognition in natural images where coarser, more global structures matter more. This difference makes it inadequate to use the existing deep convolutional neural network architectures, which were developed for natural images, because they rely on heavily downsampling an image to a much lower resolution to reduce the memory requirements. This hides details necessary to make accurate predictions for medical images. Furthermore, a single exam in medical imaging often comes with a set of different views which must be seamlessly fused in order to reach a correct conclusion. In our work, we propose to use a multi-view deep convolutional neural network that handles a set of more than one high-resolution medical image. We evaluate this network on large-scale mammography-based breast cancer screening (BI-RADS prediction) using 103 thousand images. We focus on investigating the impact of training set sizes and image sizes on the prediction accuracy. Our results highlight that performance clearly increases with the size of the training set, and that the best performance can only be achieved using the images in the original resolution. This suggests the future direction of medical imaging research using deep neural networks is to utilize as much data as possible with the least amount of potentially harmful preprocessing.", "title": "" }, { "docid": "fe97095f2af18806e7032176c6ac5d89", "text": "Targeted social engineering attacks in the form of spear phishing emails are often the main gimmick used by attackers to infiltrate organizational networks and implant state-of-the-art Advanced Persistent Threats (APTs). 
Spear phishing is a complex targeted attack in which, an attacker harvests information about the victim prior to the attack. This information is then used to create sophisticated, genuine-looking attack vectors, drawing the victim to compromise confidential information. What makes spear phishing different, and more powerful than normal phishing, is this contextual information about the victim. Online social media services can be one such source for gathering vital information about an individual. In this paper, we characterize and examine a true positive dataset of spear phishing, spam, and normal phishing emails from Symantec's enterprise email scanning service. We then present a model to detect spear phishing emails sent to employees of 14 international organizations, by using social features extracted from LinkedIn. Our dataset consists of 4,742 targeted attack emails sent to 2,434 victims, and 9,353 non targeted attack emails sent to 5,912 non victims; and publicly available information from their LinkedIn profiles. We applied various machine learning algorithms to this labeled data, and achieved an overall maximum accuracy of 97.76% in identifying spear phishing emails. We used a combination of social features from LinkedIn profiles, and stylometric features extracted from email subjects, bodies, and attachments. However, we achieved a slightly better accuracy of 98.28% without the social features. Our analysis revealed that social features extracted from LinkedIn do not help in identifying spear phishing emails. To the best of our knowledge, this is one of the first attempts to make use of a combination of stylometric features extracted from emails, and social features extracted from an online social network to detect targeted spear phishing emails.", "title": "" }, { "docid": "85221954ced857c449acab8ee5cf801e", "text": "IMSI Catchers are used in mobile networks to identify and eavesdrop on phones. When, the number of vendors increased and prices dropped, the device became available to much larger audiences. Self-made devices based on open source software are available for about US$ 1,500.\n In this paper, we identify and describe multiple methods of detecting artifacts in the mobile network produced by such devices. We present two independent novel implementations of an IMSI Catcher Catcher (ICC) to detect this threat against everyone's privacy. The first one employs a network of stationary (sICC) measurement units installed in a geographical area and constantly scanning all frequency bands for cell announcements and fingerprinting the cell network parameters. These rooftop-mounted devices can cover large areas. The second implementation is an app for standard consumer grade mobile phones (mICC), without the need to root or jailbreak them. Its core principle is based upon geographical network topology correlation, facilitating the ubiquitous built-in GPS receiver in today's phones and a network cell capabilities fingerprinting technique. The latter works for the vicinity of the phone by first learning the cell landscape and than matching it against the learned data. We implemented and evaluated both solutions for digital self-defense and deployed several of the stationary units for a long term field-test. Finally, we describe how to detect recently published denial of service attacks.", "title": "" }, { "docid": "401cb3ebbc226ae117303f6a6bb6714c", "text": "Brain-related disorders such as epilepsy can be diagnosed by analyzing electroencephalograms (EEG). 
However, manual analysis of EEG data requires highly trained clinicians, and is a procedure that is known to have relatively low inter-rater agreement (IRA). Moreover, the volume of the data and the rate at which new data becomes available make manual interpretation a time-consuming, resource-hungry, and expensive process. In contrast, automated analysis of EEG data offers the potential to improve the quality of patient care by shortening the time to diagnosis and reducing manual error. In this paper, we focus on one of the first steps in interpreting an EEG session identifying whether the brain activity is abnormal or normal. To address this specific task, we propose a novel recurrent neural network (RNN) architecture termed ChronoNet which is inspired by recent developments from the field of image classification and designed to work efficiently with EEG data. ChronoNet is formed by stacking multiple 1D convolution layers followed by deep gated recurrent unit (GRU) layers where each 1D convolution layer uses multiple filters of exponentially varying lengths and the stacked GRU layers are densely connected in a feed-forward manner. We used the recently released TUH Abnormal EEG Corpus dataset for evaluating the performance of ChronoNet. Unlike previous studies using this dataset, ChronoNet directly takes time-series EEG as input and learns meaningful representations of brain activity patterns. ChronoNet outperforms previously reported results on this dataset thereby setting a new benchmark. Furthermore, we demonstrate the domain-independent nature of ChronoNet by successfully applying it to classify speech commands.", "title": "" }, { "docid": "7276da5180e8ac789b00c4d6f78cb55c", "text": "Recent advances in convolutional neural networks have shown promising results in 3D shape completion. But due to GPU memory limitations, these methods can only produce low-resolution outputs. To inpaint 3D models with semantic plausibility and contextual details, we introduce a hybrid framework that combines a 3D Encoder-Decoder Generative Adversarial Network (3D-ED-GAN) and a Longterm Recurrent Convolutional Network (LRCN). The 3DED- GAN is a 3D convolutional neural network trained with a generative adversarial paradigm to fill missing 3D data in low-resolution. LRCN adopts a recurrent neural network architecture to minimize GPU memory usage and incorporates an Encoder-Decoder pair into a Long Shortterm Memory Network. By handling the 3D model as a sequence of 2D slices, LRCN transforms a coarse 3D shape into a more complete and higher resolution volume. While 3D-ED-GAN captures global contextual structure of the 3D shape, LRCN localizes the fine-grained details. Experimental results on both real-world and synthetic data show reconstructions from corrupted models result in complete and high-resolution 3D objects.", "title": "" }, { "docid": "f4d6ad3b6b3a625d427defde77fc02cd", "text": "Linear stepped frequency radar is used in wide-band radar applications, such as airborne synthetic aperture radar (SAR), turntable inverse SAR, and ground penetration radar. The frequency is stepped linearly with a constant frequency change, and range cells are formed by fast Fourier transform processing. The covered bandwidth defines the range resolution, and the length of the frequency step restricts the nonambiguous range interval. A random choice of the transmitted frequencies suppresses the range ambiguity, improves covert detection, and reduces the signal interference between adjacent sensors. 
As a result of the random modulation, however, a noise component is added to the range/Doppler sidelobes. In this paper, relationships of random step frequency radar are compared with frequency-modulated continuous wave noise radar and the statistical characteristics of the ambiguity function and the sidelobe noise floor are analyzed. Algorithms are investigated, which reduce the sidelobes and the noise-floor contribution from strong dominating reflectors in the scene. Theoretical predictions are compared with Monte Carlo simulations and experimental data", "title": "" }, { "docid": "4d1d343f03f6a1fae94f630a64e10081", "text": "This paper describes our system participating in the aspect-based sentiment analysis task of Semeval 2014. The goal was to identify the aspects of given target entities and the sentiment expressed towards each aspect. We firstly introduce a system based on supervised machine learning, which is strictly constrained and uses the training data as the only source of information. This system is then extended by unsupervised methods for latent semantics discovery (LDA and semantic spaces) as well as the approach based on sentiment vocabularies. The evaluation was done on two domains, restaurants and laptops. We show that our approach leads to very promising results.", "title": "" }, { "docid": "3d34dc15fa11e723a52b21dc209a939f", "text": "Valuable information can be hidden in images, however, few research discuss data mining on them. In this paper, we propose a general framework based on the decision tree for mining and processing image data. Pixel-wised image features were extracted and transformed into a database-like table which allows various data mining algorithms to make explorations on it. Each tuple of the transformed table has a feature descriptor formed by a set of features in conjunction with the target label of a particular pixel. With the label feature, we can adopt the decision tree induction to realize relationships between attributes and the target label from image pixels, and to construct a model for pixel-wised image processing according to a given training image dataset. Both experimental and theoretical analyses were performed in this study. Their results show that the proposed model can be very efficient and effective for image processing and image mining. It is anticipated that by using the proposed model, various existing data mining and image processing methods could be worked on together in different ways. Our model can also be used to create new image processing methodologies, refine existing image processing methods, or act as a powerful image filter.", "title": "" }, { "docid": "714961682f183485b438334151f3d17d", "text": "The LogAnswer system is an application of automated reasoning to the field of open domain question answering. In order to find answers to natural language questions regarding arbitrary topics, the system integrates an automated theorem prover in a framework of natural language processing tools. The latter serve to construct an extensive knowledge base automatically from given textual sources, while the automated theorem prover makes it possible to derive answers by deductive reasoning. In the paper, we discuss the requirements to the prover that arise in this application, especially concerning efficiency and robustness. The proposed solution rests on incremental reasoning, relaxation of the query (if no proof of the full query is found), and other techniques. 
In order to improve the robustness of the approach to gaps in the background knowledge, the results of deductive processing are combined with shallow linguistic features by machine learning.", "title": "" }, { "docid": "246866da7509b2a8a2bda734a664de9c", "text": "In this paper we present an approach to procedural game content generation that focuses on a gameplay loops formal language (GLFL). In fact, during an iterative game design process, game designers suggest modifications that often require high development costs. The proposed language and its operational semantics allow reducing the gap between game designers' requirements and game developers' needs, thereby enhancing video game productivity. Using the gameplay loops concept for game content generation offers a low-cost solution for adjusting game challenges, objectives and rewards in video games. A pilot experiment has been conducted to study the impact of this approach on game development.", "title": "" }, { "docid": "b990a21742a1db59811d636368527ab0", "text": "We describe a high-performance implementation of the lattice Boltzmann method (LBM) for sparse geometries on graphics processors. In our implementation we cover the whole geometry with a uniform mesh of small tiles and carry out calculations for each tile independently with proper data synchronization at the tile edges. For this method, we provide both a theoretical analysis of complexity and the results for real implementations involving two-dimensional (2D) and three-dimensional (3D) geometries. Based on the theoretical model, we show that tiles offer significantly smaller bandwidth overheads than solutions based on indirect addressing. For 2D lattice arrangements, a reduction in memory usage is also possible, although at the cost of diminished performance. We achieved a performance of 682 MLUPS on GTX Titan (72 percent of peak theoretical memory bandwidth) for the D3Q19 lattice arrangement and double-precision data.", "title": "" }, { "docid": "630901f1a1b25a5a2af65b566505de65", "text": "In many complex robot applications, such as grasping and manipulation, it is difficult to program desired task solutions beforehand, as robots are within an uncertain and dynamic environment. In such cases, learning tasks from experience can be a useful alternative. To obtain a sound learning and generalization performance, machine learning, especially reinforcement learning, usually requires sufficient data. However, in cases where only little data is available for learning, due to system constraints and practical issues, reinforcement learning can act suboptimally. In this paper, we investigate how model-based reinforcement learning, in particular the probabilistic inference for learning control method (Pilco), can be tailored to cope with the case of sparse data to speed up learning. The basic idea is to include further prior knowledge into the learning process. As Pilco is built on the probabilistic Gaussian processes framework, additional system knowledge can be incorporated by defining appropriate prior distributions, e.g. a linear mean Gaussian prior. The resulting Pilco formulation remains in closed form and analytically tractable. The proposed approach is evaluated in simulation as well as on a physical robot, the Festo Robotino XT. For the robot evaluation, we employ the approach for learning an object pick-up task. 
The results show that by including prior knowledge, policy learning can be sped up in presence of sparse data.", "title": "" }, { "docid": "0605bdc2ca1fc11fe12794da38e0d8ad", "text": "Critical appraisal is a systematic process used to identify the strengths and weaknesses of a research article in order to assess the usefulness and validity of research findings. The most important components of a critical appraisal are an evaluation of the appropriateness of the study design for the research question and a careful assessment of the key methodological features of this design. Other factors that also should be considered include the suitability of the statistical methods used and their subsequent interpretation, potential conflicts of interest and the relevance of the research to one's own practice. This Review presents a 10-step guide to critical appraisal that aims to assist clinicians to identify the most relevant high-quality studies available to guide their clinical practice.", "title": "" }, { "docid": "534554ae5913f192d32efd93256488d6", "text": "Several unclassified web services are available in the internet which is difficult for the user to choose the correct web services. This raises service discovery cost, transforming data time between services and service searching time. Adequate methods, tools, technologies for clustering the web services have been developed. The clustering of web services is done manually. This survey is organized based on clustering of web service discovery methods, tools and technologies constructed on following list of parameters. The parameters are clustering model, graphs and environment, different technologies, advantages and disadvantages, theory and proof of concepts. Based on the user requirements results are different and better than one another. If the web service clustering is done automatically that can create an impact in the service discovery and fulfills the user requirements. This article gives the overview of the significant issues of the different methods and discusses the lack of technologies and automatic tools of the web service discovery.", "title": "" }, { "docid": "b24772af47f76db0f19ee281cccaa03f", "text": "We describe a method for assessing the visualization literacy (VL) of a user. Assessing how well people understand visualizations has great value for research (e. g., to avoid confounds), for design (e. g., to best determine the capabilities of an audience), for teaching (e. g., to assess the level of new students), and for recruiting (e. g., to assess the level of interviewees). This paper proposes a method for assessing VL based on Item Response Theory. It describes the design and evaluation of two VL tests for line graphs, and presents the extension of the method to bar charts and scatterplots. Finally, it discusses the reimplementation of these tests for fast, effective, and scalable web-based use.", "title": "" }, { "docid": "bd165a8d3882c232c37eac2b1370deb2", "text": "This paper proposes a new framework, named Generative Partition Network (GPN), for addressing the challenging multi-person pose estimation problem. Different from existing pure top-down and bottom-up solutions, the proposed GPN models the multi-person partition detection as a generative process from joint candidates and infers joint configurations for person instances from each person partition locally, resulting in both low joint detection and joint partition complexities. 
In particular, GPN designs a generative model based on the Generalized Hough Transform framework to detect person partitions via votes from joint candidates in the Hough space, parameterized by centroids of persons. Such generative model produces joint candidates and their corresponding person partitions by performing only one pass of joint detection. In addition, GPN formulates the inference procedure for joint configurations of human poses as a graph partition problem and optimizes it locally. Inspired by recent success of deep learning techniques for human pose estimation, GPN designs a multi-stage convolutional neural network with feature pyramid branch to jointly learn joint confidence maps and Hough transformation maps. Extensive experiments on two benchmarks demonstrate the efficiency and effectiveness of the proposed GPN.", "title": "" } ]
scidocsrr
15770b87d8f53ca4033d925048701e90
On the Accuracy of Hyper-local Geotagging of Social Media Content
[ { "docid": "152f55fd729a49ca41993aee16d70301", "text": "Real-time information from microblogs like Twitter is useful for different applications such as market research, opinion mining, and crisis management. For many of those messages, location information is required to derive useful insights. Today, however, only around 1% of all tweets are explicitly geotagged. We propose the first multi-indicator method for determining (1) the location where a tweet was created as well as (2) the location of the user’s residence. Our method is based on various weighted indicators, including the names of places that appear in the text message, dedicated location entries, and additional information from the user profile. An evaluation shows that our method is capable of locating 92% of all tweets with a median accuracy of below 30km, as well as predicting the user’s residence with a median accuracy of below 5.1km. With that level of accuracy, our approach significantly outperforms existing work.", "title": "" }, { "docid": "6195cf6b266d070cce5ff705daa84db7", "text": "The geographical properties of words have recently begun to be exploited for geolocating documents based solely on their text, often in the context of social media and online content. One common approach for geolocating texts is rooted in information retrieval. Given training documents labeled with latitude/longitude coordinates, a grid is overlaid on the Earth and pseudo-documents constructed by concatenating the documents within a given grid cell; then a location for a test document is chosen based on the most similar pseudo-document. Uniform grids are normally used, but they are sensitive to the dispersion of documents over the earth. We define an alternative grid construction using k-d trees that more robustly adapts to data, especially with larger training sets. We also provide a better way of choosing the locations for pseudo-documents. We evaluate these strategies on existing Wikipedia and Twitter corpora, as well as a new, larger Twitter corpus. The adaptive grid achieves competitive results with a uniform grid on small training sets and outperforms it on the large Twitter corpus. The two grid constructions can also be combined to produce consistently strong results across all training sets.", "title": "" }, { "docid": "52dbfe369d1875c402220692ef985bec", "text": "Geographically annotated social media is extremely valuable for modern information retrieval. However, when researchers can only access publicly-visible data, one quickly finds that social media users rarely publish location information. In this work, we provide a method which can geolocate the overwhelming majority of active Twitter users, independent of their location sharing preferences, using only publicly-visible Twitter data. Our method infers an unknown user's location by examining their friend's locations. We frame the geotagging problem as an optimization over a social network with a total variation-based objective and provide a scalable and distributed algorithm for its solution. Furthermore, we show how a robust estimate of the geographic dispersion of each user's ego network can be used as a per-user accuracy measure which is effective at removing outlying errors. Leave-many-out evaluation shows that our method is able to infer location for 101, 846, 236 Twitter users at a median error of 6.38 km, allowing us to geotag over 80% of public tweets.", "title": "" } ]
[ { "docid": "6727eb68064f73c0dc97c15b8c6e0bf9", "text": "With a focus on presenting information at the right time, the ubicomp community can benefit greatly from learning the most salient human measures of cognitive load. Cognitive load can be used as a metric to determine when or whether to interrupt a user. In this paper, we collected data from multiple sensors and compared their ability to assess cognitive load. Our focus is on visual perception and cognitive speed-focused tasks that leverage cognitive abilities common in ubicomp applications. We found that across all participants, the electrocardiogram median absolute deviation and median heat flux measurements were the most accurate at distinguishing between low and high levels of cognitive load, providing a classification accuracy of over 80% when used together. Our contribution is a real-time, objective, and generalizable method for assessing cognitive load in cognitive tasks commonly found in ubicomp systems and situations of divided attention.", "title": "" }, { "docid": "f85163403b153c7577548567405839ec", "text": "We used PIC18F452 microcontroller for hardware and software implementation of our home security system design. Based on PIC18F452, our system can monitor doors and windows of a house and can set alarm and warming signal to a nearest police station if anybody tries to break in. This security system also provides the functionality to identify the residents ID card to get access to the house without turning on the warning signal and alarm. Also, the security system provides a status that is not monitoring the door and windows since there is some possibility that the host do not want the system always checks the status of their house.", "title": "" }, { "docid": "657186ef31657909c4b961c98df099d0", "text": "The purpose of this study was to examine several factors of vocal quality that might be affected by changes in vocal fold vibratory patterns. Four voice types were examined: modal, vocal fry, falsetto, and breathy. Three categories of analysis techniques were developed to extract source-related features from speech and electroglottographic (EGG) signals. Four factors were found to be important for characterizing the glottal excitations for the four voice types: the glottal pulse width, the glottal pulse skewness, the abruptness of glottal closure, and the turbulent noise component. The significance of these factors for voice synthesis was studied and a new voice source model that accounted for certain physiological aspects of vocal fold motion was developed and tested using speech synthesis. Perceptual listening tests were conducted to evaluate the auditory effects of the source model parameters upon synthesized speech. The effects of the spectral slope of the source excitation, the shape of the glottal excitation pulse, and the characteristics of the turbulent noise source were considered. Applications for these research results include synthesis of natural sounding speech, synthesis and modeling of vocal disorders, and the development of speaker independent (or adaptive) speech recognition systems.", "title": "" }, { "docid": "2b3851ac0d4202a90896d160523bedc3", "text": "Crying is a communication method used by infants given the limitations of language. Parents or nannies who have never had the experience to take care of the baby will experience anxiety when the infant is crying. Therefore, we need a way to understand about infant's cry and apply the formula. 
This research develops a system to classify the infant's cry sound using MFCC (Mel-Frequency Cepstrum Coefficients) feature extraction and a BNN (Backpropagation Neural Network), based on voice type. The cry is classified into 3 classes: hungry, discomfort, and tired. A voice input must first be ascertained to be an infant's cry sound using 3 extracted features (pitch, with 2 approaches: Modified Autocorrelation Function and Cepstrum Pitch Determination; Energy; and Harmonic Ratio). The MFCC feature coefficients are then classified by the Backpropagation Neural Network. The experiment shows that the system can classify the infant's cry sound quite well, with 30 coefficients and 10 neurons in the hidden layer.", "title": "" }, { "docid": "cab91b728b363f362535758dd9ac57b3", "text": "The multimodal nature of speech is often ignored in human-computer interaction, but lip deformations and other body motion, such as those of the head, convey additional information. We integrate speech cues from many sources and this improves intelligibility, especially when the acoustic signal is degraded. This paper shows how this additional, often complementary, visual speech information can be used for speech recognition. Three methods for parameterizing lip image sequences for recognition using hidden Markov models are compared. Two of these are top-down approaches that fit a model of the inner and outer lip contours and derive lipreading features from a principal component analysis of shape or shape and appearance, respectively. The third, bottom-up, method uses a nonlinear scale-space analysis to form features directly from the pixel intensity. All methods are compared on a multitalker visual speech recognition task of isolated letters.", "title": "" }, { "docid": "738555e605ee2b90ff99bef6d434162d", "text": "In this paper we present two deep-learning systems that competed at SemEval-2017 Task 4 “Sentiment Analysis in Twitter”. We participated in all subtasks for English tweets, involving message-level and topic-based sentiment polarity classification and quantification. We use Long Short-Term Memory (LSTM) networks augmented with two kinds of attention mechanisms, on top of word embeddings pre-trained on a big collection of Twitter messages. Also, we present a text processing tool suitable for social network messages, which performs tokenization, word normalization, segmentation and spell correction. Moreover, our approach uses no hand-crafted features or sentiment lexicons. We ranked 1st (tie) in Subtask A, and achieved very competitive results in the rest of the Subtasks. Both the word embeddings and our text processing tool are available to the research community.", "title": "" }, { "docid": "c7d3e5201926bc6c3932d5e0555e2f57", "text": "The application of theory to practice is multifaceted. It requires a nursing theory that is compatible with an institution's values and mission and that is easily understood and simple enough to guide practice. Comfort Theory was chosen because of its universality. 
The authors describe how Kolcaba's Comfort Theory was used by a not-for-profit New England hospital to provide a coherent and consistent pattern for enhancing care and promoting professional practice, as well as to serve as a unifying framework for applying for Magnet Recognition Status.", "title": "" }, { "docid": "8db59f20491739420d9b40311705dbf1", "text": "With object-oriented programming languages, Object Relational Mapping (ORM) frameworks such as Hibernate have gained popularity due to their ease of use and portability to different relational database management systems. Hibernate implements the Java Persistent API, JPA, and frees a developer from authoring software to address the impedance mismatch between objects and relations. In this paper, we evaluate the performance of Hibernate by comparing it with a native JDBC implementation using a benchmark named BG. BG rates the performance of a system for processing interactive social networking actions such as view profile, extend an invitation from one member to another, and other actions. Our key findings are as follows. First, an object-oriented Hibernate implementation of each action issues more SQL queries than its JDBC counterpart. This enables the JDBC implementation to provide response times that are significantly faster. Second, one may use the Hibernate Query Language (HQL) to refine the object-oriented Hibernate implementation to provide performance that approximates the JDBC implementation.", "title": "" }, { "docid": "8dc400d9745983da1e91f0cec70606c9", "text": "Aspect-Oriented Programming (AOP) is intended to ease situations that involve many kinds of code tangling. This paper reports on a study to investigate AOP's ability to ease tangling related to exception detection and handling. We took an existing framework written in Java™, the JWAM framework, and partially reengineered its exception detection and handling aspects using AspectJ™, an aspect-oriented programming extension to Java.\nWe found that AspectJ supported implementations that drastically reduced the portion of the code related to exception detection and handling. In one scenario, we were able to reduce that code by a factor of 4. We also found that, with respect to the original implementation in plain Java, AspectJ provided better support for different configurations of exceptional behaviors, more tolerance for changes in the specifications of exceptional behaviors, better support for incremental development, better reuse, automatic enforcement of contracts in applications that use the framework, and cleaner program texts. We also found some weaknesses of AspectJ that should be addressed in the future.", "title": "" }, { "docid": "a2535d69f8d2e1ed486111cde8452b52", "text": "Pedestrian detection is one of the most important components in driver-assistance systems. In this paper, we propose a monocular vision system for real-time pedestrian detection and tracking during nighttime driving with a near-infrared (NIR) camera. Three modules (region-of-interest (ROI) generation, object classification, and tracking) are integrated in a cascade, and each utilizes complementary visual features to distinguish the objects from the cluttered background in the range of 20-80 m. Based on the common fact that the objects appear brighter than the nearby background in nighttime NIR images, efficient ROI generation is done based on the dual-threshold segmentation algorithm. 
As there is large intraclass variability in the pedestrian class, a tree-structured, two-stage detector is proposed to tackle the problem through training separate classifiers on disjoint subsets of different image sizes and arranging the classifiers based on Haar-like and histogram-of-oriented-gradients (HOG) features in a coarse-to-fine manner. To suppress the false alarms and fill the detection gaps, template-matching-based tracking is adopted, and multiframe validation is used to obtain the final results. Results from extensive tests on both urban and suburban videos indicate that the algorithm can produce a detection rate of more than 90% at the cost of about 10 false alarms/h and perform as fast as the frame rate (30 frames/s) on a Pentium IV 3.0-GHz personal computer, which also demonstrates that the proposed system is feasible for practical applications and enjoys the advantage of low implementation cost.", "title": "" }, { "docid": "8ba2c24008e85bb9cfe275240305c89d", "text": "Most real-world social networks are inherently dynamic, composed of communities that are constantly changing in membership. To track these evolving communities, we need dynamic community detection techniques. This article evaluates the performance of a set of game-theoretic approaches for identifying communities in dynamic networks. Our method, D-GT (Dynamic Game-Theoretic community detection), models each network node as a rational agent who periodically plays a community membership game with its neighbors. During game play, nodes seek to maximize their local utility by joining or leaving the communities of network neighbors. The community structure emerges after the game reaches a Nash equilibrium. Compared to the benchmark community detection methods, D-GT more accurately predicts the number of communities and finds community assignments with a higher normalized mutual information, while retaining a good modularity.", "title": "" }, { "docid": "6ebce4adb3693070cac01614078d68fc", "text": "The recent COCO object detection dataset presents several new challenges for object detection. In particular, it contains objects at a broad range of scales, less prototypical images, and requires more precise localization. To address these challenges, we test three modifications to the standard Fast R-CNN object detector: (1) skip connections that give the detector access to features at multiple network layers, (2) a foveal structure to exploit object context at multiple object resolutions, and (3) an integral loss function and corresponding network adjustment that improve localization. The result of these modifications is that information can flow along multiple paths in our network, including through features from multiple network layers and from multiple object views. We refer to our modified classifier as a ‘MultiPath’ network. We couple our MultiPath network with DeepMask object proposals, which are well suited for localization and small objects, and adapt our pipeline to predict segmentation masks in addition to bounding boxes. The combined system improves results over the baseline Fast R-CNN detector with Selective Search by 66% overall and by 4× on small objects. It placed second in both the COCO 2015 detection and segmentation challenges.", "title": "" }, { "docid": "84b366294dbddcede8675ddd234ca1ea", "text": "Binary Moment Diagrams (BMDs) provide a canonical representations for linear functions similar to the way Binary Decision Diagrams (BDDs) represent Boolean functions. 
Within the class of linear functions, we can embed arbitrary functions from Boolean variables to integer values. BMDs can thus model the functionality of data path circuits operating over word-level data. Many important functions, including integermultiplication, that cannot be represented efficiently at the bit level with BDDs have simple representations at the word level with BMDs. Furthermore, BMDs can represent Boolean functions with around the same complexity as BDDs. We propose a hierarchical approach to verifying arithmetic circuits, where componentmodules are first shownto implement their word-level specifications. The overall circuit functionality is then verified by composing the component functions and comparing the result to the word-level circuit specification. Multipliers with word sizes of up to 256 bits have been verified by this technique.", "title": "" }, { "docid": "df808fcf51612bf81e8fd328d298291d", "text": "Chemomechanical preparation of the root canal includes both mechanical instrumentation and antibacterial irrigation, and is principally directed toward the elimination of micro-organisms from the root canal system. A variety of instruments and techniques have been developed and described for this critical stage of root canal treatment. Since their introduction in 1988, nickel-titanium (NiTi) rotary instruments have become a mainstay in clinical endodontics because of their exceptional ability to shape root canals with potentially fewer procedural complications. Safe clinical usage of NiTi instruments requires an understanding of basic metallurgy of the alloy including fracture mechanisms and their correlation to canal anatomy. This paper reviews the biologic principles of preparing root canals with an emphasis on correct use of current rotary NiTi instrumentation techniques and systems. The role and properties of contemporary root canal irrigants is also discussed.", "title": "" }, { "docid": "8e7c2943eb6df575bf847cd67b6424dc", "text": "Today, money laundering poses a serious threat not only to financial institutions but also to the nation. This criminal activity is becoming more and more sophisticated and seems to have moved from the clich&#233, of drug trafficking to financing terrorism and surely not forgetting personal gain. Most international financial institutions have been implementing anti-money laundering solutions to fight investment fraud. However, traditional investigative techniques consume numerous man-hours. Recently, data mining approaches have been developed and are considered as well-suited techniques for detecting money laundering activities. Within the scope of a collaboration project for the purpose of developing a new solution for the anti-money laundering Units in an international investment bank, we proposed a simple and efficient data mining-based solution for anti-money laundering. In this paper, we present this solution developed as a tool and show some preliminary experiment results with real transaction datasets.", "title": "" }, { "docid": "7b507a50fd567d0d8679fea29495becd", "text": "Ontologies are the backbone of the Semantic Web and facilitate sharing, integration, and discovery of data. However, the number of existing ontologies is vastly growing, which makes it is problematic for software developers to decide which ontology is suitable for their application. Furthermore, often, only a small part of the ontology will be relevant for a certain application. 
In other cases, ontologies are so large, that they have to be split up in more manageable chunks to work with them. To this end, in this demo, we present OAPT, an ontology analysis and partitioning tool. First, before a candidate input ontology is partitioned, OAPT analyzes it to determine, if this ontology is worth to be considered using a predefined set of criteria that quantify the semantic richness of the ontology. Once the ontology is investigated, we apply a seeding-based partitioning algorithm to partition it into a set of modules. Through the demonstration of OAPT we introduce the tool’s capabilities and highlight its effectiveness and usability.", "title": "" }, { "docid": "568c7ef495bfc10936398990e72a04d2", "text": "Accurate estimation of heart rates from photoplethysmogram (PPG) signals during intense physical activity is a very challenging problem. This is because strenuous and high intensity exercise can result in severe motion artifacts in PPG signals, making accurate heart rate (HR) estimation difficult. In this study we investigated a novel technique to accurately reconstruct motion-corrupted PPG signals and HR based on time-varying spectral analysis. The algorithm is called Spectral filter algorithm for Motion Artifacts and heart rate reconstruction (SpaMA). The idea is to calculate the power spectral density of both PPG and accelerometer signals for each time shift of a windowed data segment. By comparing time-varying spectra of PPG and accelerometer data, those frequency peaks resulting from motion artifacts can be distinguished from the PPG spectrum. The SpaMA approach was applied to three different datasets and four types of activities: (1) training datasets from the 2015 IEEE Signal Process. Cup Database recorded from 12 subjects while performing treadmill exercise from 1 km/h to 15 km/h; (2) test datasets from the 2015 IEEE Signal Process. Cup Database recorded from 11 subjects while performing forearm and upper arm exercise. (3) Chon Lab dataset including 10 min recordings from 10 subjects during treadmill exercise. The ECG signals from all three datasets provided the reference HRs which were used to determine the accuracy of our SpaMA algorithm. The performance of the SpaMA approach was calculated by computing the mean absolute error between the estimated HR from the PPG and the reference HR from the ECG. The average estimation errors using our method on the first, second and third datasets are 0.89, 1.93 and 1.38 beats/min respectively, while the overall error on all 33 subjects is 1.86 beats/min and the performance on only treadmill experiment datasets (22 subjects) is 1.11 beats/min. Moreover, it was found that dynamics of heart rate variability can be accurately captured using the algorithm where the mean Pearson's correlation coefficient between the power spectral densities of the reference and the reconstructed heart rate time series was found to be 0.98. These results show that the SpaMA method has a potential for PPG-based HR monitoring in wearable devices for fitness tracking and health monitoring during intense physical activities.", "title": "" }, { "docid": "ce8df10940df980b5b9ff635843a76f9", "text": "This letter presents a 60-GHz 2 × 2 low temperature co-fired ceramic (LTCC) aperture-coupled patch antenna array with an integrated Sievenpiper electromagnetic band-gap (EBG) structure used to suppress TM-mode surface waves. 
The merit of this EBG structure is to yield a predicted 4-dB enhancement in broadside directivity and gain, and an 8-dB improvement in sidelobe level. The novelty of this antenna lies in the combination of a relatively new LTCC material system (DuPont Greentape 9K7) along with laser ablation processing for fine line and fine slot definition (50-μm gaps with +/ - 6 μm tolerance) allowing the first successful integration of a Sievenpiper EBG structure with a millimeter-wave LTCC patch array. A measured broadside gain/directivity of 11.5/14 dBi at 60 GHz is achieved with an aperture footprint of only 350 × 410 mil2 (1.78λ × 2.08λ) including the EBG structure. This thin (27 mil) LTCC array is well suited for chip-scale package applications.", "title": "" }, { "docid": "2bed165ccf2bfb3c39e1b47b89e22ecc", "text": "Metaphor has a double life. It can be described as a directional process in which a stable, familiar base domain provides inferential structure to a less clearly specified target. But metaphor is also described as a process of finding commonalities, an inherently symmetric process. In this second view, both concepts may be altered by the metaphorical comparison. Whereas most theories of metaphor capture one of these aspects, we offer a model based on structure-mapping that captures both sides of metaphor processing. This predicts (a) an initial processing stage of symmetric alignment; and (b) a later directional phase in which inferences are projected to the target. To test these claims, we collected comprehensibility judgments for forward (e.g., \"A rumor is a virus\") and reversed (\"A virus is a rumor\") metaphors at early and late stages of processing, using a deadline procedure. We found an advantage for the forward direction late in processing, but no directional preference early in processing. Implications for metaphor theory are discussed.", "title": "" }, { "docid": "99c99f927c3c416ba8c01c15c0c2f28c", "text": "Online Social Rating Networks (SRNs) such as Epinions and Flixter, allow users to form several implicit social networks, through their daily interactions like co-commenting on the same products, or similarly co-rating products. The majority of earlier work in Rating Prediction and Recommendation of products (e.g. Collaborative Filtering) mainly takes into account ratings of users on products. However, in SRNs users can also built their explicit social network by adding each other as friends. In this paper, we propose Social-Union, a method which combines similarity matrices derived from heterogeneous (unipartite and bipartite) explicit or implicit SRNs. Moreover, we propose an effective weighting strategy of SRNs influence based on their structured density. We also generalize our model for combining multiple social networks. We perform an extensive experimental comparison of the proposed method against existing rating prediction and product recommendation algorithms, using synthetic and two real data sets (Epinions and Flixter). Our experimental results show that our Social-Union algorithm is more effective in predicting rating and recommending products in SRNs.", "title": "" } ]
scidocsrr
5ccbe7e296fe3b82333f49af7537344f
An Analysis on the Correlation and Gender Difference between College Students' Internet Addiction and Mobile Phone Addiction in Taiwan
[ { "docid": "ae72dc57784a9b3bb05dea9418e28914", "text": "This study explores Internet addiction among some of the Taiwan's college students. Also covered are a discussion of the Internet as a form of addiction, and related literature on this issue. This study used the Uses and Grati®cations theory and the Play theory in mass communication. Nine hundred and ten valid surveys were collected from 12 universities and colleges around Taiwan. The results indicated that Internet addiction does exist among some of Taiwan's college students. In particular, 54 students were identi®ed as Internet addicts. It was found that Internet addicts spent almost triple the number of hours connected to the Internet as compare to non-addicts, and spent signi®cantly more time on BBSs, the WWW, e-mail and games than non-addicts. The addict group found the Internet entertaining, interesting, interactive, and satisfactory. The addict group rated Internet impacts on their studies and daily life routines signi®cantly more negatively than the non-addict group. The study also found that the most powerful predictor of Internet addiction is the communication pleasure score, followed by BBS use hours, sex, satisfaction score, and e-mail-use hours. 7 2000 Elsevier Science Ltd. All rights reserved.", "title": "" }, { "docid": "406c19d893d52da1e9860c0d1a310378", "text": "Several authors have studied the risks arising from the growth in mobile phone use (e.g. large debts incurred by young people, banned or dangerous use of cellular phones). The aim of this study is to analyse whether impulsivity, which has often been related to various forms of addictive behaviours, is associated with massive use of and dependence on the mobile phone. In this study, 108 female undergraduate psychology students were screened using a questionnaire evaluating actual use of and perceived dependence on the mobile phone, and with the French adaptation of the UPPS Impulsive Behavior Scale. This scale identifies four distinct components associated with impulsive behaviour: Urgency, lack of Premeditation, lack of Perseverance, and Sensation Seeking. The results showed that a relationship can be established between the use of and perceived dependence on the cellular phone and two facets of impulsivity: Urgency and lack of Perseverance. Copyright # 2006 John Wiley & Sons, Ltd.", "title": "" } ]
[ { "docid": "8dfa68e87eee41dbef8e137b860e19cc", "text": "We investigate regrets associated with users' posts on a popular social networking site. Our findings are based on a series of interviews, user diaries, and online surveys involving 569 American Facebook users. Their regrets revolved around sensitive topics, content with strong sentiment, lies, and secrets. Our research reveals several possible causes of why users make posts that they later regret: (1) they want to be perceived in favorable ways, (2) they do not think about their reason for posting or the consequences of their posts, (3) they misjudge the culture and norms within their social circles, (4) they are in a \"hot\" state of high emotion when posting, or under the influence of drugs or alcohol, (5) their postings are seen by an unintended audience, (6) they do not foresee how their posts could be perceived by people within their intended audience, and (7) they misunderstand or misuse the Facebook platform. Some reported incidents had serious repercussions, such as breaking up relationships or job losses. We discuss methodological considerations in studying negative experiences associated with social networking posts, as well as ways of helping users of social networking sites avoid such regrets.", "title": "" }, { "docid": "c2c5f0f8b4647c651211b50411382561", "text": "Obesity is a multifactorial disease that results from a combination of both physiological, genetic, and environmental inputs. Obesity is associated with adverse health consequences, including T2DM, cardiovascular disease, musculoskeletal disorders, obstructive sleep apnea, and many types of cancer. The probability of developing adverse health outcomes can be decreased with maintained weight loss of 5% to 10% of current body weight. Body mass index and waist circumference are 2 key measures of body fat. A wide variety of tools are available to assess obesity-related risk factors and guide management.", "title": "" }, { "docid": "5cfbeef0e6ca5dd62a70160a83b0ecaa", "text": "Tissue mimicking phantoms (TMPs) replicating the dielectric properties of wet skin, fat, blood, and muscle tissues for the 0.3 to 20 GHz frequency range are presented in this paper. The TMPs reflect the dielectric properties with maximum deviations of 7.7 units and 3.9 S/m for relative dielectric constant and conductivity, respectively, for the whole band. The dielectric properties of the blood mimicking material are further investigated by adding realistic glucose amounts and a Cole-Cole model used to compare the behavior with respect to changing glucose levels. In addition, a patch resonator was fabricated and tested with the four-layered physical phantom developed in house. It was observed that the input impedance of the resonator is sensitive to the changes in the dielectric properties and, hence, to the realistic glucose level changes in the blood layer.", "title": "" }, { "docid": "e90b29baf65216807d80360083912cd4", "text": "Software maintenance claims a large proportion of organizational resources. It is thought that many maintenance problems derive from inadequate software design and development practices. Poor design choices can result in complex software that is costly to support and difficult to change. However, it is difficult to assess the actual maintenance performance effects of software development practices because their impact is realized over the software life cycle. 
To estimate the impact of development activities in a more practical time frame, this research develops a two stage model in which software complexity is a key intermediate variable that links design and development decisions to their downstream effects on software maintenance. The research analyzes data collected from a national mass merchandising retailer on twenty-nine software enhancement projects and twenty-three software applications in a large IBM COBOL environment. Results indicate that the use of a code generator in development is associated with increased software complexity and software enhancement project effort. The use of packaged software is associated with decreased software complexity and software enhancement effort. These results suggest an important link between software development practices and maintenance performance.", "title": "" }, { "docid": "736a413352df6b0225b4d567a26a5d78", "text": "This letter presents a compact, single-feed, dual-band antenna covering both the 433-MHz and 2.45-GHz Industrial Scientific and Medical (ISM) bands. The antenna has small dimensions of 51 ×28 mm2. A square-spiral resonant element is printed on the top layer for the 433-MHz band. The remaining space within the spiral is used to introduce an additional parasitic monopole element on the bottom layer that is resonant at 2.45 GHz. Measured results show that the antenna has a 10-dB return-loss bandwidth of 2 MHz at 433 MHz and 132 MHz at 2.45 GHz, respectively. The antenna has omnidirectional radiation characteristics with a peak realized gain (measured) of -11.5 dBi at 433 MHz and +0.5 dBi at 2.45 GHz, respectively.", "title": "" }, { "docid": "74a3c4dae9573325b292da736d46a78e", "text": "Machine learning is currently dominated by largely experimental work focused on improvements in a few key tasks. However, the impressive accuracy numbers of the best performing models are questionable because the same test sets have been used to select these models for multiple years now. To understand the danger of overfitting, we measure the accuracy of CIFAR-10 classifiers by creating a new test set of truly unseen images. Although we ensure that the new test set is as close to the original data distribution as possible, we find a large drop in accuracy (4% to 10%) for a broad range of deep learning models. Yet, more recent models with higher original accuracy show a smaller drop and better overall performance, indicating that this drop is likely not due to overfitting based on adaptivity. Instead, we view our results as evidence that current accuracy numbers are brittle and susceptible to even minute natural variations in the data distribution.", "title": "" }, { "docid": "ebb27d659246af9010248371fa22e733", "text": "Business Intelligence (BI) solutions require the design and implementation of complex processes (denoted ETL) that extract, transform, and load data from the sources to a common repository. New applications, like for example, real-time data warehousing, require agile and flexible tools that allow BI users to take timely decisions based on extremely up-to-date data. This calls for new ETL tools able to adapt to constant changes and quickly produce and modify executable code. A way to achieve this is to make ETL processes become aware of the business processes in the organization, in order to easily identify which data are required, and when and how to load them in the data warehouse. 
Therefore, we propose to model ETL processes using the standard representation mechanism denoted BPMN (Business Process Modeling and Notation). In this paper we present a BPMN-based metamodel for conceptual modeling of ETL processes. This metamodel is based on a classification of ETL objects resulting from a study of the most used commercial and open source ETL tools.", "title": "" }, { "docid": "bda1e2a1f27673dceed36adddfdc3e36", "text": "IEEE 802.11 WLANs are a very important technology to provide high speed wireless Internet access. Especially at airports, university campuses or in city centers, WLAN coverage is becoming ubiquitous leading to a deployment of hundreds or thousands of Access Points (AP). Managing and configuring such large WLAN deployments is a challenge. Current WLAN management protocols such as CAPWAP are hard to extend with new functionality. In this paper, we present CloudMAC, a novel architecture for enterprise or carrier grade WLAN systems. By partially offloading the MAC layer processing to virtual machines provided by cloud services and by integrating our architecture with OpenFlow, a software defined networking approach, we achieve a new level of flexibility and reconfigurability. In Cloud-MAC APs just forward MAC frames between virtual APs and IEEE 802.11 stations. The processing of MAC layer frames as well as the creation of management frames is handled at the virtual APs while the binding between the virtual APs and the physical APs is managed using OpenFlow. The testbed evaluation shows that CloudMAC achieves similar performance as normal WLANs, but allows novel services to be implemented easily in high level programming languages. The paper presents a case study which shows that dynamically switching off APs to save energy can be performed seamlessly with CloudMAC, while a traditional WLAN architecture causes large interruptions for users.", "title": "" }, { "docid": "544c1608c03535121b8274ff51343e38", "text": "As multilevel models (MLMs) are useful in understanding relationships existent in hierarchical data structures, these models have started to be used more frequently in research developed in social and health sciences. In order to draw meaningful conclusions from MLMs, researchers need to make sure that the model fits the data. Model fit, and thus, ultimately model selection, can be assessed by examining changes in several fit indices across nested and/or non-nested models [e.g., -2 log likelihood (-2LL), Akaike Information Criterion (AIC), and Schwarz’s Bayesian Information Criterion (BIC)]. In addition, the difference in pseudo-R² is often used to examine the practical significance between two nested models. Considering the importance of using all of these measures when determining model selection, researchers who analyze multilevel models would benefit from being able to easily assess model fit across estimated models. Whereas SAS PROC MIXED produces the -2LL, AIC, and BIC, it does not provide the actual change in these fit indices or the change in pseudo-R² between different nested and non-nested models. In order to make this information more attainable, Bardenheier (2009) developed a macro that allowed researchers using PROC MIXED to obtain the test statistic for the difference in -2LL along with the p-value of the Likelihood Ratio Test (LRT). As an extension of Bardenheier’s work, this paper provides a comprehensive SAS macro that incorporates changes in model fit statistics (-2LL, AIC and BIC) as well as the change in pseudo-R².
By utilizing data from PROC MIXED ODS tables, the macro produces a comprehensive table of changes in model fit measures. Thus, this expanded macro allows SAS users to examine model fit in both nested and non-nested models and both in terms of statistical and practical significance. This paper provides a review of the different methods used to assess model fit in multilevel analysis, the macro programming language, an executed example of the macro, and a copy of the complete macro.", "title": "" }, { "docid": "b3d8c827ac58e5e385179275a2c73b31", "text": "It is the purpose of this article to identify and review criteria that rehabilitation technology should meet in order to offer arm-hand training to stroke patients, based on recent principles of motor learning. A literature search was conducted in PubMed, MEDLINE, CINAHL, and EMBASE (1997–2007). One hundred and eighty seven scientific papers/book references were identified as being relevant. Rehabilitation approaches for upper limb training after stroke show to have shifted in the last decade from being analytical towards being focussed on environmentally contextual skill training (task-oriented training). Training programmes for enhancing motor skills use patient and goal-tailored exercise schedules and individual feedback on exercise performance. Therapist criteria for upper limb rehabilitation technology are suggested which are used to evaluate the strengths and weaknesses of a number of current technological systems. This review shows that technology for supporting upper limb training after stroke needs to align with the evolution in rehabilitation training approaches of the last decade. A major challenge for related technological developments is to provide engaging patient-tailored task oriented arm-hand training in natural environments with patient-tailored feedback to support (re) learning of motor skills.", "title": "" }, { "docid": "668bb543268c659437828d6721d7eb8c", "text": "In this paper, the anthropomorphic robot hand is newly proposed by adopting dual-mode twisting actuation and EM joint locking mechanism. The proposed robot hand consists of five finger modules. Each finger has four links and three joints, and Joint 2 and 3 are coupled by the four-bar linkage mechanism. The dual-mode twisting actuation allows that the robot finger can move fast (up to 356.7 deg/sec) in Mode I and generate a large grasping force(up to 36.5 N) in Mode II. In addition, the workspace of the robot finger module is enlarged by EM joint locking mechanism depending on the locking states. In order to verify the effectiveness of the mechanisms adopted in the robot hand, we theoretically and numerically analyze the performances of the robot finger module such as bending speed, fingertip force, and workspace. Finally, through the developed robot hand, we perform the grasping test for various objects and the grasping performance is experimentally demonstrated.", "title": "" }, { "docid": "910346882d47487754379d02e4c839eb", "text": "Sudoku is a popular number puzzle. Here, we model the puzzle as a probabilistic graphical model and drive a modification to the well-known sum-product and max-product message passing to solve the puzzle. 
In addition, we propose a Sudoku solver utilizing a combination of message passing and Sinkhorn balancing and show that as Sudoku puzzles become larger, the impact of loopy propagation does not increase.", "title": "" }, { "docid": "c834c8a873cff7b32c94826afb1dd1ab", "text": "The workshop “Touch Affordances” addresses a concept relevant to human computer interactions based on touch. The main topic is the challenge of applying the notion of affordances to domains related to touch interactions (e.g. (multi)touch screens, RFID & NFC, ubiquitous interfaces). The goals of this workshop are to launch a community of researchers, designers, etc. interested in this topic, to create a common understanding of the field of touch affordances and to generate ideas for new research areas for intuitive touch interactions. The workshop will be highly interactive and will have a creative, generative character.", "title": "" }, { "docid": "80fd7c3cfee5cbf234247848bc10c568", "text": "A 2-GHz Si power MOSFET with 50% power-added efficiency and 1.0-W output power at a 3.6-V supply voltage has been developed for use as an RF high-power amplifier in wireless applications. This MOSFET achieves this performance by using a 0.4-/spl mu/m gate power device with an Al-shorted metal-silicide/Si gate structure and a reduced gate finger width pattern.", "title": "" }, { "docid": "0a13b179fbc5dbf75991367b2a301985", "text": "The discrete Weibull distribution is defined to correspond with the Weibull distribution in continuous time. A few properties of the discrete Weibull distribution are discussed.", "title": "" }, { "docid": "2b8c0923372e97ca5781378b7e220021", "text": "Motivated by requirements of Web 2.0 applications, a plethora of non-relational databases raised in recent years. Since it is very difficult to choose a suitable database for a specific use case, this paper evaluates the underlying techniques of NoSQL databases considering their applicability for certain requirements. These systems are compared by their data models, query possibilities, concurrency controls, partitioning and replication opportunities.", "title": "" }, { "docid": "f9fb4e18cd7e63294529269ff61a6575", "text": "Graphs provide a general representation or data model for many types of data where pair-wise relationships are known or thought to be particularly important.1 Thus, it should not be surprising that interest in graph mining has grown with the recent interest in “big data.” Much of the big data generated and analyzed involves pair-wise relationships among a set of entities. For example, in e-commerce applications such as with Amazon’s product database, customers are related to products through their purchasing activities; on the web, web pages are related through hypertext linking relationships; on social networks such as Facebook, individuals are related through their friendships; and so on. Similarly, in scientific applications, research articles are related through citations; proteins are related via metabolic pathways, co-expression, and regulatory network effects within a cell; materials are related through models of their crystalline structure; and so on. While many graphs are small, many large graphs are now extremely LARGE. 
For example, in early 2008, Google announced that it had indexed over 1 trillion URLs on the internet, corresponding to a graph with over 1 trillion nodes [Alpert and Hajaj, 2008]; in 2012, the Facebook friendship network spanned 721 million individuals and had 137 billion links [Backstrom et al., 2012]; phone companies process a few trillion calls a year [Strohm and Homan, 2013]; the human brain has around 100 billion neurons and 100 trillion neuronal connections [Zimmer, 2011]; one of the largest reported graph experiments involved 4.4 trillion nodes and around 70 trillion edges in a synthetic experiment that required one petabyte of storage [Burkhardt and Waring, 2013]; and one of the largest reported experiments with a real-world graph involved over 1.5 trillion edges [Fleury et al., 2015]. Given the ubiquity, size, and importance of graphs in many application areas, it should come as no surprise that large graph mining serves numerous roles within the large-scale data analysis ecosystem. For example, it can help us learn new things about the world, including both the chemical and biological sciences [Martin et al., 2012; Stelzl et al., 2005] as well as results in the social and economic sciences such as the Facebook study that showed that any two people in the(ir) world can be connected through approximately four intermediate individuals [Backstrom et al., 2012]. Alternatively, large graph mining produces similarity information for recommendation, suggestion, and prediction from messy data [Boldi et al., 2008; Epasto et al., 2014]; it can also tell us how to optimize a data infrastructure to improve response time [Ugander and Backstrom, 2013]; and it can tell us when and how our data are anomalous [Akoglu et al., 2010].", "title": "" }, { "docid": "ae468573cd37e4f3bf923d76bc9f0779", "text": "This paper integrates recent work on Path Integral (PI) and Kullback Leibler (KL) divergence stochastic optimal control theory with earlier work on risk sensitivity and the fundamental dualities between free energy and relative entropy. We derive the path integral optimal control framework and its iterative version based on the aforemetioned dualities. The resulting formulation of iterative path integral control is valid for general feedback policies and in contrast to previous work, it does not rely on pre-specified policy parameterizations. The derivation is based on successive applications of Girsanov's theorem and the use of Radon-Nikodým derivative as applied to diffusion processes due to the change of measure in the stochastic dynamics. We compare the PI control derived based on Dynamic Programming with PI based on the duality between free energy and relative entropy. Moreover we extend our analysis on the applicability of the relationship between free energy and relative entropy to optimal control of markov jump diffusions processes. Furthermore, we present the links between KL stochastic optimal control and the aforementioned dualities and discuss its generalizability.", "title": "" }, { "docid": "ccbd40976208fcb7a61d67674d1115af", "text": "Requirements Management (RM) is about organising the requirements and additional information gathered during the Requirements Engineering (RE) process, and managing changes of these requirements. Practioners as well as researchers acknowledge that RM is both important and difficult, and that changing requirements is a challenging factor in many development projects. But why, then, is so little research done within RM? 
This position paper identifies and discusses five research areas where further research within RM is needed.", "title": "" }, { "docid": "a7d7c7ae9da5936f050443f684f48916", "text": "There is growing evidence for the presence of viable microorganisms in geological salt formations that are millions of years old. It is still not known, however, whether these bacteria are dormant organisms that are themselves millions of years old or whether the salt crystals merely provide a habitat in which contemporary microorganisms can grow, perhaps interspersed with relatively short periods of dormancy (McGenity et al. 2000). Vreeland, Rosenzweig and Powers (2000) have recently reported the isolation and growth of a halotolerant spore-formingBacillus species from a brine inclusion within a 250-Myr-old salt crystal from the Permian Salado Formation in New Mexico. This bacterium, Bacillus strain 2-9-3, was informally christened Bacillus permians, and a 16S ribosomal RNA gene was sequenced and deposited in GenBank under the name B. permians (accession number AF166093). It has been claimed thatB. permians was trapped inside the salt crystal 250 MYA and survived within the crystal until the present, most probably as a spore. Serious doubts have been raised concerning the possibility of spore survival for 250 Myr (Tomas Lindahl, personal communication), mostly because spores contain no active DNA repair enzymes, so the DNA is expected to decay into small fragments due to such factors as the natural radioactive radiation in the soil, and the bacterium is expected to lose its viability within at most several hundred years (Lindahl 1993). In this note, we apply theproof-of-the-pudding-is-in-the-eating principle to test whether the newly reported B. permians 16S ribosomal RNA gene sequence is ancient or not. There are several reasons to doubt the antiquity of B. permians. The first concerns the extraordinary similarity of its 16S rRNA gene sequence to that of Bacillus marismortui. Bacillus marismortui was described by Arahal et al. (1999) as a moderately halophilic species from the Dead Sea and was later renamed Salibacillus marismortui (Arahal et al. 2000). TheB. permians sequence differs from that of S. marismortui by only one transition and one transversion out of the 1,555 aligned and unambiguously determined nucleotides. In comparison, the 16S rRNA gene fromStaphylococcus succinus, which was claimed to be ‘‘25–35 million years old’’ (Lambert et al. 1998), differs from its homolog in its closest present-day relative (a urinary pathogen called Staphylococcus saprophyticus) by 19 substitutions out of 1,525 aligned nucleotides. Using Kimura’s (1980) two-parameter model, the difference between the B. permians and S. marismortui sequences translates into 1.3", "title": "" } ]
scidocsrr
919dd986a060e3b4379d5f1a34a4efa6
Low-Rank Discriminant Embedding for Multiview Learning
[ { "docid": "3a7dca2e379251bd08b32f2331329f00", "text": "Canonical correlation analysis (CCA) is a method for finding linear relations between two multidimensional random variables. This paper presents a generalization of the method to more than two variables. The approach is highly scalable, since it scales linearly with respect to the number of training examples and number of views (standard CCA implementations yield cubic complexity). The method is also extended to handle nonlinear relations via kernel trick (this increases the complexity to quadratic complexity). The scalability is demonstrated on a large scale cross-lingual information retrieval task.", "title": "" }, { "docid": "14a2a003117d2bca8cb5034e09e8ea05", "text": "The regularization principals [31] lead approximation schemes to deal with various learning problems, e.g., the regularization of the norm in a reproducing kernel Hilbert space for the ill-posed problem. In this paper, we present a family of subspace learning algorithms based on a new form of regularization, which transfers the knowledge gained in training samples to testing samples. In particular, the new regularization minimizes the Bregman divergence between the distribution of training samples and that of testing samples in the selected subspace, so it boosts the performance when training and testing samples are not independent and identically distributed. To test the effectiveness of the proposed regularization, we introduce it to popular subspace learning algorithms, e.g., principal components analysis (PCA) for cross-domain face modeling; and Fisher's linear discriminant analysis (FLDA), locality preserving projections (LPP), marginal Fisher's analysis (MFA), and discriminative locality alignment (DLA) for cross-domain face recognition and text categorization. Finally, we present experimental evidence on both face image data sets and text data sets, suggesting that the proposed Bregman divergence-based regularization is effective to deal with cross-domain learning problems.", "title": "" } ]
[ { "docid": "f670b91f8874c2c2db442bc869889dbd", "text": "This paper summarizes lessons learned from the first Amazon Picking Challenge in which 26 international teams designed robotic systems that competed to retrieve items from warehouse shelves. This task is currently performed by human workers, and there is hope that robots can someday help increase efficiency and throughput while lowering cost. We report on a 28-question survey posed to the teams to learn about each team’s background, mechanism design, perception apparatus, planning and control approach. We identify trends in this data, correlate it with each team’s success in the competition, and discuss observations and lessons learned. Note to Practitioners: Abstract—Perception, motion planning, grasping, and robotic system engineering has reached a level of maturity that makes it possible to explore automating simple warehouse tasks in semi-structured environments that involve high-mix, low-volume picking applications. This survey summarizes lessons learned from the first Amazon Picking Challenge, highlighting mechanism design, perception, and motion planning algorithms, as well as software engineering practices that were most successful in solving a simplified order fulfillment task. While the choice of mechanism mostly affects execution speed, the competition demonstrated the systems challenges of robotics and illustrated the importance of combining reactive control with deliberative planning.", "title": "" }, { "docid": "8f73870d5e999c0269059c73bb85e05c", "text": "Placing the DRAM in the same package as a processor enables several times higher memory bandwidth than conventional off-package DRAM. Yet, the latency of in-package DRAM is not appreciably lower than that of off-package DRAM. A promising use of in-package DRAM is as a large cache. Unfortunately, most previous DRAM cache designs optimize mainly for cache hit latency and do not consider bandwidth efficiency as a first-class design constraint. Hence, as we show in this paper, these designs are suboptimal for use with in-package DRAM.\n We propose a new DRAM cache design, Banshee, that optimizes for both in-package and off-package DRAM bandwidth efficiency without degrading access latency. Banshee is based on two key ideas. First, it eliminates the tag lookup overhead by tracking the contents of the DRAM cache using TLBs and page table entries, which is efficiently enabled by a new lightweight TLB coherence protocol we introduce. Second, it reduces unnecessary DRAM cache replacement traffic with a new bandwidth-aware frequency-based replacement policy. Our evaluations show that Banshee significantly improves performance (15% on average) and reduces DRAM traffic (35.8% on average) over the best-previous latency-optimized DRAM cache design.", "title": "" }, { "docid": "25dcc8e71b878bfed01e95160d9b82ef", "text": "Wireless Sensor Networks (WSN) has been a focus for research for several years. WSN enables novel and attractive solutions for information gathering across the spectrum of endeavour including transportation, business, health-care, industrial automation, and environmental monitoring. Despite these advances, the exponentially increasing data extracted from WSN is not getting adequate use due to the lack of expertise, time and money with which the data might be better explored and stored for future use. The next generation of WSN will benefit when sensor data is added to blogs, virtual communities, and social network applications. 
This transformation of data derived from sensor networks into a valuable resource for information hungry applications will benefit from techniques being developed for the emerging Cloud Computing technologies. Traditional High Performance Computing approaches may be replaced or find a place in data manipulation prior to the data being moved into the Cloud. In this paper, a novel framework is proposed to integrate the Cloud Computing model with WSN. Deployed WSN will be connected to the proposed infrastructure. Users request will be served via three service layers (IaaS, PaaS, SaaS) either from the archive, archive is made by collecting data periodically from WSN to Data Centres (DC), or by generating live query to corresponding sensor network.", "title": "" }, { "docid": "ccf6084095c4c4fc59483f680e40afee", "text": "This brief presents an identification experiment performed on the coupled dynamics of the edgewise bending vibrations of the rotor blades and the in-plane motion of the drivetrain of three-bladed wind turbines. These dynamics vary with rotor speed, and are subject to periodic wind flow disturbances. This brief demonstrates that this time-varying behavior can be captured in a linear parameter-varying (LPV) model with the rotor speed as the scheduling signal, and with additional sinusoidal inputs that are used as basis functions for the periodic wind flow disturbances. By including these inputs, the predictor-based LPV subspace identification approach (LPV PBSIDopt) was tailored for wind turbine applications. Using this tailor-made approach, the LPV model is identified from data measured with the three-bladed Controls Advanced Research Turbine (CART3) at the National Renewable Energy Laboratory's National Wind Technology Center.", "title": "" }, { "docid": "a3ee3861b550cb8c5d98339ca7673c92", "text": "Background: Interview measures for investigating adverse childhood experiences, such as the Childhood Experience of Care and Abuse (CECA) instrument, are comprehensive and can be lengthy and time-consuming. A questionnaire version of the CECA (CECA.Q) has been developed which could allow for screening of individuals in research settings. This would enable researchers to identify individuals with adverse early experiences who might benefit from an in-depth interview. This paper aims to validate the CECA.Q against the CECA interview in a clinical population. Methods: One hundred and eight patients attending an affective disorders service were assessed using both the CECA interview and questionnaire measures. A follow-up sample was recruited 3 years later and sent the questionnaire. The questionnaire was also compared with the established Parental Bonding Instrument (PBI). Results: Agreement between ratings on the interview and questionnaire were high. Scales measuring antipathy and neglect also correlated highly with the PBI. The follow-up sample revealed the questionnaire to have a high degree of reliability over a long period of time. Conclusions: The CECA.Q appears to be a reliable and valid measure which can be used in research on clinical populations to screen for individuals who have experienced severe adversity in childhood.", "title": "" }, { "docid": "f88b8c7cbabda618f59e75357c1d8262", "text": "A security sandbox is a technology that is often used to detect advanced malware. However, current sandboxes are highly dependent on VM hypervisor types and versions. 
Thus, in this paper, we introduce a new sandbox design, using memory forensics techniques, to provide an agentless sandbox solution that is independent of the VM hypervisor. In particular, we leverage the VM introspection method to monitor malware running memory data outside the VM and analyze its system behaviors, such as process, file, registry, and network activities. We evaluate the feasibility of this method using 20 advanced and 8 script-based malware samples. We furthermore demonstrate how to analyze malware behavior from memory and verify the results with three different sandbox types. The results show that we can analyze suspicious malware activities, which is also helpful for cyber security defense.", "title": "" }, { "docid": "cfebf44f0d3ec7d1ffe76b832704a6d2", "text": "In practical scenario the transmission of signal or data from source to destination is very challenging. As there is a lot of surrounding environmental changes which influence the transmitted signal. The ISI, multipath will corrupt the data and this data appears at the receiver or destination. Due to this time varying multipath fading different channel estimation filter at the receiver are used to improve the performance. The performance of LMS and RLS adaptive algorithms are analyzed over a AWGN and Rayleigh channels under different multipath fading environments for estimating the time-varying channel.", "title": "" }, { "docid": "6c4495b8ecb26dae8765052e5c8c2678", "text": "Neurodevelopmental disorders such as autism, attention deficit disorder, mental retardation, and cerebral palsy are common, costly, and can cause lifelong disability. Their causes are mostly unknown. A few industrial chemicals (eg, lead, methylmercury, polychlorinated biphenyls [PCBs], arsenic, and toluene) are recognised causes of neurodevelopmental disorders and subclinical brain dysfunction. Exposure to these chemicals during early fetal development can cause brain injury at doses much lower than those affecting adult brain function. Recognition of these risks has led to evidence-based programmes of prevention, such as elimination of lead additives in petrol. Although these prevention campaigns are highly successful, most were initiated only after substantial delays. Another 200 chemicals are known to cause clinical neurotoxic effects in adults. Despite an absence of systematic testing, many additional chemicals have been shown to be neurotoxic in laboratory models. The toxic effects of such chemicals in the developing human brain are not known and they are not regulated to protect children. The two main impediments to prevention of neurodevelopmental deficits of chemical origin are the great gaps in testing chemicals for developmental neurotoxicity and the high level of proof required for regulation. New, precautionary approaches that recognise the unique vulnerability of the developing brain are needed for testing and control of chemicals.", "title": "" }, { "docid": "5089b13262867f2bd77d85460000cfaa", "text": "While different optical flow techniques continue to appear, there has been a lack of quantitative evaluation of existing methods. For a common set of real and synthetic image sequences, we report the results of a number of regularly cited optical flow techniques, including instances of differential, matching, energy-based, and phase-based methods. 
Our comparisons are primarily empirical, and concentrate on the accuracy, reliability, and density of the velocity measurements; they show that performance can differ significantly among the techniques we implemented.", "title": "" }, { "docid": "c57d9c4f62606e8fccef34ddd22edaec", "text": "Based on research into learning programming and a review of program visualization research, we designed an educational software tool that aims to target students' apparent fragile knowledge of elementary programming which manifests as difficulties in tracing and writing even simple programs. Most existing tools build on a single supporting technology and focus on one aspect of learning. For example, visualization tools support the development of a conceptual-level understanding of how programs work, and automatic assessment tools give feedback on submitted tasks. We implemented a combined tool that closely integrates programming tasks with visualizations of program execution and thus lets students practice writing code and more easily transition to visually tracing it in order to locate programming errors. In this paper we present Jype, a web-based tool that provides an environment for visualizing the line-by-line execution of Python programs and for solving programming exercises with support for immediate automatic feedback and an integrated visual debugger. Moreover, the debugger allows stepping back in the visualization of the execution as if executing in reverse. Jype is built for Python, when most research in programming education support tools revolves around Java.", "title": "" }, { "docid": "eaa3284dbe2bbd5c72df99d76d4909a7", "text": "BACKGROUND\nWorldwide, depression is rated as the fourth leading cause of disease burden and is projected to be the second leading cause of disability by 2020. Annual depression-related costs in the United States are estimated at US $210.5 billion, with employers bearing over 50% of these costs in productivity loss, absenteeism, and disability. Because most adults with depression never receive treatment, there is a need to develop effective interventions that can be more widely disseminated through new channels, such as employee assistance programs (EAPs), and directly to individuals who will not seek face-to-face care.\n\n\nOBJECTIVE\nThis study evaluated a self-guided intervention, using the MoodHacker mobile Web app to activate the use of cognitive behavioral therapy (CBT) skills in working adults with mild-to-moderate depression. It was hypothesized that MoodHacker users would experience reduced depression symptoms and negative cognitions, and increased behavioral activation, knowledge of depression, and functioning in the workplace.\n\n\nMETHODS\nA parallel two-group randomized controlled trial was conducted with 300 employed adults exhibiting mild-to-moderate depression. Participants were recruited from August 2012 through April 2013 in partnership with an EAP and with outreach through a variety of additional non-EAP organizations. Participants were blocked on race/ethnicity and then randomly assigned within each block to receive, without clinical support, either the MoodHacker intervention (n=150) or alternative care consisting of links to vetted websites on depression (n=150). Participants in both groups completed online self-assessment surveys at baseline, 6 weeks after baseline, and 10 weeks after baseline. 
Surveys assessed (1) depression symptoms, (2) behavioral activation, (3) negative thoughts, (4) worksite outcomes, (5) depression knowledge, and (6) user satisfaction and usability. After randomization, all interactions with subjects were automated with the exception of safety-related follow-up calls to subjects reporting current suicidal ideation and/or severe depression symptoms.\n\n\nRESULTS\nAt 6-week follow-up, significant effects were found on depression, behavioral activation, negative thoughts, knowledge, work productivity, work absence, and workplace distress. MoodHacker yielded significant effects on depression symptoms, work productivity, work absence, and workplace distress for those who reported access to an EAP, but no significant effects on these outcome measures for those without EAP access. Participants in the treatment arm used the MoodHacker app an average of 16.0 times (SD 13.3), totaling an average of 1.3 hours (SD 1.3) of use between pretest and 6-week follow-up. Significant effects on work absence in those with EAP access persisted at 10-week follow-up.\n\n\nCONCLUSIONS\nThis randomized effectiveness trial found that the MoodHacker app produced significant effects on depression symptoms (partial eta(2) = .021) among employed adults at 6-week follow-up when compared to subjects with access to relevant depression Internet sites. The app had stronger effects for individuals with access to an EAP (partial eta(2) = .093). For all users, the MoodHacker program also yielded greater improvement on work absence, as well as the mediating factors of behavioral activation, negative thoughts, and knowledge of depression self-care. Significant effects were maintained at 10-week follow-up for work absence. General attenuation of effects at 10-week follow-up underscores the importance of extending program contacts to maintain user engagement. This study suggests that light-touch, CBT-based mobile interventions like MoodHacker may be appropriate for implementation within EAPs and similar environments. In addition, it seems likely that supporting MoodHacker users with guidance from counselors may improve effectiveness for those who seek in-person support.\n\n\nTRIAL REGISTRATION\nClinicalTrials.gov NCT02335554; https://clinicaltrials.gov/ct2/show/NCT02335554 (Archived by WebCite at http://www.webcitation.org/6dGXKWjWE).", "title": "" }, { "docid": "b5b6fc6ce7690ae8e49e1951b08172ce", "text": "The output voltage derivative term associated with a PID controller injects significant noise in a dc-dc converter. This is mainly due to the parasitic resistance and inductance of the output capacitor. Particularly, during a large-signal transient, noise injection significantly degrades phase margin. Although noise characteristics can be improved by reducing the cutoff frequency of the low-pass filter associated with the voltage derivative, this degrades the closed-loop bandwidth. A formulation of a PID controller is introduced to replace the output voltage derivative with information about the capacitor current, thus reducing noise injection. It is shown that this formulation preserves the fundamental principle of a PID controller and incorporates a load current feedforward, as well as inductor current dynamics. This can be helpful to further improve bandwidth and phase margin. The proposed method is shown to be equivalent to a voltage-mode-controlled buck converter and a current-mode-controlled boost converter with a PID controller in the voltage feedback loop. 
A buck converter prototype is tested, and the proposed algorithm is implemented using a field-programmable gate array.", "title": "" }, { "docid": "9a4bd291522b19ab4a6848b365e7f546", "text": "This paper reports on modern approaches in Information Extraction (IE) and its two main sub-tasks of Named Entity Recognition (NER) and Relation Extraction (RE). Basic concepts and the most recent approaches in this area are reviewed, which mainly include Machine Learning (ML) based approaches and the more recent trend to Deep Learning (DL)", "title": "" }, { "docid": "d50d3997572847200f12d69f61224760", "text": "The main function of a network layer is to route packets from the source machine to the destination machine. Algorithms that are used for route selection and data structure are the main parts for the network layer. In this paper we examine the network performance when using three routing protocols, RIP, OSPF and EIGRP. Video, HTTP and Voice application where configured for network transfer. We also examine the behaviour when using link failure/recovery controller between network nodes. The simulation results are analyzed, with a comparison between these protocols on the effectiveness and performance in network implemented.", "title": "" }, { "docid": "ac3511f0a3307875dc49c26da86afcfb", "text": "With the explosive growth of microblogging services, short-text messages (also known as tweets) are being created and shared at an unprecedented rate. Tweets in its raw form can be incredibly informative, but also overwhelming. For both end-users and data analysts it is a nightmare to plow through millions of tweets which contain enormous noises and redundancies. In this paper, we study continuous tweet summarization as a solution to address this problem. While traditional document summarization methods focus on static and small-scale data, we aim to deal with dynamic, quickly arriving, and large-scale tweet streams. We propose a novel prototype called Sumblr (SUMmarization By stream cLusteRing) for tweet streams. We first propose an online tweet stream clustering algorithm to cluster tweets and maintain distilled statistics called Tweet Cluster Vectors. Then we develop a TCV-Rank summarization technique for generating online summaries and historical summaries of arbitrary time durations. Finally, we describe a topic evolvement detection method, which consumes online and historical summaries to produce timelines automatically from tweet streams. Our experiments on large-scale real tweets demonstrate the efficiency and effectiveness of our approach.", "title": "" }, { "docid": "e9c4f5743dcbd1935134f1e34e7d2adc", "text": "Consumer vehicles have been proven to be insecure; the addition of electronics to monitor and control vehicle functions have added complexity resulting in safety critical vulnerabilities. Heavy commercial vehicles have also begun adding electronic control systems similar to consumer vehicles. We show how the openness of the SAE J1939 standard used across all US heavy vehicle industries gives easy access for safetycritical attacks and that these attacks aren’t limited to one specific make, model, or industry. We test our attacks on a 2006 Class-8 semi tractor and 2001 school bus. With these two vehicles, we demonstrate how simple it is to replicate the kinds of attacks used on consumer vehicles and that it is possible to use the same attack on other vehicles that use the SAE J1939 standard. 
We show safety critical attacks that include the ability to accelerate a truck in motion, disable the driver’s ability to accelerate, and disable the vehicle’s engine brake. We conclude with a discussion for possibilities of additional attacks and potential remote attack vectors.", "title": "" }, { "docid": "add80fd9c0cb935a5868e0b31c1d7432", "text": "Adders are the basic building block in the arithmetic circuits. In order to achieve high speed and low power consumption a 32bit carry skip adder is proposed. In the conventional technique, a hybrid variable latency extension is used with a method called as parallel prefix network (Brent-Kung). As a result, larger delay along with higher power consumption is obtained, which is the main drawback for any VLSI applications. In order to overcome this, Han Carlson adder along with CSA is used to design parallel prefix network. Therefore it reduces delay and power consumption. The proposed structure is designed by using HSPICE simulation tool. Therefore, a lower delay and low power consumption can be achieved in the benchmark circuits. Keyword: High speed, low delay, efficient power consumption and size.", "title": "" }, { "docid": "5fb0931dafbb024663f2d68faca2f552", "text": "The instrumentation and control (I&C) systems in nuclear power plants (NPPs) collect signals from sensors measuring plant parameters, integrate and evaluate sensor information, monitor plant performance, and generate signals to control plant devices for a safe operation of NPPs. Although the application of digital technology in industrial control systems (ICS) started a few decades ago, I&C systems in NPPs have utilized analog technology longer than any other industries. The reason for this stems from the fact that NPPs require strong assurance for safety and reliability. In recent years, however, digital I&C systems have been developed and installed in new and operating NPPs. This application of digital computers, and communication system and network technologies in NPP I&C systems accompanies cyber security concerns, similar to other critical infrastructures based on digital technologies. The Stuxnet case in 2010 evoked enormous concern regarding cyber security in NPPs. Thus, performing appropriate cyber security risk assessment for the digital I&C systems of NPPs, and applying security measures to the systems, has become more important nowadays. In general, approaches to assure cyber security in NPPs may be compatible with those for ICS and/or supervisory control and data acquisition (SCADA) systems in many aspects. Cyber security requirements and the risk assessment methodologies for ICS and SCADA systems are adopted from those for information technology (IT) systems. Many standards and guidance documents have been published for these areas [1~10]. Among them NIST SP 800-30 [4], NIST SP 800-37 [5], and NIST 800-39 [6] describe the risk assessment methods, NIST SP 800-53 [7] and NIST SP 800-53A [8] address security controls for IT systems. NIST SP 800-82 [10] describes the differences between IT systems and ICS and provides guidance for securing ICS, including SCADA systems, distributed control systems (DCS), and other systems performing control functions. As NIST SP 800-82 noted the differences between IT The applications of computers and communication system and network technologies in nuclear power plants have expanded recently. 
This application of digital technologies to the instrumentation and control systems of nuclear power plants brings with it the cyber security concerns similar to other critical infrastructures. Cyber security risk assessments for digital instrumentation and control systems have become more crucial in the development of new systems and in the operation of existing systems. Although the instrumentation and control systems of nuclear power plants are similar to industrial control systems, the former have specifications that differ from the latter in terms of architecture and function, in order to satisfy nuclear safety requirements, which need different methods for the application of cyber security risk assessment. In this paper, the characteristics of nuclear power plant instrumentation and control systems are described, and the considerations needed when conducting cyber security risk assessments in accordance with the lifecycle process of instrumentation and control systems are discussed. For cyber security risk assessments of instrumentation and control systems, the activities and considerations necessary for assessments during the system design phase or component design and equipment supply phase are presented in the following 6 steps: 1) System Identification and Cyber Security Modeling, 2) Asset and Impact Analysis, 3) Threat Analysis, 4) Vulnerability Analysis, 5) Security Control Design, and 6) Penetration test. The results from an application of the method to a digital reactor protection system are described.", "title": "" } ]
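The passage above closes with a six-step cyber security risk assessment for instrumentation and control systems (system identification and modeling, asset and impact analysis, threat analysis, vulnerability analysis, security control design, and penetration test). The sketch below is only a rough illustration of the kind of structured bookkeeping such an assessment implies: the RiskItem fields, the example entries, and the impact-times-likelihood scoring rule are all assumptions made for this example, not the method used in the cited work.

```python
# Toy cyber security risk register. All fields, entries, and the scoring rule
# (risk = impact * likelihood) are illustrative assumptions, not the cited method.
from dataclasses import dataclass

@dataclass
class RiskItem:
    asset: str          # e.g. a controller or an engineering workstation
    threat: str         # what could go wrong
    vulnerability: str  # why it could go wrong
    impact: int         # 1 (negligible) .. 5 (severe)
    likelihood: int     # 1 (rare) .. 5 (frequent)

    @property
    def risk(self) -> int:
        # Illustrative scoring rule, not the assessment method of the passage.
        return self.impact * self.likelihood

register = [
    RiskItem("reactor protection channel", "unauthorized setpoint change",
             "unhardened maintenance interface", impact=5, likelihood=2),
    RiskItem("plant data network", "denial of service",
             "flat network topology", impact=3, likelihood=3),
]

# Rank items so that security control design targets the highest risks first.
for item in sorted(register, key=lambda r: r.risk, reverse=True):
    print(f"{item.asset:26s} risk={item.risk:2d}  ({item.threat})")
```

Control design and penetration testing would then start from the top of such a ranking.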
scidocsrr
46a24fb03287ffd169b17ac0cc91e047
Integrating Question Classification and Deep Learning for improved Answer Selection
[ { "docid": "1a6ece40fa87e787f218902eba9b89f7", "text": "Learning a similarity function between pairs of objects is at the core of learning to rank approaches. In information retrieval tasks we typically deal with query-document pairs, in question answering -- question-answer pairs. However, before learning can take place, such pairs needs to be mapped from the original space of symbolic words into some feature space encoding various aspects of their relatedness, e.g. lexical, syntactic and semantic. Feature engineering is often a laborious task and may require external knowledge sources that are not always available or difficult to obtain. Recently, deep learning approaches have gained a lot of attention from the research community and industry for their ability to automatically learn optimal feature representation for a given task, while claiming state-of-the-art performance in many tasks in computer vision, speech recognition and natural language processing. In this paper, we present a convolutional neural network architecture for reranking pairs of short texts, where we learn the optimal representation of text pairs and a similarity function to relate them in a supervised way from the available training data. Our network takes only words in the input, thus requiring minimal preprocessing. In particular, we consider the task of reranking short text pairs where elements of the pair are sentences. We test our deep learning system on two popular retrieval tasks from TREC: Question Answering and Microblog Retrieval. Our model demonstrates strong performance on the first task beating previous state-of-the-art systems by about 3\\% absolute points in both MAP and MRR and shows comparable results on tweet reranking, while enjoying the benefits of no manual feature engineering and no additional syntactic parsers.", "title": "" }, { "docid": "87f0a390580c452d77fcfc7040352832", "text": "• J. Wieting, M. Bansal, K. Gimpel, K. Livescu, and D. Roth. 2015. From paraphrase database to compositional paraphrase model and back. TACL. • K. S. Tai, R. Socher, and C. D. Manning. 2015. Improved semantic representations from treestructured long short-term memory networks. ACL. • W. Yin and H. Schutze. 2015. Convolutional neural network for paraphrase identification. NAACL. The product also streams internet radio and comes with a 30-day free trial for realnetworks' rhapsody music subscription. The device plays internet radio streams and comes with a 30-day trial of realnetworks rhapsody music service. Given two sentences, measure their similarity:", "title": "" } ]
[ { "docid": "977f7723cde3baa1d98ca99cd9ed8881", "text": "Identity Crime is well known, established, and costly. Identity Crime is the term used to refer to all types of crime in which someone wrongfully obtains and uses another person’s personal data in some way that involves fraud or deception, typically for economic gain. Forgery and use of fraudulent identity documents are major enablers of Identity Fraud. It has affected the e-commerce. It is increasing significantly with the development of modern technology and the global superhighways of communication, resulting in the loss of lots of money worldwide each year. Also along with transaction the application domain such as credit application is hit by this crime. These are growing concerns for not only governmental bodies but business organizations also all over the world. This paper gives a brief summary of the identity fraud. Also it discusses various data mining techniques used to overcome it.", "title": "" }, { "docid": "bc98a124101c25116182a1a8f21e328e", "text": "Serverless computing lets businesses and application developers focus on the program they need to run, without worrying about the machine on which it runs, or the resources it requires.", "title": "" }, { "docid": "6f265af3f4f93fcce13563cac14b5774", "text": "Inorganic pyrophosphate (PP(i)) produced by cells inhibits mineralization by binding to crystals. Its ubiquitous presence is thought to prevent \"soft\" tissues from mineralizing, whereas its degradation to P(i) in bones and teeth by tissue-nonspecific alkaline phosphatase (Tnap, Tnsalp, Alpl, Akp2) may facilitate crystal growth. Whereas the crystal binding properties of PP(i) are largely understood, less is known about its effects on osteoblast activity. We have used MC3T3-E1 osteoblast cultures to investigate the effect of PP(i) on osteoblast function and matrix mineralization. Mineralization in the cultures was dose-dependently inhibited by PP(i). This inhibition could be reversed by Tnap, but not if PP(i) was bound to mineral. PP(i) also led to increased levels of osteopontin (Opn) induced via the Erk1/2 and p38 MAPK signaling pathways. Opn regulation by PP(i) was also insensitive to foscarnet (an inhibitor of phosphate uptake) and levamisole (an inhibitor of Tnap enzymatic activity), suggesting that increased Opn levels did not result from changes in phosphate. Exogenous OPN inhibited mineralization, but dephosphorylation by Tnap reversed this effect, suggesting that OPN inhibits mineralization via its negatively charged phosphate residues and that like PP(i), hydrolysis by Tnap reduces its mineral inhibiting potency. Using enzyme kinetic studies, we have shown that PP(i) inhibits Tnap-mediated P(i) release from beta-glycerophosphate (a commonly used source of organic phosphate for culture mineralization studies) through a mixed type of inhibition. In summary, PP(i) prevents mineralization in MC3T3-E1 osteoblast cultures by at least three different mechanisms that include direct binding to growing crystals, induction of Opn expression, and inhibition of Tnap activity.", "title": "" }, { "docid": "dba627e41a71ddeb2390a2d5d4682930", "text": "We present GeoXp, an R package implementing interactive graphics for exploratory spatial data analysis. We use a data basis concerning public schools of the French MidiPyrénées region to illustrate the use of these exploratory techniques based on the coupling between a statistical graph and a map. 
Besides elementary plots like boxplots, histograms or simple scatterplots, GeoXp also couples maps with Moran scatterplots, variogram clouds, Lorenz curves, etc. In order to make the most of the multidimensionality of the data, GeoXp includes dimension reduction techniques such as principal components analysis and cluster analysis whose results are also linked to the map.", "title": "" }, { "docid": "aadbe9c44c2d2a859f652f53b67ae22c", "text": "In this communication, a switchable absorber/reflector based on active frequency-selective surface (AFSS) has been presented for single- as well as broadband applications. The FSS comprises periodic patterns of square loops connected among themselves through p-i-n diodes, which exhibit switchable performances. The novelty of the proposed design lies in its symmetric configuration and biasing network, which makes the structure polarization insensitive unlike the earlier reported AFSSs. A single-band switchable absorber/reflector has initially been realized, which is characterized through an equivalent circuit model to derive the circuit parameters. Later, surface-mount resistors have been implemented in the design to realize a wideband switchable absorber/reflector aimed for C-band applications. Both the structures have used a novel biasing methodology to provide the bias voltage to semiconductor switches without disturbing the original resonance pattern. Furthermore, the fabricated samples, while measuring in an anechoic chamber, show good agreement with the simulated responses under normal incidence as well as for different polarization angles.", "title": "" }, { "docid": "5479fec24e36a1e88c32a58d6eb5b158", "text": "This paper describes and evaluates a method for computing artist similarity from a set of artist biographies. The proposed method aims at leveraging semantic information present in these biographies, and can be divided in three main steps, namely: (1) entity linking, i.e. detecting mentions to named entities in the text and linking them to an external knowledge base; (2) deriving a knowledge representation from these mentions in the form of a semantic graph or a mapping to a vector-space model; and (3) computing semantic similarity between documents. We test this approach on a corpus of 188 artist biographies and a slightly larger dataset of 2,336 artists, both gathered from Last.fm. The former is mapped to the MIREX Audio and Music Similarity evaluation dataset, so that its similarity judgments can be used as ground truth. For the latter dataset we use the similarity between artists as provided by the Last.fm API. Our evaluation results show that an approach that computes similarity over a graph of entities and semantic categories clearly outperforms a baseline that exploits word co-occurrences and latent factors.", "title": "" }, { "docid": "d09144b7f20f75501e2e0806f6c8258c", "text": "Social Network Marketing techniques employ pre-existing social networks to increase brands or products awareness through word-of-mouth promotion. Full understanding of social network marketing and the potential candidates that can thus be marketed to certainly offer lucrative opportunities for prospective sellers. Due to the complexity of social networks, few models exist to interpret social network marketing realistically. We propose to model social network marketing using Heat Diffusion Processes. This paper presents three diffusion models, along with three algorithms for selecting the best individuals to receive marketing samples. 
These approaches have the following advantages to best illustrate the properties of real-world social networks: (1) We can plan a marketing strategy sequentially in time since we include a time factor in the simulation of product adoptions; (2) The algorithm of selecting marketing candidates best represents and utilizes the clustering property of real-world social networks; and (3) The model we construct can diffuse both positive and negative comments on products or brands in order to simulate the complicated communications within social networks. Our work represents a novel approach to the analysis of social network marketing, and is the first work to propose how to defend against negative comments within social networks. Complexity analysis shows our model is also scalable to very large social networks.", "title": "" }, { "docid": "67a3f92ab8c5a6379a30158bb9905276", "text": "We present a compendium of recent and current projects that utilize crowdsourcing technologies for language studies, finding that the quality is comparable to controlled laboratory experiments, and in some cases superior. While crowdsourcing has primarily been used for annotation in recent language studies, the results here demonstrate that far richer data may be generated in a range of linguistic disciplines from semantics to psycholinguistics. For these, we report a number of successful methods for evaluating data quality in the absence of a ‘correct’ response for any given data point.", "title": "" }, { "docid": "b29947243b1ad21b0529a6dd8ef3c529", "text": "We define a multiresolution spline technique for combining two or more images into a larger image mosaic. In this procedure, the images to be splined are first decomposed into a set of band-pass filtered component images. Next, the component images in each spatial frequency hand are assembled into a corresponding bandpass mosaic. In this step, component images are joined using a weighted average within a transition zone which is proportional in size to the wave lengths represented in the band. Finally, these band-pass mosaic images are summed to obtain the desired image mosaic. In this way, the spline is matched to the scale of features within the images themselves. When coarse features occur near borders, these are blended gradually over a relatively large distance without blurring or otherwise degrading finer image details in the neighborhood of th e border.", "title": "" }, { "docid": "d4cd0dabcf4caa22ad92fab40844c786", "text": "NA", "title": "" }, { "docid": "ccd6e2b8dac7bf25e9ac70ee35a06751", "text": "In this letter, an ultrawideband (UWB) bandpass filter with a band notch is proposed. The UWB BPF (3.1-10.6 GHz) is realized by cascading a distributed high-pass filter and an elliptic low-pass filter with an embedded stepped impedance resonator (SIR) to achieve a band notch characteristic. The notch band is obtained at 5.22 GHz. It is shown that the notch frequency can be tuned by changing the impedance ratio of the embedded SIR. A fabricated prototype of the proposed UWB bandpass filter is developed. The inband and out-of-band performance obtained by measurement, EM simulation, and that with an equivalent circuit model are in good agreement.", "title": "" }, { "docid": "3cff4fcd725b1f6ddf903849cb05f28f", "text": "Context personalisation is a flourishing area of research with many applications. Context personalisation systems usually employ a user model to predict the appeal of the context to a particular user given a history of interactions. 
Most of the models used are context-dependent and their applicability is usually limited to the system and the data used for model construction. Establishing models of user experience that are highly scalable while maintaining performance constitutes an important research direction. In this paper, we propose generic models of user experience in the computer games domain. We employ two datasets collected from players' interactions with two games from different genres where accurate models of player experience were previously built. We take the approach one step further by investigating the modelling mechanism's ability to generalise over the two datasets. We further examine whether generic features of player behaviour can be defined and used to boost the modelling performance. The accuracies obtained in both experiments indicate promise for the proposed approach and suggest that game-independent player experience models can be built.", "title": "" }, { "docid": "aaa53a3d22c20db14658df37a8ace50b", "text": "Four theories about cultural suppression of female sexuality are evaluated. Data are reviewed on cross-cultural differences in power and sex ratios, reactions to the sexual revolution, direct restraining influences on adolescent and adult female sexuality, double standard patterns of sexual morality, female genital surgery, legal and religious restrictions on sex, prostitution and pornography, and sexual deception. The view that men suppress female sexuality received hardly any support and is flatly contradicted by some findings. Instead, the evidence favors the view that women have worked to stifle each other's sexuality because sex is a limited resource that women use to negotiate with men, and scarcity gives women an advantage.", "title": "" }, { "docid": "75567866ec1a72c48d78658a0b3115f9", "text": "BACKGROUND\nImpingement is a common cause of shoulder pain. Impingement mechanisms may occur subacromially (under the coraco-acromial arch) or internally (within the shoulder joint), and a number of secondary pathologies may be associated. These include subacromial-subdeltoid bursitis (inflammation of the subacromial portion of the bursa, the subdeltoid portion, or both), tendinopathy or tears affecting the rotator cuff or the long head of biceps tendon, and glenoid labral damage. Accurate diagnosis based on physical tests would facilitate early optimisation of the clinical management approach. Most people with shoulder pain are diagnosed and managed in the primary care setting.\n\n\nOBJECTIVES\nTo evaluate the diagnostic accuracy of physical tests for shoulder impingements (subacromial or internal) or local lesions of bursa, rotator cuff or labrum that may accompany impingement, in people whose symptoms and/or history suggest any of these disorders.\n\n\nSEARCH METHODS\nWe searched electronic databases for primary studies in two stages. In the first stage, we searched MEDLINE, EMBASE, CINAHL, AMED and DARE (all from inception to November 2005). In the second stage, we searched MEDLINE, EMBASE and AMED (2005 to 15 February 2010). Searches were delimited to articles written in English.\n\n\nSELECTION CRITERIA\nWe considered for inclusion diagnostic test accuracy studies that directly compared the accuracy of one or more physical index tests for shoulder impingement against a reference test in any clinical setting. 
We considered diagnostic test accuracy studies with cross-sectional or cohort designs (retrospective or prospective), case-control studies and randomised controlled trials.\n\n\nDATA COLLECTION AND ANALYSIS\nTwo pairs of review authors independently performed study selection, assessed the study quality using QUADAS, and extracted data onto a purpose-designed form, noting patient characteristics (including care setting), study design, index tests and reference standard, and the diagnostic 2 x 2 table. We presented information on sensitivities and specificities with 95% confidence intervals (95% CI) for the index tests. Meta-analysis was not performed.\n\n\nMAIN RESULTS\nWe included 33 studies involving 4002 shoulders in 3852 patients. Although 28 studies were prospective, study quality was still generally poor. Mainly reflecting the use of surgery as a reference test in most studies, all but two studies were judged as not meeting the criteria for having a representative spectrum of patients. However, even these two studies only partly recruited from primary care.The target conditions assessed in the 33 studies were grouped under five main categories: subacromial or internal impingement, rotator cuff tendinopathy or tears, long head of biceps tendinopathy or tears, glenoid labral lesions and multiple undifferentiated target conditions. The majority of studies used arthroscopic surgery as the reference standard. Eight studies utilised reference standards which were potentially applicable to primary care (local anaesthesia, one study; ultrasound, three studies) or the hospital outpatient setting (magnetic resonance imaging, four studies). One study used a variety of reference standards, some applicable to primary care or the hospital outpatient setting. In two of these studies the reference standard used was acceptable for identifying the target condition, but in six it was only partially so. The studies evaluated numerous standard, modified, or combination index tests and 14 novel index tests. There were 170 target condition/index test combinations, but only six instances of any index test being performed and interpreted similarly in two studies. Only two studies of a modified empty can test for full thickness tear of the rotator cuff, and two studies of a modified anterior slide test for type II superior labrum anterior to posterior (SLAP) lesions, were clinically homogenous. Due to the limited number of studies, meta-analyses were considered inappropriate. Sensitivity and specificity estimates from each study are presented on forest plots for the 170 target condition/index test combinations grouped according to target condition.\n\n\nAUTHORS' CONCLUSIONS\nThere is insufficient evidence upon which to base selection of physical tests for shoulder impingements, and local lesions of bursa, tendon or labrum that may accompany impingement, in primary care. The large body of literature revealed extreme diversity in the performance and interpretation of tests, which hinders synthesis of the evidence and/or clinical applicability.", "title": "" }, { "docid": "62688aa48180943a6fcf73fef154fe75", "text": "Oxidative stress is a phenomenon associated with the pathology of several diseases including atherosclerosis, neurodegenerative diseases such as Alzheimer’s and Parkinson’s diseases, cancer, diabetes mellitus, inflammatory diseases, as well as psychiatric disorders or aging process. 
Oxidative stress is defined as an imbalance between the production of free radicals and reactive metabolites, so called oxidants, and their elimination by protective mechanisms named antioxidative systems. Free radicals and their metabolites prevail over antioxidants. This imbalance leads to damage of important biomolecules and organs with plausible impact on the whole organism. Oxidative and antioxidative processes are associated with electron transfer influencing the redox state of cells and organisms; therefore, oxidative stress is also known as redox stress. At present, the opinion that oxidative stress is not always harmful has been accepted. Depending on its intensity, it can play a role in regulation of other important processes through modulation of signal pathways, influencing synthesis of antioxidant enzymes, repair processes, inflammation, apoptosis and cell proliferation, and thus process of a malignity. Therefore, improper administration of antioxidants can potentially negatively impact biological systems.", "title": "" }, { "docid": "cf506587f2699d88e4a2e0be36ccac41", "text": "A complete list of the titles in this series appears at the end of this volume. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Neither the publisher nor author shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages. Wiley also publishes its books in a variety of electronic formats. Some content that appears in print, however, may not be available in electronic format.", "title": "" }, { "docid": "c92afabbe921c3b75408d841189ff1df", "text": "The contamination from heavy metals has risen during the last decade due to increase in Industrialization. This has led to a significant increase in health problems. Many of the known remediation techniques to remove heavy metal from soil are expensive, time consuming and environmentally destructive. Phytoremediation is an emerging technology for removal of heavy metals which is cost effective, and has aesthetic advantages and long term applicability. The present study aims at efficiently utilizing Brassica juncea L. to remove lead (Pb). The result of our study indicate that amount of lead in Indian mustard is increased with the amount of EDTA applied to the soil and maximum accumulation was achieved with 5mmol/kg of EDTA. On further increase in EDTA resulted in leaf necrosis and early shedding of leaves. Therefore EDTA at a concentration of 5mmol/kg was considered optimum for lead accumulation by Brassica juncea L.", "title": "" }, { "docid": "029c3f6528a4c80e8afe05c9397cc06a", "text": "There are five types of taste receptor cell, sweet, salt, bitter, sour, and umami (protein taste). There are 1000 olfactory receptor genes each specifying a different type of receptor each for a set of odors. 
Tastes are primary, unlearned, rewards and punishers, and are important in emotion. Pheromones and some other olfactory stimuli are primary reinforcers, but for many odors the reward value is learned by stimulus–reinforcer association learning. The primary taste cortex in the anterior insula provides separate and combined representations of the taste, temperature, and texture (including fat texture) of food in the mouth independently of hunger and thus of reward value and pleasantness. One synapse on, in the orbitofrontal cortex, these sensory inputs are for some neurons combined by learning with olfactory and visual inputs, and these neurons encode food reward value in that they only respond to food when hungry, and in that activations correlate with subjective pleasantness. Cognitive factors, including word-level descriptions, and attention, modulate the representation of the reward value of taste, odor, and flavor in the orbitofrontal cortex and a region to which it projects, the anterior cingulate cortex. Further, there are individual differences in the representation of the reward value of food in the orbitofrontal cortex. Overeating and obesity are related in many cases to an increased reward value of the sensory inputs produced by foods, and their modulation by cognition and attention that override existing satiety signals. Rapid advances have been made recently in understanding the receptors for taste and smell, the neural systems for taste and smell, the separation of sensory from hedonic processing of taste and smell, and how taste and smell and also the texture of food are important in the palatability of food and appetite control. Emphasis is placed on these advances. Taste receptors. There are receptors on the tongue for sweet, salt, bitter, sour, and the fifth taste, umami as exemplified by monosodium glutamate (Chandrashekar et al., 2006; Chaudhari and Roper, 2010). Umami taste is found in a diversity of foods rich in glutamate like fish, meat, human mothers' milk, tomatoes and some vegetables, and is enhanced by some ribonucleotides (including inosine and guanosine nucleotides), which are present in, for example, meat and some fish. The mixture of these umami components, which act synergistically at the receptor, underlies the rich taste characteristic of many cuisines (Rolls, 2009). Olfactory receptors. There are approximately 1000 different types of …", "title": "" }, { "docid": "ec9f13212368d59ff737a0e87939ccd2", "text": "Abstract words refer to things that can not be seen, heard, felt, smelled, or tasted as opposed to concrete words. Among other applications, the degree of abstractness has been shown to be useful information for metaphor detection. 
Our contributions to this topic are as follows: i) we compare supervised techniques to learn and extend abstractness ratings for huge vocabularies, ii) we learn and investigate norms for multi-word units by propagating abstractness to verb-noun pairs, which leads to better metaphor detection, and iii) we overcome the limitation of learning a single rating per word and show that multi-sense abstractness ratings are potentially useful for metaphor detection. Finally, with this paper we publish automatically created abstractness norms for 3 million English words and multi-words, as well as automatically created sense-specific abstractness ratings.", "title": "" } ]
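The last passage in the list above mentions propagating word-level abstractness ratings to verb-noun pairs. As a minimal illustration, assuming hand-set ratings and a simple average as the propagation rule (the cited work derives both from data), the idea can be sketched as follows:

```python
# Hypothetical word-level abstractness norms (higher = more abstract); real
# norms in the cited work are learned, not hand-set like these.
ratings = {
    "grasp": 0.35,
    "cup": 0.05,
    "idea": 0.90,
}

def pair_abstractness(verb, noun, norms):
    # Assumed propagation rule: average the two word ratings.
    return (norms[verb] + norms[noun]) / 2.0

for verb, noun in [("grasp", "cup"), ("grasp", "idea")]:
    print(f"{verb} {noun}: {pair_abstractness(verb, noun, ratings):.2f}")
# The figurative pair ("grasp idea") comes out more abstract than the literal one.
```

With learned norms in place of the hand-set ones, the same propagation loop extends word-level ratings to arbitrary multi-word units.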
scidocsrr
7d9f3e2a01429c716a479f816db7c83c
A Load Balancing Model Based on Cloud Partitioning for the Public Cloud
[ { "docid": "1231b1e1e0ace856815e32dbdc38a113", "text": "Availability of cloud systems is one of the main concerns of cloud computing. The term, availability of clouds, is mainly evaluated by ubiquity of information comparing with resource scaling. In clouds, load balancing, as a method, is applied across different data centers to ensure the network availability by minimizing use of computer hardware, software failures and mitigating recourse limitations. This work discusses the load balancing in cloud computing and then demonstrates a case study of system availability based on a typical Hospital Database Management solution.", "title": "" } ]
[ { "docid": "44bd9d0b66cb8d4f2c4590b4cb724765", "text": "AIM\nThis paper is a description of inductive and deductive content analysis.\n\n\nBACKGROUND\nContent analysis is a method that may be used with either qualitative or quantitative data and in an inductive or deductive way. Qualitative content analysis is commonly used in nursing studies but little has been published on the analysis process and many research books generally only provide a short description of this method.\n\n\nDISCUSSION\nWhen using content analysis, the aim was to build a model to describe the phenomenon in a conceptual form. Both inductive and deductive analysis processes are represented as three main phases: preparation, organizing and reporting. The preparation phase is similar in both approaches. The concepts are derived from the data in inductive content analysis. Deductive content analysis is used when the structure of analysis is operationalized on the basis of previous knowledge.\n\n\nCONCLUSION\nInductive content analysis is used in cases where there are no previous studies dealing with the phenomenon or when it is fragmented. A deductive approach is useful if the general aim was to test a previous theory in a different situation or to compare categories at different time periods.", "title": "" }, { "docid": "4a164ec21fb69e7db5c90467c6f6af17", "text": "Recent technologies have made it cost-effective to collect diverse types of genome-wide data. Computational methods are needed to combine these data to create a comprehensive view of a given disease or a biological process. Similarity network fusion (SNF) solves this problem by constructing networks of samples (e.g., patients) for each available data type and then efficiently fusing these into one network that represents the full spectrum of underlying data. For example, to create a comprehensive view of a disease given a cohort of patients, SNF computes and fuses patient similarity networks obtained from each of their data types separately, taking advantage of the complementarity in the data. We used SNF to combine mRNA expression, DNA methylation and microRNA (miRNA) expression data for five cancer data sets. SNF substantially outperforms single data type analysis and established integrative approaches when identifying cancer subtypes and is effective for predicting survival.", "title": "" }, { "docid": "1819f17297b526e69b345c0c723f4de4", "text": "Boosted by recent legislations, data anonymization is fast becoming a norm. However, as of yet no generic solution has been found to safely release data. As a consequence, data custodians often resort to ad-hoc means to anonymize datasets. Both past and current practices indicate that hashing is often believed to be an effective way to anonymize data. Unfortunately, in practice it is only rarely effective. This paper is a tutorial to explain the limits of cryptographic hash functions as an anonymization technique. Anonymity set is the best privacy model that can be achieved by hash functions. However, this model has several shortcomings. We provide three case studies to illustrate how hashing only yields a weakly anonymized data. The case studies include MAC and email address anonymization as well as the analysis of Google safe browsing.", "title": "" }, { "docid": "f649db3b6fa6a929ac0434b12ddeea54", "text": "The rapid growth of e-Commerce amongst private sectors and Internet usage amongst citizens has vastly stimulated e-Government initiatives from many countries. 
The Thailand e-Government initiative is based on the government's long-term strategic policy that aims to reform and overhaul the Thai bureaucracy. This study attempted to identify the e-Excise success factors by employing the IS success model. The study focused on finding the factors that may contribute to the success of the e-Excise initiative. The Delphi Technique was used to investigate the determinant factors for the success of the e-Excise initiative. Three-rounds of data collection were conducted with 77 active users from various industries. The results suggest that by increasing Trust in the e-Government website, Perceptions of Information Quality, Perceptions of System Quality, and Perceptions of Service Quality will influence System Usage and User Satisfaction, and will ultimately have consequences for the Perceived Net Benefits.", "title": "" }, { "docid": "8234cb805d080a13fb9aeab9373f75c8", "text": "Essentially a software system’s utility is determined by both its functionality and its non-functional characteristics, such as usability, flexibility, performance, interoperability and security. Nonetheless, there has been a lop-sided emphasis in the functionality of the software, even though the functionality is not useful or usable without the necessary non-functional characteristics. In this chapter, we review the state of the art on the treatment of non-functional requirements (hereafter, NFRs), while providing some prospects for future", "title": "" }, { "docid": "890f459384ea47a8915a60c19a3320e3", "text": "Product ads are a popular form of search advertizing offered by major search engines, including Yahoo, Google and Bing. Unlike traditional search ads, product ads include structured product specifications, which allow search engine providers to perform better keyword-based ad retrieval. However, the level of completeness of the product specifications varies and strongly influences the performance of ad retrieval. On the other hand, online shops are increasing adopting semantic markup languages such as Microformats, RDFa and Microdata, to annotate their content, making large amounts of product description data publicly available. In this paper, we present an approach for enriching product ads with structured data extracted from thousands of online shops offering Microdata annotations. In our approach we use structured product ads as supervision for training feature extraction models able to extract attribute-value pairs from unstructured product descriptions. We use these features to identify matching products across different online shops and enrich product ads with the extracted data. Our evaluation on three product categories related to electronics show promising results in terms of enriching product ads with useful product data.", "title": "" }, { "docid": "27465b2c8ce92ccfbbda6c802c76838f", "text": "Nonlinear hyperelastic energies play a key role in capturing the fleshy appearance of virtual characters. Real-world, volume-preserving biological tissues have Poisson’s ratios near 1/2, but numerical simulation within this regime is notoriously challenging. In order to robustly capture these visual characteristics, we present a novel version of Neo-Hookean elasticity. Our model maintains the fleshy appearance of the Neo-Hookean model, exhibits superior volume preservation, and is robust to extreme kinematic rotations and inversions. 
We obtain closed-form expressions for the eigenvalues and eigenvectors of all of the system’s components, which allows us to directly project the Hessian to semipositive definiteness, and also leads to insights into the numerical behavior of the material. These findings also inform the design of more sophisticated hyperelastic models, which we explore by applying our analysis to Fung and Arruda-Boyce elasticity. We provide extensive comparisons against existing material models.", "title": "" }, { "docid": "54e2dfd355e9e082d9a6f8c266c84360", "text": "The wealth and value of organizations are increasingly based on intellectual capital. Although acquiring talented individuals and investing in employee learning adds value to the organization, reaping the benefits of intellectual capital involves translating the wisdom of employees into reusable and sustained actions. This requires a culture that creates employee commitment, encourages learning, fosters sharing, and involves employees in decision making. An infrastructure to recognize and embed promising and best practices through social networks, evidence-based practice, customization of innovations, and use of information technology results in increased productivity, stronger financial performance, better patient outcomes, and greater employee and customer satisfaction.", "title": "" }, { "docid": "e264903ee2759f638dcd60a715cbb994", "text": "Bioinspired hardware holds the promise of low-energy, intelligent, and highly adaptable computing systems. Applications span from automatic classification for big data management, through unmanned vehicle control, to control for biomedical prosthesis. However, one of the major challenges of fabricating bioinspired hardware is building ultrahigh-density networks out of complex processing units interlinked by tunable connections. Nanometer-scale devices exploiting spin electronics (or spintronics) can be a key technology in this context. In particular, magnetic tunnel junctions (MTJs) are well suited for this purpose because of their multiple tunable functionalities. One such functionality, nonvolatile memory, can provide massive embedded memory in unconventional circuits, thus escaping the von-Neumann bottleneck arising when memory and processors are located separately. Other features of spintronic devices that could be beneficial for bioinspired computing include tunable fast nonlinear dynamics, controlled stochasticity, and the ability of single devices to change functions in different operating conditions. Large networks of interacting spintronic nanodevices can have their interactions tuned to induce complex dynamics such as synchronization, chaos, soliton diffusion, phase transitions, criticality, and convergence to multiple metastable states. A number of groups have recently proposed bioinspired architectures that include one or several types of spintronic nanodevices. In this paper, we show how spintronics can be used for bioinspired computing. We review the different approaches that have been proposed, the recent advances in this direction, and the challenges toward fully integrated spintronics complementary metal-oxide-semiconductor (CMOS) bioinspired hardware.", "title": "" }, { "docid": "b02c718acfab40a33840eec013a09bda", "text": "Smartphones today are ubiquitous source of sensitive information. Information leakage instances on the smartphones are on the rise because of exponential growth in smartphone market. Android is the most widely used operating system on smartphones. 
Many information flow tracking and information leakage detection techniques are developed on Android operating system. Taint analysis is commonly used data flow analysis technique which tracks the flow of sensitive information and its leakage. This paper provides an overview of existing Information flow tracking techniques based on the Taint analysis for android applications. It is observed that static analysis techniques look at the complete program code and all possible paths of execution before its run, whereas dynamic analysis looks at the instructions executed in the program-run in the real time. We provide in depth analysis of both static and dynamic taint analysis approaches.", "title": "" }, { "docid": "0846f7d40f5cbbd4c199dfb58c4a4e7d", "text": "While active learning has drawn broad attention in recent years, there are relatively few studies on stopping criterion for active learning. We here propose a novel model stability based stopping criterion, which considers the potential of each unlabeled examples to change the model once added to the training set. The underlying motivation is that active learning should terminate when the model does not change much by adding remaining examples. Inspired by the widely used stochastic gradient update rule, we use the gradient of the loss at each candidate example to measure its capability to change the classifier. Under the model change rule, we stop active learning when the changing ability of all remaining unlabeled examples is less than a given threshold. We apply the stability-based stopping criterion to two popular classifiers: logistic regression and support vector machines (SVMs). It can be generalized to a wide spectrum of learning models. Substantial experimental results on various UCI benchmark data sets have demonstrated that the proposed approach outperforms state-of-art methods in most cases.", "title": "" }, { "docid": "de73e8e382dddfba867068f1099b86fb", "text": "Endophytes are fungi which infect plants without causing symptoms. Fungi belonging to this group are ubiquitous, and plant species not associated to fungal endophytes are not known. In addition, there is a large biological diversity among endophytes, and it is not rare for some plant species to be hosts of more than one hundred different endophytic species. Different mechanisms of transmission, as well as symbiotic lifestyles occur among endophytic species. Latent pathogens seem to represent a relatively small proportion of endophytic assemblages, also composed by latent saprophytes and mutualistic species. Some endophytes are generalists, being able to infect a wide range of hosts, while others are specialists, limited to one or a few hosts. Endophytes are gaining attention as a subject for research and applications in Plant Pathology. This is because in some cases plants associated to endophytes have shown increased resistance to plant pathogens, particularly fungi and nematodes. Several possible mechanisms by which endophytes may interact with pathogens are discussed in this review. Additional key words: biocontrol, biodiversity, symbiosis.", "title": "" }, { "docid": "f6266e5c4adb4fa24cc353dccccaf6db", "text": "Clustering plays an important role in many large-scale data analyses providing users with an overall understanding of their data. Nonetheless, clustering is not an easy task due to noisy features and outliers existing in the data, and thus the clustering results obtained from automatic algorithms often do not make clear sense. 
To remedy this problem, automatic clustering should be complemented with interactive visualization strategies. This paper proposes an interactive visual analytics system for document clustering, called iVisClustering, based on a widely used topic modeling method, latent Dirichlet allocation (LDA). iVisClustering provides a summary of each cluster in terms of its most representative keywords and visualizes soft clustering results in parallel coordinates. The main view of the system provides a 2D plot that visualizes cluster similarities and the relation among data items with a graph-based representation. iVisClustering provides several other views, which contain useful interaction methods. With the help of these visualization modules, we can interactively refine the clustering results in various ways.", "title": "" }, { "docid": "1601469a8a05ede558d9e39f26dc1c61", "text": "machine code", "title": "" }, { "docid": "2779fabc9c858ba67fa8be2545cec0f1", "text": "A meta-analysis of 32 comparative studies showed that computer-based education has generally had positive effects on the achievement of elementary school pupils. These effects have been different, however, for programs of off-line computer-managed instruction (CMI) and for interactive computer-assisted instruction (CAI). The average effect in 28 studies of CAI programs was an increase in pupil achievement scores of 0.47 standard deviations, or from the 50th to the 68th percentile. The average effect in four studies of CMI programs, however, was an increase in scores of only 0.07 standard deviations. Study features were not significantly related to study outcomes.", "title": "" }, { "docid": "612f35c7a84177440da5a3dea9d33ad3", "text": "Anglican is a probabilistic programming system designed to interoperate with Clojure and other JVM languages. We introduce the programming language Anglican, outline our design choices, and discuss in depth the implementation of the Anglican language and runtime, including macro-based compilation, extended CPS-based evaluation model, and functional representations for probabilistic paradigms, such as a distribution, a random process, and an inference algorithm.\n We show that a probabilistic functional language can be implemented efficiently and integrated tightly with a conventional functional language with only moderate computational overhead. We also demonstrate how advanced probabilistic modelling concepts are mapped naturally to the functional foundation.", "title": "" }, { "docid": "51ec3dee7a91b7e9afcb26694ded0c11", "text": "[1] PRIMELT2.XLS software is introduced for calculating primary magma composition and mantle potential temperature (TP) from an observed lava composition. It is an upgrade over a previous version in that it includes garnet peridotite melting and it detects complexities that can lead to overestimates in TP by >100 C. These are variations in source lithology, source volatile content, source oxidation state, and clinopyroxene fractionation. Nevertheless, application of PRIMELT2.XLS to lavas from a wide range of oceanic islands reveals no evidence that volatile-enrichment and source fertility are sufficient to produce them. All are associated with thermal anomalies, and this appears to be a prerequisite for their formation. For the ocean islands considered in this work, TP maxima are typically 1450–1500 C in the Atlantic and 1500–1600 C in the Pacific, substantially greater than 1350 C for ambient mantle. 
Lavas from the Galápagos Islands and Hawaii record in their geochemistry high TP maxima and large ranges in both TP and melt fraction over short horizontal distances, a result that is predicted by the mantle plume model.", "title": "" }, { "docid": "bd700aba43a8a8de5615aa1b9ca595a7", "text": "Cloud computing has formed the conceptual and infrastructural basis for tomorrow’s computing. The global computing infrastructure is rapidly moving towards cloud based architecture. While it is important to take advantages of could based computing by means of deploying it in diversified sectors, the security aspects in a cloud based computing environment remains at the core of interest. Cloud based services and service providers are being evolved which has resulted in a new business trend based on cloud technology. With the introduction of numerous cloud based services and geographically dispersed cloud service providers, sensitive information of different entities are normally stored in remote servers and locations with the possibilities of being exposed to unwanted parties in situations where the cloud servers storing those information are compromised. If security is not robust and consistent, the flexibility and advantages that cloud computing has to offer will have little credibility. This paper presents a review on the cloud computing concepts as well as security issues inherent within the context of cloud computing and cloud", "title": "" }, { "docid": "80821a715db4616d8cbdefd8e3372fba", "text": "General rights Copyright for the publications made accessible via the Edinburgh Research Explorer is retained by the author(s) and / or other copyright owners and it is a condition of accessing these publications that users recognise and abide by the legal requirements associated with these rights. Take down policy The University of Edinburgh has made every reasonable effort to ensure that Edinburgh Research Explorer content complies with UK legislation. If you believe that the public display of this file breaches copyright please contact [email protected] providing details, and we will remove access to the work immediately and investigate your claim. This article proposes a novel framework for representing and measuring local coherence. Central to this approach is the entity-grid representation of discourse, which captures patterns of entity distribution in a text. The algorithm introduced in the article automatically abstracts a text into a set of entity transition sequences and records distributional, syntactic, and referential information about discourse entities. We re-conceptualize coherence assessment as a learning task and show that our entity-based representation is well-suited for ranking-based generation and text classification tasks. Using the proposed representation, we achieve good performance on text ordering, summary coherence evaluation, and readability assessment.", "title": "" } ]
scidocsrr
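The rows in this dump follow the column layout given at its head: a query_id, a query string, a list of positive_passages and a list of negative_passages (each passage carrying docid, text, and title fields), and a subset tag such as scidocsrr. The following is a minimal Python sketch, assuming only that schema, of how one row might be represented and flattened into labelled (query, passage text, label) pairs; the class and function names are invented for illustration and are not part of the dataset.

```python
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Passage:
    # Field names mirror the passage objects in this dump.
    docid: str
    text: str
    title: str


@dataclass
class RetrievalRow:
    # Field names mirror the columns listed at the head of the dump.
    query_id: str
    query: str
    positive_passages: List[Passage]
    negative_passages: List[Passage]
    subset: str


def to_labelled_pairs(row: RetrievalRow) -> List[Tuple[str, str, int]]:
    """Flatten one row into (query, passage_text, label) triples,
    labelling positives 1 and negatives 0."""
    pairs = [(row.query, p.text, 1) for p in row.positive_passages]
    pairs += [(row.query, p.text, 0) for p in row.negative_passages]
    return pairs
```

Triples of this shape are a common input format for training or evaluating a reranker on query/positive/negative data like this.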
f1d49f63a50043b3495f1da8e33a298c
Data Mining : Future Trends and Applications
[ { "docid": "767ccff6feddbc9a89d24aaab4068c18", "text": "Advances in multimedia acquisition and storage technology have led to tremendous growth in very large and detailed multimedia databases. If these multimedia files are analyzed, useful information to users can be revealed. Multimedia mining deals with the extraction of implicit knowledge, multimedia data relationships, or other patterns not explicitly stored in multimedia files. Multimedia mining is more than just an extension of data mining, as it is an interdisciplinary endeavor that draws upon expertise in computer vision, multimedia processing, multimedia retrieval, data mining, machine learning, database and artificial intelligence. This paper briefly describes the multimedia mining, while references cited cover the major theoretical issues. Key-Words: text mining; image mining; audio mining; video mining", "title": "" }, { "docid": "eff576fd8f6876acc7518c66da0504f2", "text": "Knowledge discovery in databases and data mining aim a t semiautomatic tools for the analysis of large data sets. We give an overview of the area and present some of the research issues, especially from the database angle.", "title": "" } ]
[ { "docid": "4018c4183c2f60d98c7fdaa21fb17379", "text": "Algebraic key establishment protocols based on the difficulty of solving equations over algebraic structures are described as a theoretical basis for constructing public–key cryptosystems.", "title": "" }, { "docid": "98fef878e8313392b1e193f5a14b1afc", "text": "Environmental factors, like early exposure to stressors or high caloric diets, can alter the early programming of central nervous system, leading to long-term effects on cognitive function, increased vulnerability to cognitive decline and development of psychopathologies later in life. The interaction between these factors and their combined effects on brain structure and function are still not completely understood. In this study, we evaluated long-term effects of social isolation in the prepubertal period, with or without chronic high fat diet access, on memory and on neurochemical markers in the prefrontal cortex of rats. We observed that early social isolation led to impairment in short-term and working memory in adulthood, and to reductions of Na(+),K(+)-ATPase activity and the immunocontent of phospho-AKT, in prefrontal cortex. Chronic exposure to a high fat diet impaired short-term memory (object recognition), and decreased BDNF levels in that same brain area. Remarkably, the association of social isolation with chronic high fat diet rescued the memory impairment on the object recognition test, as well as the changes in BDNF levels, Na(+),K(+)-ATPase activity, MAPK, AKT and phospho-AKT to levels similar to the control-chow group. In summary, these findings showed that a brief social isolation period and access to a high fat diet during a sensitive developmental period might cause memory deficits in adulthood. On the other hand, the interplay between isolation and high fat diet access caused a different brain programming, preventing some of the effects observed when these factors are separately applied.", "title": "" }, { "docid": "b6a1134d1388b9e7d49edc971e276055", "text": "Latent Dirichlet allocation (LDA) is a Bayesian network tha has recently gained much popularity in applications ranging from document mode ling to computer vision. Due to the large scale nature of these applications, current inference procedures like variational Bayes and Gibbs sampling have been found lacking. In this paper we propose the collapsed variational Bayesian in fere ce algorithm for LDA, and show that it is computationally efficient, easy to im plement and significantly more accurate than standard variational Bayesian in fere ce for LDA.", "title": "" }, { "docid": "0ecded7fad85b79c4c288659339bc18b", "text": "We present an end-to-end supervised based system for detecting malware by analyzing network traffic. The proposed method extracts 972 behavioral features across different protocols and network layers, and refers to different observation resolutions (transaction, session, flow and conversation windows). A feature selection method is then used to identify the most meaningful features and to reduce the data dimensionality to a tractable size. Finally, various supervised methods are evaluated to indicate whether traffic in the network is malicious, to attribute it to known malware “families” and to discover new threats. A comparative experimental study using real network traffic from various environments indicates that the proposed system outperforms existing state-of-the-art rule-based systems, such as Snort and Suricata. 
In particular, our chronological evaluation shows that many unknown malware incidents could have been detected at least a month before their static rules were introduced to either the Snort or Suricata systems.", "title": "" }, { "docid": "db02af0f6c2994e4348c1f7c4f3191ce", "text": "American students rank well below international peers in the disciplines of science, technology, engineering, and mathematics (STEM). Early exposure to STEM-related concepts is critical to later academic achievement. Given the rise of tablet-computer use in early childhood education settings, interactive technology might be one particularly fruitful way of supplementing early STEM education. Using a between-subjects experimental design, we sought to determine whether preschoolers could learn a fundamental math concept (i.e., measurement with non-standard units) from educational technology, and whether interactivity is a crucial component of learning from that technology. Participants who either played an interactive tablet-based game or viewed a non-interactive video demonstrated greater transfer of knowledge than those assigned to a control condition. Interestingly, interactivity contributed to better performance on near transfer tasks, while participants in the non-interactive condition performed better on far transfer tasks. Our findings suggest that, while preschool-aged children can learn early STEM skills from educational technology, interactivity may only further support learning in certain", "title": "" }, { "docid": "405a1e8badfb85dcd1d5cc9b4a0026d2", "text": "It is of great practical importance to improve yield and quality of vegetables in soilless cultures. This study investigated the effects of iron-nutrition management on yield and quality of hydroponic-cultivated spinach (Spinacia oleracea L.). The results showed that mild Fe-deficient treatment (1 μM FeEDTA) yielded a greater biomass of edible parts than Fe-omitted treatment (0 μM FeEDTA) or Fe-sufficient treatments (10 and 50 μM FeEDTA). Conversely, mild Fe-deficient treatment had the lowest nitrate concentration in the edible parts out of all the Fe treatments. Interestingly, all the concentrations of soluble sugar, soluble protein and ascorbate in mild Fe-deficient treatments were higher than Fe-sufficient treatments. In addition, both phenolic concentration and DPPH scavenging activity in mild Fe-deficient treatments were comparable with those in Fe-sufficient treatments, but were higher than those in Fe-omitted treatments. Therefore, we concluded that using a mild Fe-deficient nutrition solution to cultivate spinach not only would increase yield, but also would improve quality.", "title": "" }, { "docid": "b0f396c692568194708a7cf6b8fce394", "text": "DreamCam is a modular smart camera constructed with the use of an FPGA like main processing board. The core of the camera is an Altera Cyclone-III associated with a CMOS imager and six private Ram blocks. The main novel feature of our work consists in proposing a new smart camera architecture and several modules (IP) to efficiently extract and sort the visual features in real time. In this paper, extraction is performed by a Harris and Stephen filtering associated with customized modules. These modules extract, select and sort visual features in real-time. 
As a result, DreamCam (with such a configuration) provides a description of each visual feature in the form of its position and the grey-level template around it.", "title": "" }, { "docid": "508ad7d072a62433f3233d90286ef902", "text": "The NP-hard Colorful Components problem is, given a vertex-colored graph, to delete a minimum number of edges such that no connected component contains two vertices of the same color. It has applications in multiple sequence alignment and in multiple network alignment where the colors correspond to species. We initiate a systematic complexity-theoretic study of Colorful Components by presenting NP-hardness as well as fixed-parameter tractability results for different variants of Colorful Components. We also perform experiments with our algorithms and additionally develop an efficient and very accurate heuristic algorithm clearly outperforming a previous min-cut-based heuristic on multiple sequence alignment data.", "title": "" }, { "docid": "035fbb25ed4a97ceb6f92b464b617dfa", "text": "The microblogging service Twitter is one of the world's most popular online social networks and assembles a huge amount of data produced by interactions between users. A careful analysis of this data allows identifying groups of users who share similar traits, opinions, and preferences. We call community detection the process of user group identification, which grants valuable insights not available upfront. In order to extract useful knowledge from Twitter data many methodologies have been proposed, which define the attributes to be used in community detection problems by manual and empirical criteria - oftentimes guided by the aimed type of community and what the researcher attaches importance to. However, such approach cannot be generalized because it is well known that the task of finding out an appropriate set of attributes leans on context, domain, and data set. Aiming to the advance of community detection domain, reduce computational cost and improve the quality of related researches, this paper proposes a standard methodology for community detection in Twitter using feature selection methods. Results of the present research directly affect the way community detection methodologies have been applied to Twitter and quality of outcomes produced.", "title": "" }, { "docid": "855e0db3812648a18bd5f9bddb28c551", "text": "The ball and plate system is a typical multi-variable non-linear plant. By extension of the traditional ball and beam system, it is used as a standard benchmark to inspect diverse control schemes. In this paper, a problem with trajectory-tracking of ball and plate systems is studied based on non-linear control theory. Firstly, we briefly summarise control problems of ball and plate systems and introduce a ball and plate system named BPVS-JLU developed in authors’ laboratory. Then the system’s mathematical model is derived using the Lagrange method. Lastly, a non-linear switching control scheme is put forward to deal with the trajectory-tracking problem after the non-linear control analysis of ball and plate system. Experimental results have shown that the proposed scheme can accomplish the task of trajectory-tracking effectively.", "title": "" }, { "docid": "97270ca739c7e005da4cab41f19342e7", "text": "Automatic approach for bladder segmentation from computed tomography (CT) images is highly desirable in clinical practice. It is a challenging task since the bladder usually suffers large variations of appearance and low soft-tissue contrast in CT images. 
In this study, we present a deep learning-based approach which involves a convolutional neural network (CNN) and a 3D fully connected conditional random fields recurrent neural network (CRF-RNN) to perform accurate bladder segmentation. We also propose a novel preprocessing method, called dual-channel preprocessing, to further advance the segmentation performance of our approach. The presented approach works as following: first, we apply our proposed preprocessing method on the input CT image and obtain a dual-channel image which consists of the CT image and an enhanced bladder density map. Second, we exploit a CNN to predict a coarse voxel-wise bladder score map on this dual-channel image. Finally, a 3D fully connected CRF-RNN refines the coarse bladder score map and produce final fine-localized segmentation result. We compare our approach to the state-of-the-art V-net on a clinical dataset. Results show that our approach achieves superior segmentation accuracy, outperforming the V-net by a significant margin. The Dice Similarity Coefficient of our approach (92.24%) is 8.12% higher than that of the V-net. Moreover, the bladder probability maps performed by our approach present sharper boundaries and more accurate localizations compared with that of the V-net. Our approach achieves higher segmentation accuracy than the state-of-the-art method on clinical data. Both the dual-channel processing and the 3D fully connected CRF-RNN contribute to this improvement. The united deep network composed of the CNN and 3D CRF-RNN also outperforms a system where the CRF model acts as a post-processing method disconnected from the CNN.", "title": "" }, { "docid": "68cf646ecd3aa857ec819485eab03d93", "text": "Since their introduction as a means of front propagation and their first application to edge-based segmentation in the early 90’s, level set methods have become increasingly popular as a general framework for image segmentation. In this paper, we present a survey of a specific class of region-based level set segmentation methods and clarify how they can all be derived from a common statistical framework. Region-based segmentation schemes aim at partitioning the image domain by progressively fitting statistical models to the intensity, color, texture or motion in each of a set of regions. In contrast to edge-based schemes such as the classical Snakes, region-based methods tend to be less sensitive to noise. For typical images, the respective cost functionals tend to have less local minima which makes them particularly well-suited for local optimization methods such as the level set method. We detail a general statistical formulation for level set segmentation. Subsequently, we clarify how the integration of various low level criteria leads to a set of cost functionals. We point out relations between the different segmentation schemes. In experimental results, we demonstrate how the level set function is driven to partition the image plane into domains of coherent color, texture, dynamic texture or motion. Moreover, the Bayesian formulation allows to introduce prior shape knowledge into the level set method. We briefly review a number of advances in this domain.", "title": "" }, { "docid": "c7fd5a26da59fab4e66e0cb3e93530d6", "text": "Switching audio amplifiers are widely used in HBridge topology thanks to their high efficiency; however low audio performances in single ended power stage topology is a strong weakness leading to not be used for headset applications. 
This paper explains the importance of efficient error correction in Single Ended Class-D audio amplifier. A hysteresis control for Class-D amplifier with a variable window is also presented. The analyses are verified by simulations and measurements. The proposed solution was fabricated in 0.13µm CMOS technology with an active area of 0.2mm2. It could be used in single ended output configuration fully compatible with common headset connectors. The proposed Class-D amplifier achieves a harmonic distortion of 0.01% and a power supply rejection of 70dB with a quite low static current consumption.", "title": "" }, { "docid": "0f8f269ef4cf43261981dcf5c8df6b3c", "text": "Recently, high-power white light-emitting diodes (LEDs) have attracted much attention due to their versatility in applications and to the increasing market demand for them. So great attention has been focused on producing highly reliable LED lighting. How to accurately predict the reliability of LED lighting is emerging as one of the key issues in this field. Physics-of-failure-based prognostics and health management (PoF-based PHM) is an approach that utilizes knowledge of a product's life cycle loading and failure mechanisms to design for and assess reliability. In this paper, after analyzing the materials and geometries for high-power white LED lighting at all levels, i.e., chips, packages and systems, failure modes, mechanisms and effects analysis (FMMEA) was used in the PoF-based PHM approach to identify and rank the potential failures emerging from the design process. The second step in this paper was to establish the appropriate PoF-based damage models for identified failure mechanisms that carry a high risk.", "title": "" }, { "docid": "6d61da17db5c16611409356bd79006c4", "text": "We examine empirical evidence for religious prosociality, the hypothesis that religions facilitate costly behaviors that benefit other people. Although sociological surveys reveal an association between self-reports of religiosity and prosociality, experiments measuring religiosity and actual prosocial behavior suggest that this association emerges primarily in contexts where reputational concerns are heightened. Experimentally induced religious thoughts reduce rates of cheating and increase altruistic behavior among anonymous strangers. Experiments demonstrate an association between apparent profession of religious devotion and greater trust. Cross-cultural evidence suggests an association between the cultural presence of morally concerned deities and large group size in humans. We synthesize converging evidence from various fields for religious prosociality, address its specific boundary conditions, and point to unresolved questions and novel predictions.", "title": "" }, { "docid": "4bf2bf9480b4c40ae4ded35e26ccf585", "text": "Wireless Body Area Network (WBAN) is a modern technology having a wide-range of applications. Reliability is a key performance index for WBANs, especially for medical applications. An unreliable WBAN may severely affect the patient under observation. Main causes of unreliability are inefficient routing, low transmission range, body shadowing and environmental interference. We propose a Cross-Layer Opportunistic MAC/Routing (COMR) protocol that can improve the reliability by using a timer based approach for the relay selection mechanism. The value of this timer depends on Received Signal Strength Indicator (RSSI) and residual energy. 
The selected relay node will have the highest residual energy and will be closest to the sink as compared to other possible relay nodes. We evaluate the performance of the proposed mechanism in terms of network lifetime, Packet Delivery Ratio (PDR), End-To-End (ETE) delay, and energy used per bit. Results depict that COMR protocol shows improvement in terms of reliability, energy efficiency, ETE delay and network lifetime in a WBAN as compared to Simple Opportunistic Routing (SOR).", "title": "" }, { "docid": "00bba90894fec5e0ddd95cf38c588f51", "text": "The particle filter (PF) was introduced in 1993 as a numerical approximation to the nonlinear Bayesian filtering problem, and there is today a rather mature theory as well as a number of successful applications described in literature. This tutorial serves two purposes: to survey the part of the theory that is most important for applications and to survey a number of illustrative positioning applications from which conclusions relevant for the theory can be drawn. The theory part first surveys the nonlinear filtering problem and then describes the general PF algorithm in relation to classical solutions based on the extended Kalman filter (EKF) and the point mass filter (PMF). Tuning options, design alternatives, and user guidelines are described, and potential computational bottlenecks are identified and remedies suggested. Finally, the marginalized (or Rao-Blackwellized) PF is overviewed as a general framework for applying the PF to complex systems. The application part is more or less a stand-alone tutorial without equations that does not require any background knowledge in statistics or nonlinear filtering. It describes a number of related positioning applications where geographical information systems provide a nonlinear measurement and where it should be obvious that classical approaches based on Kalman filters (KFs) would have poor performance. All applications are based on real data and several of them come from real-time implementations. This part also provides complete code examples.", "title": "" }, { "docid": "5a8ac761e486bc58222339fd3e705a75", "text": "Traditional approaches to organizational change have been dominated by assumptions privileging stability, routine, and order. As a result, organizational change has been reified and treated as exceptional rather than natural. In this paper, we set out to offer an account of organizational change on its own terms—to treat change as the normal condition of organizational life. The central question we address is as follows: What must organization(s) be like if change is constitutive of reality? Wishing to highlight the pervasiveness of change in organizations, we talk about organizational becoming. Change, we argue, is the reweaving of actors’ webs of beliefs and habits of action to accommodate new experiences obtained through interactions. Insofar as this is an ongoing process, that is to the extent actors try to make sense of and act coherently in the world, change is inherent in human action, and organizations are sites of continuously evolving human action. In this view, organization is a secondary accomplishment, in a double sense. Firstly, organization is the attempt to order the intrinsic flux of human action, to channel it towards certain ends by generalizing and institutionalizing particular cognitive representations. Secondly, organization is a pattern that is constituted, shaped, and emerging from change. 
Organization aims at stemming change but, in the process of doing so, it is generated by it. These claims are illustrated by drawing on the work of several organizational ethnographers. The implications of this view for theory and practice are outlined. (Continuous Change; Routines; Process; Improvization; Reflexivity; Emergence; Interaction; Experience) The point is that usually we look at change but we do not see it. We speak of change, but we do not think about it. We say that change exists, that everything changes, that change is the very law of things: Yes, we say it and we repeat it; but those are only words, and we reason and philosophize as though change did not exist. In order to think change and see it, there is a whole veil of prejudices to brush aside, some of them artificial, created by philosophical speculation, the others natural", "title": "" }, { "docid": "8a6492185b786438237d3cf5ab3d2b07", "text": "This article presents the growing research area of Behavioural Corporate Finance in the context of one specific example: distortions in corporate investment due to CEO overconfidence. We first review the relevant psychology and experimental evidence on overconfidence. We then summarise the results of Malmendier and Tate (2005a) on the impact of overconfidence on corporate investment. We present supplementary evidence on the relationship betweenCEOs’ press portrayals and overconfident investment decisions. This alternative approach to measuring overconfidence, developed in Malmendier and Tate (2005b), relies on the perception of outsiders rather than the CEO’s own actions. The robustness of the results across such diverse proxies jointly corroborates previous findings and suggests new avenues to measuring executive overconfidence.", "title": "" }, { "docid": "2b8ca8be8d5e468d4cd285ecc726eceb", "text": "These days, large-scale graph processing becomes more and more important. Pregel, inspired by Bulk Synchronous Parallel, is one of the highly used systems to process large-scale graph problems. In Pregel, each vertex executes a function and waits for a superstep to communicate its data to other vertices. Superstep is a very time-consuming operation, used by Pregel, to synchronize distributed computations in a cluster of computers. However, it may become a bottleneck when the number of communications increases in a graph with million vertices. Superstep works like a barrier in Pregel that increases the side effect of skew problem in distributed computing environment. ExPregel is a Pregel-like model that is designed to reduce the number of communication messages between two vertices resided on two different computational nodes. We have proven that ExPregel reduces the number of exchanged messages as well as the number of supersteps for all graph topologies. Enhancing parallelism in our new computational model is another important feature that manifolds the speed of graph analysis programs. More interestingly, ExPregel uses the same model of programming as Pregel. Our experiments on large-scale real-world graphs show that ExPregel can reduce network traffic as well as number of supersteps from 45% to 96%. Runtime speed up in the proposed model varies from 1.2× to 30×. Copyright © 2015 John Wiley & Sons, Ltd.", "title": "" } ]
scidocsrr
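Because each row pairs a single query with passages already labelled relevant (positive) or not (negative), a row can be scored directly once a ranking function is available. The sketch below is again only an assumed illustration rather than anything defined in this dump: it computes the reciprocal rank of the best-ranked positive passage for one row, using a trivial word-overlap scorer as a stand-in for a real ranker.

```python
from typing import Callable, Dict, List


def reciprocal_rank(
    query: str,
    positives: List[Dict[str, str]],
    negatives: List[Dict[str, str]],
    score: Callable[[str, str], float],
) -> float:
    """Rank every passage of one row by score(query, passage_text) and
    return 1/rank of the highest-ranked positive (0.0 if there is none)."""
    scored = [(score(query, p["text"]), True) for p in positives]
    scored += [(score(query, p["text"]), False) for p in negatives]
    scored.sort(key=lambda item: item[0], reverse=True)
    for rank, (_, is_positive) in enumerate(scored, start=1):
        if is_positive:
            return 1.0 / rank
    return 0.0


def overlap_score(query: str, text: str) -> float:
    # Stand-in scorer: count of query words that appear in the passage.
    query_words = set(query.lower().split())
    return float(len(query_words & set(text.lower().split())))
```

Averaging this value over all rows of a subset gives mean reciprocal rank (MRR), one of the standard ways labelled dumps of this kind are reported.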
f90ca138db2ecc0deba9486091125cb9
Shoulder injuries in the throwing athlete.
[ { "docid": "679d1c25d707c099de2f8ed5f0a09612", "text": "BACKGROUND\nAlterations in glenohumeral range of motion, including increased posterior shoulder tightness and glenohumeral internal rotation deficit that exceeds the accompanying external rotation gain, are suggested contributors to throwing-related shoulder injuries such as pathologic internal impingement. Yet these contributors have not been identified in throwers with internal impingement.\n\n\nHYPOTHESIS\nThrowers with pathologic internal impingement will exhibit significantly increased posterior shoulder tightness and glenohumeral internal rotation deficit without significantly increased external rotation gain.\n\n\nSTUDY DESIGN\nCase control study; Level of evidence, 3.\n\n\nMETHODS\nEleven throwing athletes with pathologic internal impingement diagnosed using both clinical examination and a magnetic resonance arthrogram were demographically matched with 11 control throwers who had no history of upper extremity injury. Passive glenohumeral internal and external rotation were measured bilaterally with standard goniometry at 90 degrees of humeral abduction and elbow flexion. Bilateral differences in glenohumeral range of motion were used to calculate glenohumeral internal rotation deficit and external rotation gain. Posterior shoulder tightness was quantified as the bilateral difference in passive shoulder horizontal adduction with the scapula retracted and the shoulder at 90 degrees of elevation. Comparisons were made between groups with dependent t tests (P < .05).\n\n\nRESULTS\nThe throwing athletes with internal impingement demonstrated significantly greater glenohumeral internal rotation deficit (P = .03) and posterior shoulder tightness (P = .03) compared with the control subjects. No significant differences were observed in external rotation gain between groups (P = .16).\n\n\nCLINICAL RELEVANCE\nThese findings could indicate that a tightening of the posterior elements of the shoulder (capsule, rotator cuff) may contribute to impingement. The results suggest that management should include stretching to restore flexibility to the posterior shoulder.", "title": "" } ]
[ { "docid": "7fd33ebd4fec434dba53b15d741fdee4", "text": "We present a data-efficient representation learning approach to learn video representation with small amount of labeled data. We propose a multitask learning model ActionFlowNet to train a single stream network directly from raw pixels to jointly estimate optical flow while recognizing actions with convolutional neural networks, capturing both appearance and motion in a single model. Our model effectively learns video representation from motion information on unlabeled videos. Our model significantly improves action recognition accuracy by a large margin (23.6%) compared to state-of-the-art CNN-based unsupervised representation learning methods trained without external large scale data and additional optical flow input. Without pretraining on large external labeled datasets, our model, by well exploiting the motion information, achieves competitive recognition accuracy to the models trained with large labeled datasets such as ImageNet and Sport-1M.", "title": "" }, { "docid": "9864bce09ff74218fb817aab62e70081", "text": "Nowadays, sentiment analysis methods become more and more popular especially with the proliferation of social media platform users number. In the same context, this paper presents a sentiment analysis approach which can faithfully translate the sentimental orientation of Arabic Twitter posts, based on a novel data representation and machine learning techniques. The proposed approach applied a wide range of features: lexical, surface-form, syntactic, etc. We also made use of lexicon features inferred from two Arabic sentiment words lexicons. To build our supervised sentiment analysis system, we use several standard classification methods (Support Vector Machines, K-Nearest Neighbour, Naïve Bayes, Decision Trees, Random Forest) known by their effectiveness over such classification issues.\n In our study, Support Vector Machines classifier outperforms other supervised algorithms in Arabic Twitter sentiment analysis. Via an ablation experiments, we show the positive impact of lexicon based features on providing higher prediction performance.", "title": "" }, { "docid": "1e77561120fd88f86cdd68d64a8ebd58", "text": "Climate warming has created favorable conditions for the range expansion of many southern Ponto-Caspian freshwater fish and mollusks through the Caspian-Volga-Baltic “invasion corridor.” Some parasites can be used as “biological tags” of migration activity and generic similarity of new host populations in the Middle and Upper Volga. The study demonstrates a low biodiversity of parasites even of the most common estuarial invaders sampled from the northern reservoir such as the Ponto-Caspian kilka Clupeonella cultriventris (16 species), tubenose goby Proterorhinus semilunaris (19 species), and round goby Neogobius (=Appollonia) malanostomus (14 species). In 2000–2010, only a few cases of a significant increase in occurrence (up to 80–100%) and abundance indexes were recorded for some nonspecific parasites such as peritricha ciliates Epistilys lwoffi, Trichodina acuta, and Ambiphrya ameiuri on the gills of the tubenose goby; the nematode Contracoecum microcephalum and the acanthocephalan Pomphorhynchus laevis from the round goby; and metacercariae of trematodes Bucaphalus polymorphus and Apophallus muehlingi from the muscles of kilka. 
In some water bodies, the occurrence of the trematode Bucephalus polymorphus tended to decrease after a partial replacement of its intermediate host zebra mussel Dreissena polymorpha by D. bugensi (quagga mussel). High occurrence of parthenites of Apophallus muehlingi in the mollusk Lithoglyphus naticoides was recorded in the Upper Volga (up to 70%) as compared to the Middle Volga (34%). Fry of fish with a considerable degree of muscle injury caused by the both trematode species have lower mobility and become more available food objects for birds and carnivorous fish.", "title": "" }, { "docid": "8bcb5b946b9f5e07807ec9a44884cf4e", "text": "Using data from two waves of a panel study of families who currently or recently received cash welfare benefits, we test hypotheses about the relationship between food hardships and behavior problems among two different age groups (458 children ages 3–5-and 747 children ages 6–12). Results show that food hardships are positively associated with externalizing behavior problems for older children, even after controlling for potential mediators such as parental stress, warmth, and depression. Food hardships are positively associated with internalizing behavior problems for older children, and with both externalizing and internalizing behavior problems for younger children, but these effects are mediated by parental characteristics. The implications of these findings for child and family interventions and food assistance programs are discussed. Food Hardships and Child Behavior Problems among Low-Income Children INTRODUCTION In the wake of the 1996 federal welfare reforms, several large-scale, longitudinal studies of welfare recipients and low-income families were launched with the intent of assessing direct benchmarks, such as work and welfare activity, over time, as well as indirect and unintended outcomes related to material hardship and mental health. One area of special concern to many researchers and policymakers alike is child well-being in the context of welfare reforms. As family welfare use and parental work activities change under new welfare policies, family income and material resources may also fluctuate. To the extent that family resources are compromised by changes in welfare assistance and earnings, children may experience direct hardships, such as instability in food consumption, which in turn may affect other areas of functioning. It is also possible that changes in parental work and family welfare receipt influence children indirectly through their caregivers. As parents themselves experience hardships or new stresses, their mental health and interactions with their children may change, which in turn could affect their children’s functioning. This research assesses whether one particular form of hardship, food hardship, is associated with adverse behaviors among low-income children. Specifically, analyses assess whether food hardships have relationships with externalizing (e.g., aggressive or hyperactive) and internalizing (e.g., anxietyand depression-related) child behavior problems, and whether associations between food hardships and behavior problems are mediated by parental stress, warmth, and depression. The study involves a panel survey of individuals in one state who were receiving Temporary Assistance for Needy Families (TANF) in 1998 and were caring for minor-aged children. 
Externalizing and internalizing behavior problems associated with a randomly selected child from each household are assessed in relation to key predictors, taking advantage of the prospective study design. 2 BACKGROUND Food hardships have been conceptualized by researchers in various ways. For example, food insecurity is defined by the U.S. Department of Agriculture (USDA) as the “limited or uncertain availability of nutritionally adequate and safe foods or limited or uncertain ability to acquire acceptable foods in socially acceptable ways” (Bickel, Nord, Price, Hamilton, and Cook, 2000, p. 6). An 18-item scale was developed by the USDA to assess household food insecurity with and without hunger, where hunger represents a potential result of more severe forms of food insecurity, but not a necessary condition for food insecurity to exist (Price, Hamilton, and Cook, 1997). Other researchers have used selected items from the USDA Food Security Module to assess food hardships (Nelson, 2004; Bickel et al., 2000) The USDA also developed the following single-item question to identify food insufficiency: “Which of the following describes the amount of food your household has to eat....enough to eat, sometimes not enough to eat, or often not enough to eat?” This measure addresses the amount of food available to a household, not assessments about the quality of the food consumed or worries about food (Alaimo, Olson and Frongillo, 1999; Dunifon and Kowaleski-Jones, 2003). The Community Childhood Hunger Identification Project (CCHIP) assesses food hardships using an 8-item measure to determine whether the household as a whole, adults as individuals, or children are affected by food shortages, perceived food insufficiency, or altered food intake due to resource constraints (Wehler, Scott, and Anderson, 1992). Depending on the number of affirmative answers, respondents are categorized as either “hungry,” “at-risk for hunger,” or “not hungry” (Wehler et al., 1992; Kleinman et al., 1998). Other measures, such as the Radimer/Cornell measures of hunger and food insecurity, have also been created to measure food hardships (Kendall, Olson, and Frongillo, 1996). In recent years, food hardships in the United States have been on the rise. After declining from 1995 to 1999, the prevalence of household food insecurity in households with children rose from 14.8 percent in 1999 to 16.5 percent in 2002, and the prevalence of household food insecurity with hunger in households with children rose from 0.6 percent in 1999 to 0.7 percent in 2002 (Nord, Andrews, and 3 Carlson, 2003). A similar trend was also observed using a subset of questions from the USDA Food Security Module (Nelson, 2004). Although children are more likely than adults to be buffered from household food insecurity (Hamilton et al., 1997) and inadequate nutrition (McIntyre et al., 2003), a concerning number of children are reported to skip meals or have reduced food intake due to insufficient household resources. Nationally, children in 219,000 U.S. households were hungry at times during the 12 months preceding May 1999 (Nord and Bickel, 2002). Food Hardships and Child Behavior Problems Very little research has been conducted on the effects of food hardship on children’s behaviors, although the existing research suggests that it is associated with adverse behavioral and mental health outcomes for children. 
Using data from the National Health and Nutrition Examination Survey (NHANES), Alaimo and colleagues (2001a) found that family food insufficiency is positively associated with visits to a psychologist among 6to 11year-olds. Using the USDA Food Security Module, Reid (2002) found that greater severity and longer periods of children’s food insecurity were associated with greater levels of child behavior problems. Dunifon and Kowaleski-Jones (2003) found, using the same measure, that food insecurity is associated with fewer positive behaviors among school-age children. Children from households with incomes at or below 185 percent of the poverty level who are identified as hungry are also more likely to have a past or current history of mental health counseling and to have more psychosocial dysfunctions than children who are not identified as hungry (Kleinman et al., 1998; Murphy et al., 1998). Additionally, severe child hunger in both pre-school-age and school-age children is associated with internalizing behavior problems (Weinreb et al., 2002), although Reid (2002) found a stronger association between food insecurity and externalizing behaviors than between food insecurity and internalizing behaviors among children 12 and younger. Other research on hunger has identified several adverse behavioral consequences for children (See Wachs, 1995 for a review; Martorell, 1996; Pollitt, 1994), including poor play behaviors, poor preschool achievement, and poor scores on 4 developmental indices (e.g., Bayley Scores). These studies have largely taken place in developing countries, where the prevalence of hunger and malnutrition is much greater than in the U.S. population (Reid, 2002), so it is not known whether similar associations would emerge for children in the United States. Furthermore, while existing studies point to a relationship between food hardships and adverse child behavioral outcomes, limitations in design stemming from cross-sectional data, reliance on singleitem measures of food difficulties, or failure to adequately control for factors that may confound the observed relationships make it difficult to assess the robustness of the findings. For current and recent recipients of welfare and their families, increased food hardships are a potential problem, given the fluctuations in benefits and resources that families are likely to experience as a result of legislative reforms. To the extent that food hardships are tied to economic factors, we may expect levels of food hardships to increase for families who experience periods of insufficient material resources, and to decrease for families whose economic situations improve. If levels of food hardship are associated with the availability of parents and other caregivers, we may find that the provision of food to children changes as parents work more hours, or as children spend more time in alternative caregiving arrangements. Poverty and Child Behavior Problems When exploring the relationship between food hardships and child well-being, it is crucial to ensure that factors associated with economic hardship and poverty are adequately controlled, particularly since poverty has been linked to some of the same outcomes as food hardships. Extensive research has shown a higher prevalence of behavior problems among children from families of lower socioeconomic status (McLoyd, 1998; Duncan, Brooks-Gunn, and Klebanov, 1994), and from families receiving welfare (Hofferth, Smith, McLoyd, and Finkelstein, 2000). 
This relationship has been shown to be stronger among children in single-parent households than among those in two-parent households (Hanson, McLanahan, and Thompson, 1996), and among younger children (Bradley and Corwyn, 2002; McLoyd, 5 1998), with less consistent findings for adolescents (Conger, Conger, and Elder, 1997; Elder, N", "title": "" }, { "docid": "9b1cd0c567ba1d93f2d0ac8c72f0be9a", "text": "The complexities of pediatric brain imaging have precluded studies that trace the neural development of cognitive skills acquired during childhood. Using a task that isolates reading-related brain activity and minimizes confounding performance effects, we carried out a cross-sectional functional magnetic resonance imaging (fMRI) study using subjects whose ages ranged from 6 to 22 years. We found that learning to read is associated with two patterns of change in brain activity: increased activity in left-hemisphere middle temporal and inferior frontal gyri and decreased activity in right inferotemporal cortical areas. Activity in the left-posterior superior temporal sulcus of the youngest readers was associated with the maturation of their phonological processing abilities. These findings inform current reading models and provide strong support for Orton's 1925 theory of reading development.", "title": "" }, { "docid": "c8dbc63f90982e05517bbdb98ebaeeb5", "text": "Even though considerable attention has been given to the polarity of words (positive and negative) and the creation of large polarity lexicons, research in emotion analysis has had to rely on limited and small emotion lexicons. In this paper we show how the combined strength and wisdom of the crowds can be used to generate a large, high-quality, word–emotion and word–polarity association lexicon quickly and inexpensively. We enumerate the challenges in emotion annotation in a crowdsourcing scenario and propose solutions to address them. Most notably, in addition to questions about emotions associated with terms, we show how the inclusion of a word choice question can discourage malicious data entry, help identify instances where the annotator may not be familiar with the target term (allowing us to reject such annotations), and help obtain annotations at sense level (rather than at word level). We conducted experiments on how to formulate the emotionannotation questions, and show that asking if a term is associated with an emotion leads to markedly higher inter-annotator agreement than that obtained by asking if a term evokes an emotion.", "title": "" }, { "docid": "d434ef675b4d8242340f4d501fdbbae3", "text": "We study the problem of selecting a subset of k random variables to observe that will yield the best linear prediction of another variable of interest, given the pairwise correlations between the observation variables and the predictor variable. Under approximation preserving reductions, this problem is equivalent to the \"sparse approximation\" problem of approximating signals concisely. The subset selection problem is NP-hard in general; in this paper, we propose and analyze exact and approximation algorithms for several special cases of practical interest. Specifically, we give an FPTAS when the covariance matrix has constant bandwidth, and exact algorithms when the associated covariance graph, consisting of edges for pairs of variables with non-zero correlation, forms a tree or has a large (known) independent set. 
Furthermore, we give an exact algorithm when the variables can be embedded into a line such that the covariance decreases exponentially in the distance, and a constant-factor approximation when the variables have no \"conditional suppressor variables\". Much of our reasoning is based on perturbation results for the R2 multiple correlation measure, which is frequently used as a natural measure for \"goodness-of-fit statistics\". It lies at the core of our FPTAS, and also allows us to extend our exact algorithms to approximation algorithms when the matrix \"nearly\" falls into one of the above classes. We also use our perturbation analysis to prove approximation guarantees for the widely used \"Forward Regression\" heuristic under the assumption that the observation variables are nearly independent.", "title": "" }, { "docid": "e94cc8dbf257878ea9b78eceb990cb3b", "text": "The past two decades have seen extensive growth of sexual selection research. Theoretical and empirical work has clarified many components of pre- and postcopulatory sexual selection, such as aggressive competition, mate choice, sperm utilization and sexual conflict. Genetic mechanisms of mate choice evolution have been less amenable to empirical testing, but molecular genetic analyses can now be used for incisive experimentation. Here, we highlight some of the currently debated areas in pre- and postcopulatory sexual selection. We identify where new techniques can help estimate the relative roles of the various selection mechanisms that might work together in the evolution of mating preferences and attractive traits, and in sperm-egg interactions.", "title": "" }, { "docid": "0a97c254e5218637235a7e23597f572b", "text": "We investigate the design of a reputation system for decentralized unstructured P2P networks like Gnutella. Having reliable reputation information about peers can form the basis of an incentive system and can guide peers in their decision making (e.g., who to download a file from). The reputation system uses objective criteria to track each peer's contribution in the system and allows peers to store their reputations locally. Reputation are computed using either of the two schemes, debit-credit reputation computation (DCRC) and credit-only reputation computation (CORC). Using a reputation computation agent (RCA), we design a public key based mechanism that periodically updates the peer reputations in a secure, light-weight, and partially distributed manner. We evaluate using simulations the performance tradeoffs inherent in the design of our system.", "title": "" }, { "docid": "ea4da468a0e7f84266340ba5566f4bdb", "text": "We present a novel realtime algorithm to compute the trajectory of each pedestrian in a crowded scene. Our formulation is based on an adaptive scheme that uses a combination of deterministic and probabilistic trackers to achieve high accuracy and efficiency simultaneously. Furthermore, we integrate it with a multi-agent motion model and local interaction scheme to accurately compute the trajectory of each pedestrian. We highlight the performance and benefits of our algorithm on well-known datasets with tens of pedestrians.", "title": "" }, { "docid": "d170aec1225da4ec34d3847a2807d9b5", "text": "By leveraging advances in deep learning, challenging pattern recognition problems have been solved in computer vision, speech recognition, natural language processing, and more. 
Mobile computing has also adopted these powerful modeling approaches, delivering astonishing success in the field’s core application domains, including the ongoing transformation of human activity recognition technology through machine learning.", "title": "" }, { "docid": "8a8b33eabebb6d53d74ae97f8081bf7b", "text": "Social networks are inevitable part of modern life. A class of social networks is those with both positive (friendship or trust) and negative (enmity or distrust) links. Ranking nodes in signed networks remains a hot topic in computer science. In this manuscript, we review different ranking algorithms to rank the nodes in signed networks, and apply them to the sign prediction problem. Ranking scores are used to obtain reputation and optimism, which are used as features in the sign prediction problem. Reputation of a node shows patterns of voting towards the node and its optimism demonstrates how optimistic a node thinks about others. To assess the performance of different ranking algorithms, we apply them on three signed networks including Epinions, Slashdot and Wikipedia. In this paper, we introduce three novel ranking algorithms for signed networks and compare their ability in predicting signs of edges with already existing ones. We use logistic regression as the predictor and the reputation and optimism values for the trustee and trustor as features (that are obtained based on different ranking algorithms). We find that ranking algorithms resulting in correlated ranking scores, leads to almost the same prediction accuracy. Furthermore, our analysis identifies a number of ranking algorithms that result in higher prediction accuracy compared to others.", "title": "" }, { "docid": "13abacabef42365ac61be64597698f78", "text": "Wikidata is the new, large-scale knowledge base of the Wikimedia Foundation. As it can be edited by anyone, entries frequently get vandalized, leading to the possibility that it might spread of falsified information if such posts are not detected. The WSDM 2017 Wiki Vandalism Detection Challenge requires us to solve this problem by computing a vandalism score denoting the likelihood that a revision corresponds to an act of vandalism and performance is measured using the ROC-AUC obtained on a held-out test set. This paper provides the details of our submission that obtained an ROC-AUC score of 0.91976 in the final evaluation.", "title": "" }, { "docid": "b2bfbe0588e3383b9ba249f7abd69cda", "text": "We define a new, intuitive measure for evaluating machine translation output that avoids the knowledge intensiveness of more meaning-based approaches, and the labor-intensiveness of human judgments. Translation Error Rate (TER) measures the amount of editing that a human would have to perform to change a system output so it exactly matches a reference translation. We also compute a human-targeted TER (or HTER), where the minimum TER of the translation is computed against a human ‘targeted reference’ that preserves the meaning (provided by the reference translations) and is fluent, but is chosen to minimize the TER score for a particular system output. 
We show that: (1) The single-reference variant of TER correlates as well with human judgments of MT quality as the four-reference variant of BLEU; (2) The human-targeted HTER yields a 33% error-rate reduction and is shown to be very well correlated with human judgments; (3) The four-reference variant of TER and the single-reference variant of HTER yield higher correlations with human judgments than BLEU; (4) HTER yields higher correlations with human judgments than METEOR or its human-targeted variant (HMETEOR); and (5) The four-reference variant of TER correlates as well with a single human judgment as a second human judgment does, while HTER, HBLEU, and HMETEOR correlate significantly better with a human judgment than a second human judgment does. This work has been supported, in part, by BBNT contract number 9500006806. 1", "title": "" }, { "docid": "c13df020baab4d07a5b88e379178a933", "text": "In this paper, we consider the temporal pattern in traffic flow time series, and implement a deep learning model for traffic flow prediction. Detrending based methods decompose original flow series into trend and residual series, in which trend describes the fixed temporal pattern in traffic flow and residual series is used for prediction. Inspired by the detrending method, we propose DeepTrend, a deep hierarchical neural network used for traffic flow prediction which considers and extracts the time-variant trend. DeepTrend has two stacked layers: extraction layer and prediction layer. Extraction layer, a fully connected layer, is used to extract the time-variant trend in traffic flow by feeding the original flow series concatenated with corresponding simple average trend series. Prediction layer, an LSTM layer, is used to make flow prediction by feeding the obtained trend from the output of extraction layer and calculated residual series. To make the model more effective, DeepTrend needs first pre-trained layer-by-layer and then fine-tuned in the entire network. Experiments show that DeepTrend can noticeably boost the prediction performance compared with some traditional prediction models and LSTM with detrending based methods.", "title": "" }, { "docid": "eae5470d2b5cfa6a595ee335a25c7b68", "text": "For uplink large-scale MIMO systems, linear minimum mean square error (MMSE) signal detection algorithm is near-optimal but involves matrix inversion with high complexity. In this paper, we propose a low-complexity signal detection algorithm based on the successive overrelaxation (SOR) method to avoid the complicated matrix inversion. We first prove a special property that the MMSE filtering matrix is symmetric positive definite for uplink large-scale MIMO systems, which is the premise for the SOR method. Then a low-complexity iterative signal detection algorithm based on the SOR method as well as the convergence proof is proposed. The analysis shows that the proposed scheme can reduce the computational complexity from O(K3) to O(K2), where K is the number of users. Finally, we verify through simulation results that the proposed algorithm outperforms the recently proposed Neumann series approximation algorithm, and achieves the near-optimal performance of the classical MMSE algorithm with a small number of iterations.", "title": "" }, { "docid": "304bf3c44e2946025370283e5c71ffbe", "text": "Van Gog and Sweller (2015) claim that there is no testing effect—no benefit of practicing retrieval—for complex materials. We show that this claim is incorrect on several grounds. 
First, Van Gog and Sweller’s idea of “element interactivity” is not defined in a quantitative, measurable way. As a consequence, the idea is applied inconsistently in their literature review. Second, none of the experiments on retrieval practice with worked-example materials manipulated element interactivity. Third, Van Gog and Sweller’s literature review omitted several studies that have shown retrieval practice effects with complex materials, including studies that directly manipulated the complexity of the materials. Fourth, the experiments that did not show retrieval practice effects, which were emphasized by Van Gog and Sweller, either involved retrieval of isolated words in individual sentences or required immediate, massed retrieval practice. The experiments failed to observe retrieval practice effects because of the retrieval tasks, not because of the complexity of the materials. Finally, even though the worked-example experiments emphasized by Van Gog and Sweller have methodological problems, they do not show strong evidence favoring the null. Instead, the data provide evidence that there is indeed a small positive effect of retrieval practice with worked examples. Retrieval practice remains an effective way to improve meaningful learning of complex materials.", "title": "" }, { "docid": "261ab16552e2f7cfcdf89971a066a812", "text": "The paper demonstrates that in a multi-voltage level (medium and low-voltages) distribution system the incident energy can be reduced to 8 cal/cm2, or even less, (Hazard risk category, HRC 2), so that a PPE outfit of greater than 2 is not required. This is achieved with the current state of the art equipment and protective devices. It is recognized that in the existing distribution systems, not specifically designed with this objective, it may not be possible to reduce arc flash hazard to this low level, unless major changes in the system design and protection are made. A typical industrial distribution system is analyzed, and tables and time coordination plots are provided to support the analysis. Unit protection schemes and practical guidelines for arc flash reduction are provided. The methodology of IEEE 1584 [1] is used for the analyses.", "title": "" }, { "docid": "9cb2f99aa1c745346999179132df3854", "text": "As a complementary and alternative medicine in medical field, traditional Chinese medicine (TCM) has drawn great attention in the domestic field and overseas. In practice, TCM provides a quite distinct methodology to patient diagnosis and treatment compared to western medicine (WM). Syndrome (ZHENG or pattern) is differentiated by a set of symptoms and signs examined from an individual by four main diagnostic methods: inspection, auscultation and olfaction, interrogation, and palpation which reflects the pathological and physiological changes of disease occurrence and development. Patient classification is to divide patients into several classes based on different criteria. In this paper, from the machine learning perspective, a survey on patient classification issue will be summarized on three major aspects of TCM: sign classification, syndrome differentiation, and disease classification. With the consideration of different diagnostic data analyzed by different computational methods, we present the overview for four subfields of TCM diagnosis, respectively. For each subfield, we design a rectangular reference list with applications in the horizontal direction and machine learning algorithms in the longitudinal direction. 
According to the current development of objective TCM diagnosis for patient classification, a discussion of the research issues around machine learning techniques with applications to TCM diagnosis is given to facilitate the further research for TCM patient classification.", "title": "" } ]
scidocsrr
120fe668093668c15f4522b661599e1f
Gesture Classification with Handcrafted Micro-Doppler Features using a FMCW Radar
[ { "docid": "5e503aaee94e2dc58f9311959d5a142e", "text": "The use of the fast Fourier transform in power spectrum analysis is described. Principal advantages of this method are a reduction in the number of computations and in required core storage, and convenient application in nonstationarity tests. The method involves sectioning the record and averaging modified periodograms of the sections. T INTRODLCTION HIS PAPER outlines a method for the application of the fast Fourier transform algorithm to the estimation of power spectra, which involves sectioning the record, taking modified periodograms of these sections, and averaging these modified periodo-grams. In many instances this method involves fewer computations than other methods. Moreover, it involves the transformation of sequences which are shorter than the whole record which is an advantage when computations are to be performed on a machine with limited core storage. Finally, it directly yields a potential resolution in the time dimension which is useful for testing and measuring nonstationarity. As will be pointed out, it is closely related to the method of complex demodulation described Let X(j), j= 0, N-1 be a sample from a stationary , second-order stochastic sequence. Assume for simplicity that E(X) 0. Let X(j) have spectral density Pcf), I f \\ 5%. We take segments, possibly overlapping, of length L with the starting points of these segments D units apart. Let X,(j),j=O, L 1 be the first such segment. Then Xdj) X($ and finally X&) X(j+ (K 1)D) j 0, ,L-1. We suppose we have K such segments; Xl(j), X,($, and that they cover the entire record, Le., that (K-1)DfL N. This segmenting is illustrated in Fig. 1. The method of estimation is as follows. For each segment of length L we calculate a modified periodo-gram. That is, we select a data window W(j), j= 0, L-1, and form the sequences Xl(j)W(j), X,(j) W(j). We then take the finite Fourier transforms A1(n), AK(~) of these sequences. Here ~k(n) xk(j) w(j)e-z~cijnlL 1 L-1 L j-0 and i= Finally, we obtain the K modified periodograms L U Ik(fn) I Ah(%) k 1, 2, K, where f n 0 , o-,L/2 n \" L and 1 Wyj). L j=o The spectral estimate is the average of these periodo", "title": "" } ]
[ { "docid": "f7793f3940fa0fe601254eb7382d4535", "text": "This chapter presents two examples of work for evolving designs by generating useful complex gene structures. where the first example uses a genetic engineering approach whereas the other uses a growth model of form. Both examples have as their motivation to overcome the combinatorial effect of large design spaces by focussing the search in useful areas. This focussing is achieved by starting with design spaces defined by low-level basic genes and creating design spaces defined by increasingly more complex gene structures. In both cases the low-level basic genes represent simple design actions which when executed produce parts of design solutions. Both works are exemplified in the domain of architectural floor plans.", "title": "" }, { "docid": "fa80a34e7718a1eaa271f79ec5cc0a01", "text": "Stream computing is becoming a more and more popular paradigm as it enables the real-time promise of data analytics. Apache Kafka is currently the most popular framework used to ingest the data streams into the processing platforms. However, how to tune Kafka and how much resources to allocate for it remains a challenge for most users, who now rely mainly on empirical approaches to determine the best parameter settings for their deployments. In this poster, we make a through evaluation of several configurations and performance metrics of Kafka in order to allow users avoid bottlenecks, reach its full potential and avoid bottlenecks and eventually leverage some good practice for efficient stream processing.", "title": "" }, { "docid": "393c833de547f262fdbbe53c9c4a7bf3", "text": "Blind image separation consists in processing a set of observed mixed images to separate them into a set of original components. Most of the current blind separation methods assume that the sources are as statistically independent or sparsity as possible given the observations. However, these hypotheses do not hold in real world situation. Considering that the images do not satisfy the independent and sparsity conditions, so the mixed images cannot be separated with independent component analysis and sparse component analysis directly. In this paper, a method based on reorganization of blocked discrete cosine transform (RBDCT) is first proposed to separate the mixed images. Firstly, we get the sparse blocks through RBDCT, and then select the sparsest block adaptively by linear strength in which the mixing matrix can be estimated by clustering methods. In addition, a theoretical result about the linearity of the RBDCT is proved. The effectiveness of the proposed approach is demonstrated by several numerical experiments and compared the results with other classical blind image methods.", "title": "" }, { "docid": "28077980daa51a0c423e1e6298c6b417", "text": "We introduce a method which enables a recurrent dynamics model to be temporally abstract. Our approach, which we call Adaptive Skip Intervals (ASI), is based on the observation that in many sequential prediction tasks, the exact time at which events occur is irrelevant to the underlying objective. Moreover, in many situations, there exist prediction intervals which result in particularly easy-to-predict transitions. 
We show that there are prediction tasks for which we gain both computational efficiency and prediction accuracy by allowing the model to make predictions at a sampling rate which it can choose itself.", "title": "" }, { "docid": "45addba115a5046a9840daf2860e8ddc", "text": "This paper investigates the use of Doppler radar sensor for occupancy monitoring. The feasibility of true presence is explored with Doppler radar occupancy sensors to overcome the limitations of the common occupancy sensors. The common occupancy sensors are more of a motion sensor than a presence detector. Existing cost effective off the shelf System-on-Chip CC2530 RF transceiver is used for developing the radio. The transmitter sends continuous wave signal at 2.405 GHz. Different levels of activity is detected by post-processing sensor signals. Heart and respiratory signals are extracted in order to improve stationary subject detection.", "title": "" }, { "docid": "6ef6cbb60da56bfd53ae945480908d3c", "text": "OBJECTIVE\nIn multidisciplinary prenatal diagnosis centers, the search for a tetrasomy 12p mosaic is requested following the discovery of a diaphragmatic hernia in the antenatal period. Thus, the series of Pallister Killian syndromes (PKS: OMIM 601803) probably overestimate the prevalence of diaphragmatic hernia in this syndrome to the detriment of other morphological abnormalities.\n\n\nMETHODS\nA multicenter retrospective study was conducted with search for assistance from members of the French society for Fetal Pathology. For each identified case, we collected all antenatal and postnatal data. Antenatal data were compared with data from the clinicopathological examination to assess the adequacy of sonographic signs of PKS. A review of the literature on antenatal morphological anomalies in case of PKS completed the study.\n\n\nRESULTS\nTen cases were referred to us: 7 had cytogenetic confirmation and 6 had ultrasound screening. In the prenatal as well as post mortem period, the most common sign is facial dysmorphism (5 cases/6). A malformation of limbs is reported in half of the cases (3 out of 6). Ultrasound examination detected craniofacial dysmorphism in 5 cases out of 6. We found 1 case of left diaphragmatic hernia. Our results are in agreement with the malformation spectrum described in the literature.\n\n\nCONCLUSION\nSome malformation associations could evoke a SPK without classical diaphragmatic hernia.", "title": "" }, { "docid": "421ab26a36eb4f9d97dfb323e394fa38", "text": "Dual-system approaches to psychology explain the fundamental properties of human judgment, decision making, and behavior across diverse domains. Yet, the appropriate characterization of each system is a source of debate. For instance, a large body of research on moral psychology makes use of the contrast between \"emotional\" and \"rational/cognitive\" processes, yet even the chief proponents of this division recognize its shortcomings. Largely independently, research in the computational neurosciences has identified a broad division between two algorithms for learning and choice derived from formal models of reinforcement learning. One assigns value to actions intrinsically based on past experience, while another derives representations of value from an internally represented causal model of the world. 
This division between action- and outcome-based value representation provides an ideal framework for a dual-system theory in the moral domain.", "title": "" }, { "docid": "085ec38c3e756504be93ac0b94483cea", "text": "Low power wide area (LPWA) networks are making spectacular progress from design, standardization, to commercialization. At this time of fast-paced adoption, it is of utmost importance to analyze how well these technologies will scale as the number of devices connected to the Internet of Things inevitably grows. In this letter, we provide a stochastic geometry framework for modeling the performance of a single gateway LoRa network, a leading LPWA technology. Our analysis formulates the unique peculiarities of LoRa, including its chirp spread-spectrum modulation technique, regulatory limitations on radio duty cycle, and use of ALOHA protocol on top, all of which are not as common in today’s commercial cellular networks. We show that the coverage probability drops exponentially as the number of end-devices grows due to interfering signals using the same spreading sequence. We conclude that this fundamental limiting factor is perhaps more significant toward LoRa scalability than for instance spectrum restrictions. Our derivations for co-spreading factor interference found in LoRa networks enables rigorous scalability analysis of such networks.", "title": "" }, { "docid": "e0cb22810c7dc3797e71dd39f966e7ce", "text": "A crystal-based differential oscillator circuit offering simultaneously high stability and ultra-low power consumption is presented for timekeeping and demanding radio applications. The differential circuit structure -in contrast to that of the conventional 3-points- does not require any loading capacitance to be functional and the power consumption can thus be minimized. Although the loading capacitance is omitted a very precise absolute oscillation frequency can be obtained as well as an excellent insensitivity to temperature variations thanks to the reduced parasitics of a deep-submicron technology. The power consumption of a 12.8MHz quartz oscillator including an amplitude regulation mechanism is below 1 µA under a 1.8 to 0.6V supply voltage range.", "title": "" }, { "docid": "5d5c3c8cc8344a8c5d18313bec9adb04", "text": "Research in reinforcement learning (RL) has thus far concentrated on two optimality criteria: the discounted framework, which has been very well-studied, and the average-reward framework, in which interest is rapidly increasing. In this paper, we present a framework called sensitive discount optimality which ooers an elegant way of linking these two paradigms. Although sensitive discount optimality has been well studied in dynamic programming, with several provably convergent algorithms, it has not received any attention in RL. This framework is based on studying the properties of the expected cumulative discounted reward, as discounting tends to 1. Under these conditions, the cumulative discounted reward can be expanded using a Laurent series expansion to yields a sequence of terms, the rst of which is the average reward, the second involves the average adjusted sum of rewards (or bias), etc. We use the sensitive discount optimality framework to derive a new model-free average reward technique, which is related to Q-learning type methods proposed by Bertsekas, Schwartz, and Singh, but which unlike these previous methods, optimizes both the rst and second terms in the Laurent series (average reward and bias values). 
Statement: This paper has not been submitted to any other conference.", "title": "" }, { "docid": "7394f492845ef421d820ac3c7f7a3ff2", "text": "Network intrusion detection system (NIDS) is a tool used to detect and classify the network breaches dynamically in information and communication technologies (ICT) systems in both academia and industries. Adopting a new and existing machine learning classifiers to NIDS has been a significant area in security research due to the fact that the enhancement in detection rate and accuracy is of important in large volume of security audit data including diverse and dynamic characteristics of attacks. This paper evaluates the effectiveness of various shallow and deep networks to NIDS. The shallow and deep networks are trained and evaluated on the KDDCup ‘99’ and NSL-KDD data sets in both binary and multi-class classification settings. The deep networks are performed well in comparison to the shallow networks in most of the experiment configurations. The main reason to this might be a deep network passes information through several layers to learn the underlying hidden patterns of normal and attack network connection records and finally aggregates these learned features of each layer together to effectively distinguish the normal and various attacks of network connection records. Additionally, deep networks have not only performed well in detecting and classifying the known attacks additionally in unknown attacks too. To achieve an acceptable detection rate, we used various configurations of network settings and its parameters in deep networks. All the various configurations of deep network are run up to 1000 epochs in training with a learning rate in the range [0.01-0.5] to effectively capture the time varying patterns of normal and various attacks.", "title": "" }, { "docid": "6cfd834f969e0693c886e9a8216a0250", "text": "Detecting early morphological changes in the brain and making early diagnosis are important for Alzheimer's disease (AD). High resolution magnetic resonance imaging can be used to help diagnosis and prediction of the disease. In this paper, we proposed a machine learning method to discriminate patients with AD or mild cognitive impairment (MCI) from healthy elderly and to predict the AD conversion in MCI patients by computing and analyzing the regional morphological differences of brain between groups. Distance between each pair of subjects was quantified from a symmetric diffeomorphic registration, followed by an embedding algorithm and a learning approach for classification. The proposed method obtained accuracy of 96.5% in differentiating mild AD from healthy elderly with the whole-brain gray matter or temporal lobe as region of interest (ROI), 91.74% in differentiating progressive MCI from healthy elderly and 88.99% in classifying progressive MCI versus stable MCI with amygdala or hippocampus as ROI. This deformation-based method has made full use of the pair-wise macroscopic shape difference between groups and consequently increased the power for discrimination.", "title": "" }, { "docid": "d5a18a82f8e041b717291c69676c7094", "text": "Total sleep deprivation (TSD) for one whole night improves depressive symptoms in 40-60% of treatments. The degree of clinical change spans a continuum from complete remission to worsening (in 2-7%). Other side effects are sleepiness and (hypo-) mania. Sleep deprivation (SD) response shows up in the SD night or on the following day. Ten to 15% of patients respond after recovery sleep only. 
After recovery sleep 50-80% of day 1 responders suffer a complete or partial relapse; but improvement can last for weeks. Sleep seems to lead to relapse although this is not necessarily the case. Treatment effects may be stabilised by antidepressant drugs, lithium, shifting of sleep time or light therapy. The best predictor of a therapeutic effect is a large variability of mood. Current opinion is that partial sleep deprivation (PSD) in the second half of the night is equally effective as TSD. There are, however, indications that TSD is superior. Early PSD (i.e. sleeping between 3:00 and 6:00) has the same effect as late PSD given equal sleep duration. New data cast doubt on the time-honoured conviction that REM sleep deprivation is more effective than non-REM SD. Both may work by reducing total sleep time. SD is an unspecific therapy. The main indication is the depressive syndrome. Some studies show positive effects in Parkinson's disease. It is still unknown how sleep deprivation works.", "title": "" }, { "docid": "6c149f1f6e9dc859bf823679df175afb", "text": "Neurofeedback is attracting renewed interest as a method to self-regulate one's own brain activity to directly alter the underlying neural mechanisms of cognition and behavior. It not only promises new avenues as a method for cognitive enhancement in healthy subjects, but also as a therapeutic tool. In the current article, we present a review tutorial discussing key aspects relevant to the development of electroencephalography (EEG) neurofeedback studies. In addition, the putative mechanisms underlying neurofeedback learning are considered. We highlight both aspects relevant for the practical application of neurofeedback as well as rather theoretical considerations related to the development of new generation protocols. Important characteristics regarding the set-up of a neurofeedback protocol are outlined in a step-by-step way. All these practical and theoretical considerations are illustrated based on a protocol and results of a frontal-midline theta up-regulation training for the improvement of executive functions. Not least, assessment criteria for the validation of neurofeedback studies as well as general guidelines for the evaluation of training efficacy are discussed.", "title": "" }, { "docid": "951713821f87941a0b44e1178873d9aa", "text": "Laboratory measurements of electric fields have been carried out around examples of smart meter devices used in Great Britain. The aim was to quantify exposure of people to radiofrequency signals emitted from smart meter devices operating at 2.4 GHz, and then to compare this with international (ICNIRP) health-related guidelines and with exposures from other telecommunication sources such as mobile phones and Wi-Fi devices. The angular distribution of the electric fields from a sample of 39 smart meter devices was measured in a controlled laboratory environment. The angular direction where the power density was greatest was identified and the equivalent isotropically radiated power was determined in the same direction. Finally, measurements were carried out as a function of distance at the angles where maximum field strengths were recorded around each device. The maximum equivalent power density measured during transmission around smart meter devices at 0.5 m and beyond was 15 mWm-2 , with an estimation of maximum duty factor of only 1%. One outlier device had a maximum power density of 91 mWm-2 . 
All power density measurements reported in this study were well below the 10 W m-2 ICNIRP reference level for the general public. Bioelectromagnetics. 2017;38:280-294. © 2017 Crown copyright. BIOELECTROMAGNETICS © 2017 Wiley Periodicals, Inc.", "title": "" }, { "docid": "7a2c39fd78ad176949af58258393151b", "text": "Quantization of a continuous-value signal into a discrete form (or discretization of amplitude) is a standard task in all analog/digital devices. We consider quantization of a signal (or random process) in a probabilistic framework. The quantization method presented in this paper can be applied to signal coding and storage capacity problems. In order to demonstrate a general approach, both uniform and non-uniform quantization of a Gaussian process are studied in more detail and compared with a conventional piecewise constant approximation. We investigate asymptotic properties of some accuracy characteristics, such as a random quantization rate, in terms of the correlation structure of the original random process when quantization cellwidth tends to zero. Some examples and numerical experiments are presented. AMS subject classifications: Primary 60G15; Secondary 94A29, 94A34", "title": "" }, { "docid": "6a51aba04d0af9351e86b8a61b4529cb", "text": "Cloud computing is a newly emerged technology, and the rapidly growing field of IT. It is used extensively to deliver Computing, data Storage services and other resources remotely over internet on a pay per usage model. Nowadays, it is the preferred choice of every IT organization because it extends its ability to meet the computing demands of its everyday operations, while providing scalability, mobility and flexibility with a low cost. However, the security and privacy is a major hurdle in its success and its wide adoption by organizations, and the reason that Chief Information Officers (CIOs) hesitate to move the data and applications from premises of organizations to the cloud. In fact, due to the distributed and open nature of the cloud, resources, applications, and data are vulnerable to intruders. Intrusion Detection System (IDS) has become the most commonly used component of computer system security and compliance practices that defends network accessible Cloud resources and services from various kinds of threats and attacks. This paper presents an overview of different intrusions in cloud, various detection techniques used by IDS and the types of Cloud Computing based IDS. Then, we analyze some pertinent existing cloud based intrusion detection systems with respect to their various types, positioning, detection time and data source. The analysis also gives strengths of each system, and limitations, in order to evaluate whether they carry out the security requirements of cloud computing environment or not. We highlight the deployment of IDS that uses multiple detection approaches to deal with security challenges in cloud.", "title": "" }, { "docid": "6ac79f297625bada5642733da91129fa", "text": "The past decade has seen the development of several reconfigurable flight control strategies for unmanned aerial vehicles. Although the majority of the research is dedicated to fixed wing vehicles, simulation results do support the application of reconfigurable flight control to unmanned rotorcraft. This paper develops a fault tolerant control architecture that couples techniques for fault detection and identification with reconfigurable flight control to augment the reliability and autonomy of an unmanned aerial vehicle. 
The architecture is applicable to fixed and rotary wing aircraft. An adaptive neural network feedback linearization technique is employed to stabilize the vehicle after the detection of a fault. Actual flight test results support the validity of the approach on an unmanned helicopter. The fault tolerant control architecture recovers aircraft performance after the occurrence of four different faults in the flight control system: three swash-plate actuator faults and a collective actuator fault. All of these faults are catastrophic under nominal conditions", "title": "" }, { "docid": "57fcce4eeac895ef56945008e2c4cd59", "text": "BACKGROUND\nComputational state space models (CSSMs) enable the knowledge-based construction of Bayesian filters for recognizing intentions and reconstructing activities of human protagonists in application domains such as smart environments, assisted living, or security. Computational, i. e., algorithmic, representations allow the construction of increasingly complex human behaviour models. However, the symbolic models used in CSSMs potentially suffer from combinatorial explosion, rendering inference intractable outside of the limited experimental settings investigated in present research. The objective of this study was to obtain data on the feasibility of CSSM-based inference in domains of realistic complexity.\n\n\nMETHODS\nA typical instrumental activity of daily living was used as a trial scenario. As primary sensor modality, wearable inertial measurement units were employed. The results achievable by CSSM methods were evaluated by comparison with those obtained from established training-based methods (hidden Markov models, HMMs) using Wilcoxon signed rank tests. The influence of modeling factors on CSSM performance was analyzed via repeated measures analysis of variance.\n\n\nRESULTS\nThe symbolic domain model was found to have more than 10(8) states, exceeding the complexity of models considered in previous research by at least three orders of magnitude. Nevertheless, if factors and procedures governing the inference process were suitably chosen, CSSMs outperformed HMMs. Specifically, inference methods used in previous studies (particle filters) were found to perform substantially inferior in comparison to a marginal filtering procedure.\n\n\nCONCLUSIONS\nOur results suggest that the combinatorial explosion caused by rich CSSM models does not inevitably lead to intractable inference or inferior performance. This means that the potential benefits of CSSM models (knowledge-based model construction, model reusability, reduced need for training data) are available without performance penalty. However, our results also show that research on CSSMs needs to consider sufficiently complex domains in order to understand the effects of design decisions such as choice of heuristics or inference procedure on performance.", "title": "" } ]
scidocsrr
3066c116e530a72abdf7b82b1b90c135
Tracker-assisted rate adaptation for MPEG DASH live streaming
[ { "docid": "d0a68fbbca8e81f1ed9da8264278c1c5", "text": "Comprising more than 61,000 servers located across nearly 1,000 networks in 70 countries worldwide, the Akamai platform delivers hundreds of billions of Internet interactions daily, helping thousands of enterprises boost the performance and reliability of their Internet applications. In this paper, we give an overview of the components and capabilities of this large-scale distributed computing platform, and offer some insight into its architecture, design principles, operation, and management.", "title": "" } ]
[ { "docid": "6e7d5e2548e12d11afd3389b6d677a0f", "text": "Internet marketing is a field that is continuing to grow, and the online auction concept may be defining a totally new and unique distribution alternative. Very few studies have examined auction sellers and their internet marketing strategies. This research examines the internet auction phenomenon as it relates to the marketing mix of online auction sellers. The data in this study indicate that, whilst there is great diversity among businesses that utilise online auctions, distinct cost leadership and differentiation marketing strategies are both evident. These two approaches are further distinguished in terms of the internet usage strategies employed by each group.", "title": "" }, { "docid": "5401143c61a2a0ad2901bd72a086368b", "text": "In this paper we provide an implementation, evaluation, and analysis of PowerHammer, a malware (bridgeware [1]) that uses power lines to exfiltrate data from air-gapped computers. In this case, a malicious code running on a compromised computer can control the power consumption of the system by intentionally regulating the CPU utilization. Data is modulated, encoded, and transmitted on top of the current flow fluctuations, and then it is conducted and propagated through the power lines. This phenomena is known as a ’conducted emission’. We present two versions of the attack. Line level powerhammering: In this attack, the attacker taps the in-home power lines that are directly attached to the electrical outlet. Phase level power-hammering: In this attack, the attacker taps the power lines at the phase level, in the main electrical service panel. In both versions of the attack, the attacker measures the emission conducted and then decodes the exfiltrated data. We describe the adversarial attack model and present modulations and encoding schemes along with a transmission protocol. We evaluate the covert channel in different scenarios and discuss signal-to-noise (SNR), signal processing, and forms of interference. We also present a set of defensive countermeasures. Our results show that binary data can be covertly exfiltrated from air-gapped computers through the power lines at bit rates of 1000 bit/sec for the line level power-hammering attack and 10 bit/sec for the phase level power-hammering attack.", "title": "" }, { "docid": "c09e5f5592caab9a076d92b4f40df760", "text": "Producing a comprehensive overview of the chemical content of biologically-derived material is a major challenge. Apart from ensuring adequate metabolome coverage and issues of instrument dynamic range, mass resolution and sensitivity, there are major technical difficulties associated with data pre-processing and signal identification when attempting large scale, high-throughput experimentation. To address these factors direct infusion or flow infusion electrospray mass spectrometry has been finding utility as a high throughput metabolite fingerprinting tool. With little sample pre-treatment, no chromatography and instrument cycle times of less than 5 min it is feasible to analyse more than 1,000 samples per week. Data pre-processing is limited to aligning extracted mass spectra and mass-intensity matrices are generally ready in a working day for a month’s worth of data mining and hypothesis generation. ESI-MS fingerprinting has remained rather qualitative by nature and as such ion suppression does not generally compromise data information content as originally suggested when the methodology was first introduced. 
This review will describe how the quality of data has improved through use of nano-flow infusion and mass-windowing approaches, particularly when using high resolution instruments. The increasingly wider availability of robust high accurate mass instruments actually promotes ESI-MS from a merely fingerprinting tool to the ranks of metabolite profiling and combined with MS/MS capabilities of hybrid instruments improved structural information is available concurrently. We summarise current applications in a wide range of fields where ESI-MS fingerprinting has proved to be an excellent tool for “first pass” metabolome analysis of complex biological samples. The final part of the review describes a typical workflow with reference to recently published data to emphasise key aspects of overall experimental design.", "title": "" }, { "docid": "d4c8acbbee72b8a9e880e2bce6e2153a", "text": "This paper presents a simple linear operator that accurately estimates the position and parameters of ellipse features. Based on the dual conic model, the operator avoids the intermediate stage of precisely extracting individual edge points by exploiting directly the raw gradient information in the neighborhood of an ellipse's boundary. Moreover, under the dual representation, the dual conic can easily be constrained to a dual ellipse when minimizing the algebraic distance. The new operator is assessed and compared to other estimation approaches in simulation as well as in real situation experiments and shows better accuracy than the best approaches, including those limited to the center position.", "title": "" }, { "docid": "21511302800cd18d21dbc410bec3cbb2", "text": "We investigate theoretical and practical aspects of the design of far-field RF power extraction systems consisting of antennas, impedance matching networks and rectifiers. Fundamental physical relationships that link the operating bandwidth and range are related to technology dependent quantities like threshold voltage and parasitic capacitances. This allows us to design efficient planar antennas, coupled resonator impedance matching networks and low-power rectifiers in standard CMOS technologies (0.5-mum and 0.18-mum) and accurately predict their performance. Experimental results from a prototype power extraction system that operates around 950 MHz and integrates these components together are presented. Our measured RF power-up threshold (in 0.18-mum, at 1 muW load) was 6 muWplusmn10%, closely matching the predicted value of 5.2 muW.", "title": "" }, { "docid": "b137e24f41def95c5bb4776de48804ef", "text": "Adequate sleep is essential for general healthy functioning. This paper reviews recent research on the effects of chronic sleep restriction on neurobehavioral and physiological functioning and discusses implications for health and lifestyle. Restricting sleep below an individual's optimal time in bed (TIB) can cause a range of neurobehavioral deficits, including lapses of attention, slowed working memory, reduced cognitive throughput, depressed mood, and perseveration of thought. Neurobehavioral deficits accumulate across days of partial sleep loss to levels equivalent to those found after 1 to 3 nights of total sleep loss. Recent experiments reveal that following days of chronic restriction of sleep duration below 7 hours per night, significant daytime cognitive dysfunction accumulates to levels comparable to that found after severe acute total sleep deprivation. 
Additionally, individual variability in neurobehavioral responses to sleep restriction appears to be stable, suggesting a trait-like (possibly genetic) differential vulnerability or compensatory changes in the neurobiological systems involved in cognition. A causal role for reduced sleep duration in adverse health outcomes remains unclear, but laboratory studies of healthy adults subjected to sleep restriction have found adverse effects on endocrine functions, metabolic and inflammatory responses, suggesting that sleep restriction produces physiological consequences that may be unhealthy.", "title": "" }, { "docid": "5c08689daeea47930758510491cebac9", "text": "Structural identity is a concept of symmetry in which network nodes are identified according to the network structure and their relationship to other nodes. Structural identity has been studied in theory and practice over the past decades, but only recently has it been addressed with representational learning techniques. This work presents struc2vec, a novel and flexible framework for learning latent representations for the structural identity of nodes. struc2vec uses a hierarchy to measure node similarity at different scales, and constructs a multilayer graph to encode structural similarities and generate structural context for nodes. Numerical experiments indicate that state-of-the-art techniques for learning node representations fail in capturing stronger notions of structural identity, while struc2vec exhibits much superior performance in this task, as it overcomes limitations of prior approaches. As a consequence, numerical experiments indicate that struc2vec improves performance on classification tasks that depend more on structural identity.", "title": "" }, { "docid": "c966c67c098e8178e6c05b6d446f6dd3", "text": "Data are today an asset more critical than ever for all organizations we may think of. Recent advances and trends, such as sensor systems, IoT, cloud computing, and data analytics, are making possible to pervasively, efficiently, and effectively collect data. However for data to be used to their full power, data security and privacy are critical. Even though data security and privacy have been widely investigated over the past thirty years, today we face new difficult data security and privacy challenges. Some of those challenges arise from increasing privacy concerns with respect to the use of data and from the need of reconciling privacy with the use of data for security in applications such as homeland protection, counterterrorism, and health, food and water security. Other challenges arise because the deployments of new data collection and processing devices, such as those used in IoT systems, increase the data attack surface. In this paper, we discuss relevant concepts and approaches for data security and privacy, and identify research challenges that must be addressed by comprehensive solutions to data security and privacy.", "title": "" }, { "docid": "9343a2775b5dac7c48c1c6cec3d0a59c", "text": "The Extended String-to-String Correction Problem [ESSCP] is defined as the problem of determining, for given strings A and B over alphabet V, a minimum-cost sequence S of edit operations such that S(A) = B. The sequence S may make use of the operations: Change, Insert, Delete and Swap, each of constant cost W_C, W_I, W_D, and W_S respectively. Swap permits any pair of adjacent characters to be interchanged.\n The principal results of this paper are:\n (1) a brief presentation of an algorithm (the CELLAR algorithm) which solves ESSCP in time O(|A| * |B| * |V|^s * s), where s = min(4W_C, W_I + W_D)/W_S + 1;\n (2) presentation of polynomial time algorithms for the cases (a) W_S = 0, (b) W_S > 0, W_C = W_I = W_D = ∞;\n (3) proof that ESSCP, with W_I < W_C = W_D = ∞, 0 < W_S < ∞, suitably encoded, is NP-complete. (The remaining case, W_S = ∞, reduces ESSCP to the string-to-string correction problem of [1], where an O(|A| * |B|) algorithm is given.) Thus, “almost all” ESSCP's can be solved in deterministic polynomial time, but the general problem is NP-complete.", "title": "" }, { "docid": "36ed994422af57284e1c98b41b46a9fc", "text": "The atypical face scanning patterns in individuals with Autism Spectrum Disorder (ASD) has been repeatedly discovered by previous research. The present study examined whether their face scanning patterns could be potentially useful to identify children with ASD by adopting the machine learning algorithm for the classification purpose. Particularly, we applied the machine learning method to analyze an eye movement dataset from a face recognition task [Yi et al., 2016], to classify children with and without ASD. We evaluated the performance of our model in terms of its accuracy, sensitivity, and specificity of classifying ASD. Results indicated promising evidence for applying the machine learning algorithm based on the face scanning patterns to identify children with ASD, with a maximum classification accuracy of 88.51%. Nevertheless, our study is still preliminary with some constraints that may apply in the clinical practice. Future research should shed light on further valuation of our method and contribute to the development of a multitask and multimodel approach to aid the process of early detection and diagnosis of ASD. Autism Res 2016, 9: 888-898. © 2016 International Society for Autism Research, Wiley Periodicals, Inc.", "title": "" }, { "docid": "e1913f692dca6f716dd5152a07b17d7b", "text": "The feel of air painting is like none other! It is like being a dancer, conductor and magician all wrapped up in one. As legendary science fiction writer and visionary, Arthur C. Clarke, famously said, \"any sufficiently advanced technology is indistinguishable from magic\". Air painting is indeed magical! Imagine being able to control every aspect of the painting process through only gesture and movement of your fingers in the air, and being able to paint with up to ten simultaneous brush strokes controlled by the movement of your fingers and thumbs on both hands. This is now a reality with the combination of the revolutionary new Leap Motion Controller providing 3-D motion-generated input data to the Corel Painter Freestyle application. It is the powerful synergy of these two complementary technologies applied to fine art observational painting that is the focus of this talk.", "title": "" }, { "docid": "ca1005dddee029e92bc50717513a53d0", "text": "Citation recommendation is an interesting but challenging research problem.
Most existing studies assume that all papers adopt the same criterion and follow the same behavioral pattern in deciding relevance and authority of a paper. However, in reality, papers have distinct citation behavioral patterns when looking for different references, depending on paper content, authors and target venues. In this study, we investigate the problem in the context of heterogeneous bibliographic networks and propose a novel cluster-based citation recommendation framework, called ClusCite, which explores the principle that citations tend to be softly clustered into interest groups based on multiple types of relationships in the network. Therefore, we predict each query's citations based on related interest groups, each having its own model for paper authority and relevance. Specifically, we learn group memberships for objects and the significance of relevance features for each interest group, while also propagating relative authority between objects, by solving a joint optimization problem. Experiments on both DBLP and PubMed datasets demonstrate the power of the proposed approach, with 17.68% improvement in Recall@50 and 9.57% growth in MRR over the best performing baseline.", "title": "" }, { "docid": "985e19556726656ddfeb07703d27dde7", "text": "PURPOSE\nThis study evaluated the long-term survival of anterior porcelain laminate veneers placed with and without incisal porcelain coverage.\n\n\nMATERIALS AND METHODS\nTwo prosthodontists in a private dental practice placed 110 labial feldspathic porcelain veneers in 50 patients; 46 veneers were provided with incisal porcelain coverage, and 64 were not. The veneers were evaluated retrospectively from case records for up to 7 years (mean 4 years).\n\n\nRESULTS\nAt 5, 6, and 7 years, the cumulative survival estimates were 95.8% for veneers with incisal porcelain coverage and 85.5% for those without incisal coverage. The difference was not statistically significant. Six of the nine failures occurred from porcelain fracture in the veneers without incisal coverage.\n\n\nCONCLUSION\nAlthough there was a trend for better long-term survival of the veneers with incisal porcelain coverage, this finding was not statistically significant.", "title": "" }, { "docid": "e2ccfd1fa61cd49a26ebb3caffdcf646", "text": "This study proposes and tests a research model which was developed based on the uses-and-gratifications theory. The aim of this study was to investigate if selected factors have differential predicting power on the use of Facebook and Google service in Taiwan. This study employed seven constructs: purposive value, hedonic value, social identity, social support, interpersonal relationship, personality traits, and intimacy as the factors predicting Facebook and Google usage. An electronic survey technique was used to collect data from Internet. The results showed that hedonic value and social identity constructs can significantly predict Facebook usage and purposive value has significant predicting power on Google usage. The construct intimacy is the most significant factor for both Google and Facebook usages. Our findings make suggestions for social network sites (SNSs) providers that to differentiate their SNSs quality from others’, both functional aspects and emotional factors need to be taken into consideration.", "title": "" }, { "docid": "8bba758fac60ce1139b7a6809bbe3efd", "text": "BACKGROUND\nYoung women with polycystic ovary syndrome (PCOS) have a high risk of developing endometrial carcinoma. 
There is a need for the development of new medical therapies that can reduce the need for surgical intervention so as to preserve the fertility of these patients. The aim of the study was to describe and discuss cases of PCOS and insulin resistance (IR) women with early endometrial carcinoma while being co-treated with Diane-35 and metformin.\n\n\nMETHODS\nFive PCOS-IR women who were scheduled for diagnosis and therapy for early endometrial carcinoma were recruited. The hospital records and endometrial pathology reports were reviewed. All patients were co-treated with Diane-35 and metformin for 6 months to reverse the endometrial carcinoma and preserve their fertility. Before, during, and after treatment, endometrial biopsies and blood samples were obtained and oral glucose tolerance tests were performed. Endometrial pathology was evaluated. Body weight (BW), body mass index (BMI), follicle-stimulating hormone (FSH), luteinizing hormone (LH), total testosterone (TT), sex hormone-binding globulin (SHBG), free androgen index (FAI), insulin area under curve (IAUC), and homeostasis model assessment of insulin resistance (HOMA-IR) were determined.\n\n\nRESULTS\nClinical stage 1a, low grade endometrial carcinoma was confirmed before treatment. After 6 months of co-treatment, all patients showed normal epithelia. No evidence of atypical hyperplasia or endometrial carcinoma was found. Co-treatment resulted in significant decreases in BW, BMI, TT, FAI, IAUC, and HOMA-IR in parallel with a significant increase in SHBG. There were no differences in the FSH and LH levels after co-treatment.\n\n\nCONCLUSIONS\nCombined treatment with Diane-35 and metformin has the potential to revert the endometrial carcinoma into normal endometrial cells in PCOS-IR women. The cellular and molecular mechanisms behind this effect merit further investigation.", "title": "" }, { "docid": "4e2ca9943ba585211f8e5eb9de4c8675", "text": "This paper describes progress made on the T-Wing tail-sitter UAV programme currently being undertaken via a collaborative research agreement between Sonacom Pty Ltd and the University of Sydney. This vehicle is being developed in response to a perceived requirement for a more flexible surveillance and remote sensing platform than is currently available. Missions for such a platform include coastal surveillance, defence intelligence gathering and environmental monitoring. The use of an unmanned air-vehicle (UAV) with a vertical takeoff and landing (VTOL) capability that can still enjoy efficient horizontal flight promises significant advantages over other vehicles for such missions. One immediate advantage is the potential to operate from small patrol craft and frigates equipped with helipads. In this role such a vehicle could be used for maritime surveillance; sonobuoy or other store deployment; communication relay; convoy protection; and support for ground and helicopter operations. The programme currently being undertaken involves building a 50-lb fully autonomous VTOL tail-sitter UAV to demonstrate successful operation near the ground in windy conditions and to perform the transition maneuvers between vertical and horizontal flight. This will then allow the development of a full-size prototype vehicle, (The “Mirli”) to be undertaken as a prelude to commercial production. 
The Need for a Tail-Sitter UAV Defence Applications Although conflicts over the last 20 years have demonstrated the importance of UAV systems in facilitating real-time intelligence gathering, it is clear that most current systems still do not possess the operational flexibility that is desired by force commanders. One of the reasons for this is that most UAVs have adopted relatively conventional aircraft configurations. This leads directly to operational limitations because it either necessitates take-off and landing from large fixed runways; or the use of specialized launch and recovery methods such as catapults, rockets, nets, parachutes and airbags. One potential solution to these operational difficulties is a tail-sitter VTOL UAV. Such a vehicle has few operational requirements other than a small clear area for take-off and landing. While other VTOL concepts share this operational advantage over conventional vehicles the tail-sitter has some other unique benefits. In comparison to helicopters, a tailsitter vehicle does not suffer the same performance penalties in terms of dash-speed, range and endurance because it spends the majority of its mission in a more efficient airplane flight mode. The only other VTOL concepts that combine vertical and horizontal flight are the tiltrotor and tilt-wing, however, both involve significant extra mechanical complexity in comparison to the tail-sitter vehicle, which has fixed wings and nacelles. A further simplification can be made in comparison to other VTOL designs by the use of prop-wash over wing and fin mounted control surfaces to effect control during vertical flight, thus obviating the need for cyclic rotor control. For naval forces, a tail-sitter VTOL UAV has enormous potential as an aircraft that can be deployed from small ships and used for long-range reconnaissance and surveillance; over-the-horizon detection of low-flying missiles and aircraft; deployment of remote acoustic sensors; and as a platform for aerial support and communications. The vehicle could also be used in anti-submarine activities and anti-surface operations and is ideal for battlefield monitoring over both sea and land. The obvious benefit in comparison to a conventional UAV is the operational flexibility provided by the vertical launch and recovery of the vehicle. The US Navy and Marine Corps who anticipate spending approximately US$350m on their VTUAV program have clearly recognized this fact. Figure 1: A Typical Naval UAV Mission: Monitoring Acoustic Sensors For ground based forces a tail-sitter vehicle is also attractive because it allows UAV systems to be quickly deployed from small cleared areas with a minimum of support equipment. This makes the UAVs less vulnerable to attacks on fixed bases without the need to set-up catapult launchers or recovery nets. It is envisaged that ground forces would mainly use small VTOL UAVs as reconnaissance and communication relay platforms. Civilian Applications Besides the defence requirements, there are also many civilian applications for which a VTOL UAV is admirably suited. Coastal surveillance to protect national borders from illegal immigrants and illicit drugs is clearly an area where such vehicles could be used. The VTOL characteristics in this role are an advantage, as they allow such vehicles to be based in remote areas without the fixed infrastructure of airstrips, or to be operated from small coastal patrol vessels.
Further applications are also to be found in mineral exploration and environmental monitoring in remote locations. While conventional vehicles could of course accomplish such tasks their effectiveness may be limited if forced to operate from bases a long way from the area of interest. Tail-Sitters: A Historical Perspective Although tail-sitter vehicles have been investigated over the last 50 years as a means to combine the operational advantages of vertical flight enjoyed by helicopters with the better horizontal flight attributes of conventional airplanes, no successful tail-sitter vehicles have ever been produced. One of the primary reasons for this is that tail-sitters such as the Convair XF-Y1 and Lockheed XF-V1 (Figure 2) experimental vehicles of the 1950s proved to be very difficult to pilot during vertical flight and the transition maneuvers. Figure 2: Convair XF-Y1 and Lockheed XF-V1 Tail-Sitter Aircraft. 2 With the advent of modern computing technology and improvements in sensor reliability, capability and cost it is now possible to overcome these piloting disadvantages by transitioning the concept to that of an unmanned vehicle. With the pilot replaced by modern control systems it should be possible to realise the original promise of the tail-sitter configuration. The tail-sitter aircraft considered in this paper differs substantially from its earlier counterparts and is most similar in configuration to the Boeing Heliwing vehicle of the early 1990s. This vehicle had a 1450-lb maximum takeoff weight (MTOW) with a 200-lb payload, 5-hour endurance and 180 kts maximum speed and used twin rotors powered by a single 240 SHP turbine engine. A picture of the Heliwing is shown in Figure 3. Figure 3: Boeing Heliwing Vehicle", "title": "" }, { "docid": "8092ba3c116d33900e72ff79994ac45c", "text": "We describe an expression-invariant method for face recognition by fitting an identity/expression separated 3D Morphable Model to shape data. The expression model greatly improves recognition and retrieval rates in the uncooperative setting, while achieving recognition rates on par with the best recognition algorithms in the face recognition great vendor test. The fitting is performed with a robust nonrigid ICP algorithm. It is able to perform face recognition in a fully automated scenario and on noisy data. The system was evaluated on two datasets, one with a high noise level and strong expressions, and the standard UND range scan database, showing that while expression invariance increases recognition and retrieval performance for the expression dataset, it does not decrease performance on the neutral dataset. The high recognition rates are achieved even with a purely shape based method, without taking image data into account.", "title": "" }, { "docid": "8c428f4a51091f62f1af26c85dd588fc", "text": "In this study, we explored application of Word2Vec and Doc2Vec for sentiment analysis of clinical discharge summaries. We applied unsupervised learning since the data sets did not have sentiment annotations. Note that unsupervised learning is a more realistic scenario than supervised learning which requires an access to a training set of sentiment-annotated data. We aim to detect if there exists any underlying bias towards or against a certain disease. We used SentiWordNet to establish a gold sentiment standard for the data sets and evaluate performance of Word2Vec and Doc2Vec methods. 
We have shown that the Word2vec and Doc2Vec methods complement each other’s results in sentiment analysis of the data sets.", "title": "" }, { "docid": "9a43387bb85efe85e9395a90a7934b5f", "text": "0. Introduction This is a manual for coding Centering Theory (Grosz et al., 1995) in Spanish. The manual is still under revision. The coding is being done on two sets of corpora: • ISL corpus. A set of task-oriented dialogues in which participants try to find a date where they can meet. Distributed by the Interactive Systems Lab at Carnegie Mellon University. Transcription conventions for this corpus can be found in Appendix A. • CallHome corpus. Spontaneous telephone conversations, distributed by the Linguistics Data Consortium at the University of Pennsylvania. Information about this corpus can be obtained from the LDC. This manual provides guidelines for how to segment discourse (Section 1), what to include in the list of forward-looking centers (Section 2), and how to rank the list (Section 3). In Section 4, we list some unresolved issues. 1. Utterance segmentation 1.1 Utterance In this section, we discuss how to segment discourse into utterances. Besides general segmentation of coordinated and subordinated clauses, we discuss how to treat some spoken language phenomena, such as false starts. In general, an utterance U is a tensed clause. Because we are analyzing telephone conversations, a turn may be a clause or it may be not. For those cases in which the turn is not a clause, a turn is considered an utterance if it contains entities. The first pass in segmentation is to break the speech into intonation units. For the ISL corpus, an utterance U is defined as an intonation unit marked by either {period}, {quest} or {seos} (see Appendix A for details on transcription). Note that {comma}, unless it is followed by {seos}, does not define an utterance. In the example below, (1c.) corresponds to the beginning of a turn by a different speaker. However, even though (1c.) is not a tensed clause, it is treated as an utterance because it contains entities, it is followed by {comma} {seos}, and it does not seem to belong to the following utterance.", "title": "" }, { "docid": "aa8ea8624477a02790df66898f86657b", "text": "Extensible Markup Language (XML) is an extremely simple dialect of SGML which is completely described in this document. The goal is to enable generic SGML to be served, received, and processed on the Web in the way that is now possible with HTML. For this reason, XML has been designed for ease of implementation, and for interoperability with both SGML and HTML. Note on status of this document: This is even more of a moving target than the typical W3C working draft. Several important decisions on the details of XML are still outstanding members of the W3C SGML Working Group will recognize these areas of particular volatility in the spec, but those who are not intimately familiar with the deliberative process should be careful to avoid actions based on the content of this document, until the notice you are now reading has been removed.", "title": "" } ]
scidocsrr
34324b45f68fae5a814a5a8aa36a3091
An Analysis of ISO 26262: Using Machine Learning Safely in Automotive Software
[ { "docid": "a3fe3b92fe53109888b26bb03c200180", "text": "Using Artificial Neural Networks (ANNs) in critical applications can be challenging due to the often experimental nature of ANN construction and the \"black box\" label that is frequently attached to ANNs. Well-accepted process models exist for algorithmic software development which facilitate software validation and acceptance. The software development process model presented herein is targeted specifically toward artificial neural networks in critical applications. The model is not unwieldy, and could easily be used on projects without critical aspects. This should be of particular interest to organizations that use ANNs and need to maintain or achieve a Capability Maturity Model (CMM) or ISO software development rating. Further, while this model is aimed directly at neural network development, with minor modifications, the model could be applied to any technique wherein knowledge is extracted from existing data, such as other numeric approaches or knowledge-based systems.", "title": "" } ]
[ { "docid": "55fe7f5acaa13878f2ac1403725fafa0", "text": "One of the obstacles in research activities concentrating on environmental sound classification is the scarcity of suitable and publicly available datasets. This paper tries to address that issue by presenting a new annotated collection of 2000 short clips comprising 50 classes of various common sound events, and an abundant unified compilation of 250000 unlabeled auditory excerpts extracted from recordings available through the Freesound project. The paper also provides an evaluation of human accuracy in classifying environmental sounds and compares it to the performance of selected baseline classifiers using features derived from mel-frequency cepstral coefficients and zero-crossing rate.", "title": "" }, { "docid": "60e055b99c8eb153e0f4ff8d9adc1c33", "text": "We present a recurrent neural net (RNN) model of text normalization — defined as the mapping of written text to its spoken form, and a description of the open-source dataset that we used in our experiments. We show that while the RNN model achieves very high overall accuracies, there remain errors that would be unacceptable in a speech application like TTS.We then show that a simple FST-based filter can help mitigate those errors. Even with that mitigation challenges remain, and we end the paper outlining some possible solutions. In releasing our data we are thereby inviting others to help solve this problem.", "title": "" }, { "docid": "a65e92a6c24d305f648974f72abe3cec", "text": "AIM\nTo estimate the prevalence of glaucoma among people worldwide.\n\n\nMETHODS\nAvailable published data on glaucoma prevalence were reviewed to determine the relation of open angle and angle closure glaucoma with age in people of European, African, and Asian origin. A comparison was made with estimated world population data for the year 2000.\n\n\nRESULTS\nThe number of people with primary glaucoma in the world by the year 2000 is estimated at nearly 66.8 million, with 6.7 million suffering from bilateral blindness. In developed countries, fewer than 50% of those with glaucoma are aware of their disease. In the developing world, the rate of known disease is even lower.\n\n\nCONCLUSIONS\nGlaucoma is the second leading cause of vision loss in the world. Improved methods of screening and therapy for glaucoma are urgently needed.", "title": "" }, { "docid": "205907b70ce7d0d68036b5a1d41fb898", "text": "Customer relationship management (CRM) is a combination of people, processes and technology that seeks to understand a company’s customers. It is an integrated approach to managing relationships by focusing on customer retention and relationship development. CRM has evolved from advances in information technology and organizational changes in customer-centric processes. Companies that successfully implement CRM will reap the rewards in customer loyalty and long run profitability. However, successful implementation is elusive to many companies, mostly because they do not understand that CRM requires company-wide, cross-functional, customer-focused business process re-engineering. Although a large portion of CRM is technology, viewing CRM as a technology-only solution is likely to fail. Managing a successful CRM implementation requires an integrated and balanced approach to technology, process, and people.", "title": "" }, { "docid": "339efad8a055a90b43abebd9a4884baa", "text": "The paper presents an investigation into the role of virtual reality and web technologies in the field of distance education. 
Within this frame, special emphasis is given on the building of web-based virtual learning environments so as to successfully fulfill their educational objectives. In particular, basic pedagogical methods are studied, focusing mainly on the efficient preparation, approach and presentation of learning content, and specific designing rules are presented considering the hypermedia, virtual and educational nature of this kind of applications. The paper also aims to highlight the educational benefits arising from the use of virtual reality technology in medicine and study the emerging area of web-based medical simulations. Finally, an innovative virtual reality environment for distance education in medicine is demonstrated. The proposed environment reproduces conditions of the real learning process and enhances learning through a real-time interactive simulator. Keywords—Distance education, medicine, virtual reality, web.", "title": "" }, { "docid": "78ca8024a825fc8d5539b899ad34fc18", "text": "In this paper, we examine whether managers use optimistic and pessimistic language in earnings press releases to provide information about expected future firm performance to the market, and whether the market responds to optimistic and pessimistic language usage in earnings press releases after controlling for the earnings surprise and other factors likely to influence the market’s response to the earnings announcement. We use textual-analysis software to measure levels of optimistic and pessimistic language for a sample of approximately 24,000 earnings press releases issued between 1998 and 2003. We find a positive (negative) association between optimistic (pessimistic) language usage and future firm performance and a significant incremental market response to optimistic and pessimistic language usage in earnings press releases. Results suggest managers use optimistic and pessimistic language to provide credible information about expected future firm performance to the market, and that the market responds to managers’ language usage.", "title": "" }, { "docid": "c51bcfb779a30dff8e0ef58c6a7a1634", "text": "We describe algorithms for creating, storing and viewing high-resolution immersive surround videos. Given a set of unit cameras designed to be almost aligned at a common nodal point, we first present a versatile process for stitching seamlessly synchronized streams of videos into a single surround video corresponding to the video of the multihead camera. We devise a general registration process onto raymaps based on minimizing a tailored objective function. We review and introduce new raymaps with good sampling properties. We then give implementation details on the surround video viewer and present experimental results on both real-world acquired and computer-graphics rendered full surround videos. We conclude by mentioning potential applications and discuss ongoing related activities. Video supplements: http://www.csl.sony.co.jp/person/nielsen", "title": "" }, { "docid": "35e73af4b9f6a32c0fd4e31fde871f8a", "text": "In this paper, a novel three-phase soft-switching inverter is presented. The inverter-switch turn on and turn off are performed under zero-voltage switching condition. This inverter has only one auxiliary switch, which is also soft switched. Having one auxiliary switch simplifies the control circuit considerably. The proposed inverter is analyzed, and its operating modes are explained in details. The design considerations of the proposed inverter are presented. 
The experimental results of the prototype inverter confirm the theoretical analysis.", "title": "" }, { "docid": "bffd767503e0ab9627fc8637ca3b2efb", "text": "Automatically searching for optimal hyperparameter configurations is of crucial importance for applying deep learning algorithms in practice. Recently, Bayesian optimization has been proposed for optimizing hyperparameters of various machine learning algorithms. Those methods adopt probabilistic surrogate models like Gaussian processes to approximate and minimize the validation error function of hyperparameter values. However, probabilistic surrogates require accurate estimates of sufficient statistics (e.g., covariance) of the error distribution and thus need many function evaluations with a sizeable number of hyperparameters. This makes them inefficient for optimizing hyperparameters of deep learning algorithms, which are highly expensive to evaluate. In this work, we propose a new deterministic and efficient hyperparameter optimization method that employs radial basis functions as error surrogates. The proposed mixed integer algorithm, called HORD, searches the surrogate for the most promising hyperparameter values through dynamic coordinate search and requires many fewer function evaluations. HORD does well in low dimensions but it is exceptionally better in higher dimensions. Extensive evaluations on MNIST and CIFAR-10 for four deep neural networks demonstrate HORD significantly outperforms the well-established Bayesian optimization methods such as GP, SMAC and TPE. For instance, on average, HORD is more than 6 times faster than GP-EI in obtaining the best configuration of 19 hyperparameters.", "title": "" }, { "docid": "9692ab0e46c6e370aeb171d3224f5d23", "text": "With the advent technology of Remote Sensing (RS) and Geographic Information Systems (GIS), a network transportation (Road) analysis within this environment has now become a common practice in many application areas. But a main problem in the network transportation analysis is the less quality and insufficient maintenance policies. This is because of the lack of funds for infrastructure. This demand for information requires new approaches in which data related to transportation network can be identified, collected, stored, retrieved, managed, analyzed, communicated and presented, for the decision support system of the organization. The adoption of newly emerging technologies such as Geographic Information System (GIS) can help to improve the decision making process in this area for better use of the available limited funds. The paper reviews the applications of GIS technology for transportation network analysis.", "title": "" }, { "docid": "db422d1fcb99b941a43e524f5f2897c2", "text": "AN INDIVIDUAL CORRELATION is a correlation in which the statistical object or thing described is indivisible. The correlation between color and illiteracy for persons in the United States, shown later in Table I, is an individual correlation, because the kind of thing described is an indivisible unit, a person. In an individual correlation the variables are descriptive properties of individuals, such as height, income, eye color, or race, and not descriptive statistical constants such as rates or means. In an ecological correlation the statistical object is a group of persons. The correlation between the percentage of the population which is Negro and the percentage of the population which is illiterate for the 48 states, shown later as Figure 2, is an ecological correlation. 
The thing described is the population of a state, and not a single individual. The variables are percentages, descriptive properties of groups, and not descriptive properties of individuals. Ecological correlations are used in an impressive number of quantitative sociological studies, some of which by now have attained the status of classics: Cowles’ ‘‘Statistical Study of Climate in Relation to Pulmonary Tuberculosis’’; Gosnell’s ‘‘Analysis of the 1932 Presidential Vote in Chicago,’’ Factorial and Correlational Analysis of the 1934 Vote in Chicago,’’ and the more elaborate factor analysis in Machine Politics; Ogburn’s ‘‘How women vote,’’ ‘‘Measurement of the Factors in the Presidential Election of 1928,’’ ‘‘Factors in the Variation of Crime Among Cities,’’ and Groves and Ogburn’s correlation analyses in American Marriage and Family Relationships; Ross’ study of school attendance in Texas; Shaw’s Delinquency Areas study of the correlates of delinquency, as well as The more recent analyses in Juvenile Delinquency in Urban Areas; Thompson’s ‘‘Some Factors Influencing the Ratios of Children to Women in American Cities, 1930’’; Whelpton’s study of the correlates of birth rates, in ‘‘Geographic and Economic Differentials in Fertility;’’ and White’s ‘‘The Relation of Felonies to Environmental Factors in Indianapolis.’’ Although these studies and scores like them depend upon ecological correlations, it is not because their authors are interested in correlations between the properties of areas as such. Even out-and-out ecologists, in studying delinquency, for example, rely primarily upon data describing individuals, not areas. In each study which uses ecological correlations, the obvious purpose is to discover something about the behavior of individuals. Ecological correlations are used simply because correlations between the properties of individuals are not available. In each instance, however, the substitution is made tacitly rather than explicitly. The purpose of this paper is to clarify the ecological correlation problem by stating, mathematically, the exact relation between ecological and individual correlations, and by showing the bearing of that relation upon the practice of using ecological correlations as substitutes for individual correlations.", "title": "" }, { "docid": "b754b1d245aa68aeeb37cf78cf54682f", "text": "This paper postulates that water structure is altered by biomolecules as well as by disease-enabling entities such as certain solvated ions, and in turn water dynamics and structure affect the function of biomolecular interactions. Although the structural and dynamical alterations are subtle, they perturb a well-balanced system sufficiently to facilitate disease. We propose that the disruption of water dynamics between and within cells underlies many disease conditions. We survey recent advances in magnetobiology, nanobiology, and colloid and interface science that point compellingly to the crucial role played by the unique physical properties of quantum coherent nanomolecular clusters of magnetized water in enabling life at the cellular level by solving the “problems” of thermal diffusion, intracellular crowding, and molecular self-assembly. Interphase water and cellular surface tension, normally maintained by biological sulfates at membrane surfaces, are compromised by exogenous interfacial water stressors such as cationic aluminum, with consequences that include greater local water hydrophobicity, increased water tension, and interphase stretching. 
The ultimate result is greater “stiffness” in the extracellular matrix and either the “soft” cancerous state or the “soft” neurodegenerative state within cells. Our hypothesis provides a basis for understanding why so many idiopathic diseases of today are highly stereotyped and pluricausal. OPEN ACCESS Entropy 2013, 15 3823", "title": "" }, { "docid": "264a6ada0d4ea06e06c1c91fbaa16752", "text": "Distributional semantic methods to approximate word meaning with context vectors have been very successful empirically, and the last years have seen a surge of interest in their compositional extension to phrases and sentences. We present here a new model that, like those of Coecke et al. (2010) and Baroni and Zamparelli (2010), closely mimics the standard Montagovian semantic treatment of composition in distributional terms. However, our approach avoids a number of issues that have prevented the application of the earlier linguistically-motivated models to full-fledged, real-life sentences. We test the model on a variety of empirical tasks, showing that it consistently outperforms a set of competitive rivals. 1 Compositional distributional semantics The research of the last two decades has established empirically that distributional vectors for words obtained from corpus statistics can be used to represent word meaning in a variety of tasks (Turney and Pantel, 2010). If distributional vectors encode certain aspects of word meaning, it is natural to expect that similar aspects of sentence meaning can also receive vector representations, obtained compositionally from word vectors. Developing a practical model of compositionality is still an open issue, which we address in this paper. One approach is to use simple, parameterfree models that perform operations such as pointwise multiplication or summing (Mitchell and Lapata, 2008). Such models turn out to be surprisingly effective in practice (Blacoe and Lapata, 2012), but they have obvious limitations. For instance, symmetric operations like vector addition are insensitive to syntactic structure, therefore meaning differences encoded in word order are lost in composition: pandas eat bamboo is identical to bamboo eats pandas. Guevara (2010), Mitchell and Lapata (2010), Socher et al. (2011) and Zanzotto et al. (2010) generalize the simple additive model by applying structure-encoding operators to the vectors of two sister nodes before addition, thus breaking the inherent symmetry of the simple additive model. A related approach (Socher et al., 2012) assumes richer lexical representations where each word is represented with a vector and a matrix that encodes its interaction with its syntactic sister. The training proposed in this model estimates the parameters in a supervised setting. Despite positive empirical evaluation, this approach is hardly practical for generalpurpose semantic language processing, since it requires computationally expensive approximate parameter optimization techniques, and it assumes task-specific parameter learning whose results are not meant to generalize across tasks. 1.1 The lexical function model None of the proposals mentioned above, from simple to elaborate, incorporates in its architecture the intuitive idea (standard in theoretical linguistics) that semantic composition is more than a weighted combination of words. Generally one of the components of a phrase, e.g., an adjective, acts as a function affecting the other component (e.g., a noun). 
This underlying intuition, adopted from formal semantics of natural language, motivated the creation of the lexical function model of composition (lf ) (Baroni and Zamparelli, 2010; Coecke et al., 2010). The lf model can be seen as a projection of the symbolic Montagovian approach to semantic composition in natural language onto the domain of vector spaces and linear operations on them (Baroni et al., 2013). In lf, arguments are vectors and functions taking arguments (e.g., adjectives that combine with nouns) are tensors, with the number of arguments (n) determining the", "title": "" }, { "docid": "ed2ad5cd12eb164a685a60dc0d0d4a06", "text": "Explainable Recommendation refers to the personalized recommendation algorithms that address the problem of why they not only provide users with the recommendations, but also provide explanations to make the user or system designer aware of why such items are recommended. In this way, it helps to improve the effectiveness, efficiency, persuasiveness, and user satisfaction of recommendation systems. In recent years, a large number of explainable recommendation approaches – especially model-based explainable recommendation algorithms – have been proposed and adopted in real-world systems. In this survey, we review the work on explainable recommendation that has been published in or before the year of 2018. We first highlight the position of explainable recommendation in recommender system research by categorizing recommendation problems into the 5W, i.e., what, when, who, where, and why. We then conduct a comprehensive survey of explainable recommendation itself in terms of three aspects: 1) We provide a chronological research line of explanations in recommender systems, including the user study approaches in the early years, as well as the more recent model-based approaches. 2) We provide a taxonomy for explainable recommendation algorithms, including user-based, item-based, model-based, and post-model explanations. 3) We summarize the application of explainable recommendation in different recommendation tasks, including product recommendation, social recommendation, POI recommendation, etc. We devote a section to discuss the explanation perspectives in the broader IR and machine learning settings, as well as their relationship with explainable recommendation research. We end the survey by discussing potential future research directions to promote the explainable recommendation research area. now Publishers Inc.. Explainable Recommendation: A Survey and New Perspectives. Foundations and Trends © in Information Retrieval, vol. XX, no. XX, pp. 1–87, 2018. DOI: 10.1561/XXXXXXXXXX.", "title": "" }, { "docid": "251ab6744b6517c727121ec11a11e515", "text": "This paper presents a qualitative-reasoning method for predicting the behavior of mechanisms characterized by continuous, time-varying parameters. The structure of a mechanism is described in terms of a set of parameters and the constraints that hold among them : essentially a 'qualitative differential equation'. The qualitative-behavior description consists of a discrete set of time-points, at which the values of the parameters are described in terms of ordinal relations and directions of change. The behavioral description, or envisionment, is derived by two sets of rules: propagation rules which elaborate the description of the current time-point, and prediction rules which determine what is known about the next qualitatively distinct state of the mechanism. 
A detailed example shows how the envisionment method can detect a previously unsuspected landmark point at which the system is in stable equilibrium.", "title": "" }, { "docid": "c95c46d75c2ff3c783437100ba06b366", "text": "Co-references are traditionally used when integrating data from different datasets. This approach has various benefits such as fault tolerance, ease of integration and traceability of provenance; however, it often results in the problem of entity consolidation, i.e., of objectively stating whether all the co-references do really refer to the same entity; and, when this is the case, whether they all convey the same intended meaning. Relying on the sole presence of a single equivalence (owl:sameAs) statement is often problematic and sometimes may even cause serious troubles. It has been observed that to indicate the likelihood of an equivalence one could use a numerically weighted measure, but the real hard questions of where precisely will these values come from arises. To answer this question we propose a methodology based on a graph clustering algorithm.", "title": "" }, { "docid": "f9570306e0d115d08cc6e69161955fcf", "text": "Abstract—In this paper, two basic switching cells, P-cell and Ncell, are presented to investigate the topological nature of power electronics circuits. Both cells consist of a switching device and a diode and are the basic building blocks for almost all power electronics circuits. The mirror relationship of the P-cell and Ncell will be revealed. This paper describes the two basic switching cells and shows how all dc-dc converters, voltage source inverters, current source inverters, and multilevel converters are constituted from the two basic cells. Through these two basic cells, great insights about the topology of all power electronics circuits can be obtained for the construction and decomposition of existing power electronic circuits. New power conversion circuits can now be easily derived and invented.", "title": "" }, { "docid": "e3dc44074fe921f4d42135a7e05bf051", "text": "This paper presents a 60 GHz antenna structure built on glass and flip-chipped on a ceramic module. A single antenna and a two antenna array have been fabricated and demonstrated good performances. The single antenna shows a return loss below −10 dB and a gain of 6–7 dBi over a 7 GHz bandwidth. The array shows a gain of 7–8 dBi over a 3 GHz bandwidth.", "title": "" }, { "docid": "e4d4a77d7b5ecfaf7450f5b82fe92d17", "text": "INTRODUCTION The Information Technology – Business Process Outsourcing (IT-BPO) Industry is one of the most dynamic emerging sectors of the Philippines. It has expanded widely and it exhibits great dynamism, but people have the notion that the BPO industry is solely comprised of call centers, when it is actually more diverse with back-offices, knowledge process outsourcing, software design and engineering, animation, game development, as well as transcription. These sub-sectors are still small in terms of the number of establishments, companies and employees, but they are growing steadily, supported by several government programs and industry associations. Given such support and in addition, the technology-intensive nature of the sector, the ITBPO industry could significantly shape the future of the services industry of the Philippines.", "title": "" } ]
scidocsrr
7fd7d4d2d0e478250dc077ef51fc399b
Fast and efficient implementation of Convolutional Neural Networks on FPGA
[ { "docid": "8999e010ddbc0aa7ef579d8a9e055769", "text": "Convolutional Neural Networks (CNNs) have gained popularity in many computer vision applications such as image classification, face detection, and video analysis, because of their ability to train and classify with high accuracy. Due to multiple convolution and fully-connected layers that are compute-/memory-intensive, it is difficult to perform real-time classification with low power consumption on today's computing systems. FPGAs have been widely explored as hardware accelerators for CNNs because of their reconfigurability and energy efficiency, as well as fast turn-around time, especially with high-level synthesis methodologies. Previous FPGA-based CNN accelerators, however, typically implemented generic accelerators agnostic to the CNN configuration, where the reconfigurable capabilities of FPGAs are not fully leveraged to maximize the overall system throughput. In this work, we present a systematic design space exploration methodology to maximize the throughput of an OpenCL-based FPGA accelerator for a given CNN model, considering the FPGA resource constraints such as on-chip memory, registers, computational resources and external memory bandwidth. The proposed methodology is demonstrated by optimizing two representative large-scale CNNs, AlexNet and VGG, on two Altera Stratix-V FPGA platforms, DE5-Net and P395-D8 boards, which have different hardware resources. We achieve a peak performance of 136.5 GOPS for convolution operation, and 117.8 GOPS for the entire VGG network that performs ImageNet classification on P395-D8 board.", "title": "" } ]
[ { "docid": "02447ce33a1fa5f8b4f156abf5d2f746", "text": "In this paper, we present TeleHuman, a cylindrical 3D display portal for life-size human telepresence. The TeleHuman 3D videoconferencing system supports 360 degree motion parallax as the viewer moves around the cylinder and optionally, stereoscopic 3D display of the remote person. We evaluated the effect of perspective cues on the conveyance of nonverbal cues in two experiments using a one-way telecommunication version of the system. The first experiment focused on how well the system preserves gaze and hand pointing cues. The second experiment evaluated how well the system conveys 3D body postural information. We compared 3 perspective conditions: a conventional 2D view, a 2D view with 360 degree motion parallax, and a stereoscopic view with 360 degree motion parallax. Results suggest the combined presence of motion parallax and stereoscopic cues significantly improved the accuracy with which participants were able to assess gaze and hand pointing cues, and to instruct others on 3D body poses. The inclusion of motion parallax and stereoscopic cues also led to significant increases in the sense of social presence and telepresence reported by participants.", "title": "" }, { "docid": "078287ad3a2f4794b38e6f6e24c676cd", "text": "Odontomas, benign tumors that develop in the jaw, rarely erupt into the oral cavity. We report an erupted odontoma which delayed eruption of the first molar. The patient was a 10-year-old Japanese girl who came to our hospital due to delayed eruption of the right maxillary first molar. All the deciduous teeth had been shed. The second premolar on the right side had erupted, but not the first molar. Slight inflammation of the alveolar mucosa around the first molar had exposed a tooth-like, hard tissue. Panoramic radiography revealed a radiopaque mass indicating a lesion approximately 1 cm in diameter. The border of the image was clear, and part of the mass was situated close to the occlusal surface of the first molar. The root of the maxillary right first molar was only half-developed. A clinical diagnosis of odontoma was made. The odontoma was subsequently extracted, allowing the crown of the first molar to erupt almost 5 months later. The dental germ of the permanent tooth had been displaced by the odontoma. However, after the odontoma had been extracted, the permanent tooth was still able to erupt spontaneously, as eruptive force still remained. When the eruption of a tooth is significantly delayed, we believe that it is necessary to examine the area radiographically. If there is any radiographic evidence of a physical obstruction that might delay eruption, that obstruction should be removed before any problems can arise. Regular dental checkups at schools might improve our ability to detect evidence of delayed eruption earlier.", "title": "" }, { "docid": "1c6e591999cd8b0eff7a637bf0753927", "text": "The last few years have seen the emergence of several Open Access (OA) options in scholarly communication, which can broadly be grouped into two areas referred to as gold and green roads. Several recent studies showed how big the extent of OA is, but there have been few studies showing impact of OA in the visibility of journals covering all scientific fields and geographical regions. This research shows the extent of OA from the perspective of the journals indexed in Scopus, as well as influence on visibility, in view of the various geographic and thematic distributions. 
The results show that in all the disciplinary groups the presence of green road journals widely surpasses the percentage of gold road publications. The peripheral and emerging regions have greater proportions of gold road journals. These journals pertain for the 2 most part to the last quartile. The benefits of open access on visibility of the journals are to be found on the green route, but paradoxically this advantage is not lent by the OA per se, but rather of the quality of the articles/journals themselves, regardless of their mode of access.", "title": "" }, { "docid": "a0c78ea9b045b492eacdc0e87f01c6e7", "text": "We describe a graphical logical form as a semantic representation for text understanding. This representation was designed to bridge the gap between highly expressive “deep” representations of logical forms and more shallow semantic encodings such as word senses and semantic relations. We also present an evaluation metric for the representation and report on the current performance on the TRIPS parser on the common task paragraphs.", "title": "" }, { "docid": "db89d618c127dbf45cac1062ae5117ab", "text": "A language-independent means of gauging topical similarity in unrestricted text is described. The method combines information derived from n-grams (consecutive sequences of n characters) with a simple vector-space technique that makes sorting, categorization, and retrieval feasible in a large multilingual collection of documents. No prior information about document content or language is required. Context, as it applies to document similarity, can be accommodated by a well-defined procedure. When an existing document is used as an exemplar, the completeness and accuracy with which topically related documents are retrieved is comparable to that of the best existing systems. The results of a formal evaluation are discussed, and examples are given using documents in English and Japanese.", "title": "" }, { "docid": "be91ec9b4f017818f32af09cafbb2a9a", "text": "Brainard et al. 2 INTRODUCTION Object recognition is difficult because there is no simple relation between an object's properties and the retinal image. Where the object is located, how it is oriented, and how it is illuminated also affect the image. Moreover, the relation is under-determined: multiple physical configurations can give rise to the same retinal image. In the case of object color, the spectral power distribution of the light reflected from an object depends not only on the object's intrinsic surface reflectance but also factors extrinsic to the object, such as the illumination. The relation between intrinsic reflectance, extrinsic illumination, and the color signal reflected to the eye is shown schematically in Figure 1. The light incident on a surface is characterized by its spectral power distribution E(λ). A small surface element reflects a fraction of the incident illuminant to the eye. The surface reflectance function S(λ) specifies this fraction as a function of wavelength. The spectrum of the light reaching the eye is called the color signal and is given by C(λ) = E(λ)S(λ). Information about C(λ) is encoded by three classes of cone photoreceptors, the L-, M-, and Scones. The top two patches rendered in Plate 1 illustrate the large effect that a typical change in natural illumination (see Wyszecki and Stiles, 1982) can have on the color signal. 
This effect might lead us to expect that the color appearance of objects should vary radically, depending as much on the current conditions of illumination as on the object's surface reflectance. Yet the very fact that we can sensibly refer to objects as having a color indicates otherwise. Somehow our visual system stabilizes the color appearance of objects against changes in illumination, a perceptual effect that is referred to as color constancy. Because the illumination is the most salient object-extrinsic factor that affects the color signal, it is natural that emphasis has been placed on understanding how changing the illumination affects object color appearance. In a typical color constancy experiment, the independent variable is the illumination and the dependent variable is a measure of color appearance experiments employ different stimulus configurations and psychophysical tasks, but taken as a whole they support the view that human vision exhibits a reasonable degree of color constancy. Recall that the top two patches of Plate 1 illustrate the limiting case where a single surface reflectance is seen under multiple illuminations. Although this …", "title": "" }, { "docid": "a82621609fe26cf19d7d777f6b6f0600", "text": "A standard mode of inference in social and behavioral science is to establish stylized facts using statistical significance in quantitative studies. However, in a world in which measurements are noisy and effects are small, this will not work: selection on statistical significance leads to effect sizes which are overestimated and often in the wrong direction. After a brief discussion of two examples, one in economics and one in social psychology, we consider the procedural solution of open postpublication review, the design solution of devoting more effort to accurate measurements and within-person comparisons, and the statistical analysis solution of multilevel modeling and reporting all results rather than selection on significance. We argue that the current replication crisis in science arises in part from the ill effects of null hypothesis significance testing being used to study small effects with noisy data. In such settings, apparent success comes easy but truly replicable results require a more serious connection between theory, measurement, and data.", "title": "" }, { "docid": "5a355c69e7f8e4248a63ef83b06b7095", "text": "The interior permanent magnet (IPM) machine equipped with a fractional-slot concentrated winding (FSCW) has met an increasing interest in electric vehicle applications due to its higher power density and efficiency. Torque production is due to both PM and reluctance torques. However, one of the main challenges of FSCWs is their inability to produce a high-quality magnetomotive force (MMF) distribution, yielding undesirable rotor core and magnet eddy-current losses. Literature shows that the reduction of low-order space harmonics significantly reduces these loss components. Moreover, it has been previously shown that by employing a higher number of layers, although causing some reduction in the winding factor of the torque-producing MMF component, both machine saliency and reluctance torque components are improved. Recently, a dual three-phase winding connected in a star/delta connection has also shown promise to concurrently enhance machine torque of a surface-mounted PM machine while significantly reducing both rotor core and magnet losses. 
In this paper, a multilayer winding configuration and a dual three-phase winding connection are combined and applied to the well-known 12-slot/10-pole IPM machine with v-shaped magnets. The proposed winding layout is compared with a conventional double-layer winding, a dual three-phase double-layer winding, and a four-layer winding. The comparison is carried out using 2-D finite-element analysis. The comparison shows that the proposed winding layout, while providing similar output torque to a conventional double-layer three-phase winding, offers a significant reduction in core and magnet losses, correspondingly a higher efficiency, improves the machine saliency ratio, and maximizes the reluctance toque component.", "title": "" }, { "docid": "43071b49420f14d9c2affe3c12e229ae", "text": "The Gatekeeper is a vision-based door security system developed at the MIT Artificial Intelligence Laboratory. Faces are detected in a real-time video stream using an efficient algorithmic approach, and are recognized using principal component analysis with class specific linear projection. The system sends commands to an automatic sliding door, speech synthesizer, and touchscreen through a multi-client door control server. The software for the Gatekeeper was written using a set of tools created by the author to facilitate the development of real-time machine vision applications in Matlab, C, and Java.", "title": "" }, { "docid": "e2a9bb49fd88071631986874ea197bc1", "text": "We consider the class of iterative shrinkage-thresholding algorithms (ISTA) for solving linear inverse problems arising in signal/image processing. This class of methods, which can be viewed as an extension of the classical gradient algorithm, is attractive due to its simplicity and thus is adequate for solving large-scale problems even with dense matrix data. However, such methods are also known to converge quite slowly. In this paper we present a new fast iterative shrinkage-thresholding algorithm (FISTA) which preserves the computational simplicity of ISTA but with a global rate of convergence which is proven to be significantly better, both theoretically and practically. Initial promising numerical results for wavelet-based image deblurring demonstrate the capabilities of FISTA which is shown to be faster than ISTA by several orders of magnitude.", "title": "" }, { "docid": "c1305b1ccc199126a52c6a2b038e24d1", "text": "This study has devoted much effort to developing an integrated model designed to predict and explain an individual’s continued use of online services based on the concepts of the expectation disconfirmation model and the theory of planned behavior. Empirical data was collected from a field survey of Cyber University System (CUS) users to verify the fit of the hypothetical model. The measurement model indicates the theoretical constructs have adequate reliability and validity while the structured equation model is illustrated as having a high model fit for empirical data. Study’s findings show that a customer’s behavioral intention towards e-service continuance is mainly determined by customer satisfaction and additionally affected by perceived usefulness and subjective norm. Generally speaking, the integrated model can fully reflect the spirit of the expectation disconfirmation model and take advantage of planned behavior theory. 
After consideration of the impact of systemic features, personal characteristics, and social influence on customer behavior, the integrated model had a better explanatory advantage than other EDM-based models proposed in prior research. 2006 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "8b390d8e68a90c5fea67ffd2c6f61a6a", "text": "Today, online communities in the World Wide Web become increasingly interactive and networked. Web 2.0 technologies provide a multitude of platforms, such as blogs, wikis, and forums where for example consumers can disseminate data about products and manufacturers. This data provides an abundance of information on personal experiences and opinions which are extremely relevant for companies and sales organizations. Subjects of postings can be partly retrieved by state of the art text mining techniques. A much more challenging task is to detect factors influencing the evolvement of opinions within the social network. For such a kind of trend scouting you have to take into account the relationships among the community members. Social network analysis helps to explain social behavior of linked persons by providing quantitative measures of social interactions. A new approach based on social network analysis is presented, which allows detecting opinion leaders and opinion trends. This leads a better understanding of opinion formation. The overall concept based on text mining and social network analysis is introduced. An example is given which illustrates the analysis process.", "title": "" }, { "docid": "03d33ceac54b501c281a954e158d0224", "text": "HoloDesk is an interactive system combining an optical see through display and Kinect camera to create the illusion that users are directly interacting with 3D graphics. A virtual image of a 3D scene is rendered through a half silvered mirror and spatially aligned with the real-world for the viewer. Users easily reach into an interaction volume displaying the virtual image. This allows the user to literally get their hands into the virtual display and to directly interact with an spatially aligned 3D virtual world, without the need for any specialized head-worn hardware or input device. We introduce a new technique for interpreting raw Kinect data to approximate and track rigid (e.g., books, cups) and non-rigid (e.g., hands, paper) physical objects and support a variety of physics-inspired interactions between virtual and real. In particular the algorithm models natural human grasping of virtual objects with more fidelity than previously demonstrated. A qualitative study highlights rich emergent 3D interactions, using hands and real-world objects. The implementation of HoloDesk is described in full, and example application scenarios explored. Finally, HoloDesk is quantitatively evaluated in a 3D target acquisition task, comparing the system with indirect and glasses-based variants.", "title": "" }, { "docid": "bda9e81fb75544788401864c27356ab4", "text": "The design of a simple planar polarization reconfigurable monopole antenna for the Global Navigation Satellite System (GNSS) and Personal Communications System (PCS) is presented. The antenna consists of two meandered monopoles, a feeding network using the Wilkinson power divider, two switchable 90 0-phase shifters implemented using λ/4-microstrip lines and a defected ground structure (DGS). The meandered monopoles resonating at about 1.55 GHz are placed perpendicular to each other. 
The input signal is divided into two signals with equal amplitude and phase by the power divider and fed to the meandered monopoles via the phase shifters. The two signals arriving at the two monopoles have a phase difference of 90 0, -900 or 0 0, depending on the phase shifters controlled using six PIN-diode switches, hence generating a right/left-handed circularly polarized (CP) or linearly polarized (LP) signal. We propose a novel biasing technique to control the six PIN diodes using five voltages. Measurement results show that the antenna in CP has an impedance bandwidth of 1.06-1.64 GHz and an axial-ratio bandwidth of 1.43-1.84 GHz, and in LP has an impedance bandwidth of 1.63-1.88 GHz. Simulated and measured results on S11, AR, radiation pattern, and gains show good agreements.", "title": "" }, { "docid": "5a77d1bedb3599d0f4c20e5b59fbe8dd", "text": "To alleviate the expensive cost of data collection and annotation, many self-supervised learning methods were proposed to learn image representations without humanlabeled annotations. However, self-supervised learning for video representations is not yet well-addressed. In this paper, we propose a novel 3DConvNet-based fully selfsupervised framework to learn spatiotemporal video features without using any human-labeled annotations. First, a set of pre-designed geometric transformations (e.g. rotating 0◦, 90◦, 180◦, and 270◦) are applied to each video. Then a pretext task can be defined as ”recognizing the predesigned geometric transformations.” Therefore, the spatiotemporal video features can be learned in the process of accomplishing this pretext task without using humanlabeled annotations. The learned spatiotemporal video representations can further be employed as pre-trained features for different video-related applications. The proposed geometric transformations (e.g. rotations) are proved to be effective to learn representative spatiotemporal features in our 3DConvNet-based fully self-supervised framework. With the pre-trained spatiotemporal features from two large video datasets, the performance of action recognition is significantly boosted up by 20.4% on UCF101 dataset and 16.7% on HMDB51 dataset respectively compared to that from the model trained from scratch. Furthermore, our framework outperforms the state-of-the-arts of fully self-supervised methods on both UCF101 and HMDB51 datasets and achieves 62.9% and 33.7% accuracy respectively.", "title": "" }, { "docid": "347ffb664378b56a5ae3a45d1251d7b7", "text": "We present Essentia 2.0, an open-source C++ library for audio analysis and audio-based music information retrieval released under the Affero GPL license. It contains an extensive collection of reusable algorithms which implement audio input/output functionality, standard digital signal processing blocks, statistical characterization of data, and a large set of spectral, temporal, tonal and high-level music descriptors. The library is also wrapped in Python and includes a number of predefined executable extractors for the available music descriptors, which facilitates its use for fast prototyping and allows setting up research experiments very rapidly. Furthermore, it includes a Vamp plugin to be used with Sonic Visualiser for visualization purposes. The library is cross-platform and currently supports Linux, Mac OS X, and Windows systems. Essentia is designed with a focus on the robustness of the provided music descriptors and is optimized in terms of the computational cost of the algorithms. 
The provided functionality, specifically the music descriptors included in-the-box and signal processing algorithms, is easily expandable and allows for both research experiments and development of large-scale industrial applications.", "title": "" }, { "docid": "e6b9a05ecc3fd48df50aa769ce05b6a6", "text": "This paper presents an interactive exoskeleton device for hand rehabilitation, iHandRehab, which aims to satisfy the essential requirements for both active and passive rehabilitation motions. iHandRehab is comprised of exoskeletons for the thumb and index finger. These exoskeletons are driven by distant actuation modules through a cable/sheath transmission mechanism. The exoskeleton for each finger has 4 degrees of freedom (DOF), providing independent control for all finger joints. The joint motion is accomplished by a parallelogram mechanism so that the joints of the device and their corresponding finger joints have the same angular displacement when they rotate. Thanks to this design, the joint angles can be measured by sensors real time and high level motion control is therefore made very simple without the need of complicated kinematics. The paper also discusses important issues when the device is used by different patients, including its adjustable joint range of motion (ROM) and adjustable range of phalanx length (ROPL). Experimentally collected data show that the achieved ROM is close to that of a healthy hand and the ROPL covers the size of a typical hand, satisfying the size need of regular hand rehabilitation. In order to evaluate the performance when it works as a haptic device in active mode, the equivalent moment of inertia (MOI) of the device is calculated. The results prove that the device has low inertia which is critical in order to obtain good backdrivability. Experimental analysis shows that the influence of friction accounts for a large portion of the driving torque and warrants future investigation.", "title": "" }, { "docid": "555ad97bfc0df507ecc7b737c74c60be", "text": "Vehicle-to-grid power (V2G) uses electric-drive vehicles (battery, fuel cell, or hybrid) to provide power for specific electric markets. This article examines the systems and processes needed to tap energy in vehicles and implement V2G. It quantitatively compares today’s light vehicle fleet with the electric power system. The vehicle fleet has 20 times the power capacity, less than one-tenth the utilization, and one-tenth the capital cost per prime mover kW. Conversely, utility generators have 10–50 times longer operating life and lower operating costs per kWh. T id manager. T er the initial h calculations s ind, plus 8 identified. ©", "title": "" }, { "docid": "7228ebec1e9ffddafab50e3ac133ebad", "text": "Building robust low and mid-level image representations, beyond edge primitives, is a long-standing goal in vision. Many existing feature detectors spatially pool edge information which destroys cues such as edge intersections, parallelism and symmetry. We present a learning framework where features that capture these mid-level cues spontaneously emerge from image data. Our approach is based on the convolutional decomposition of images under a spar-sity constraint and is totally unsupervised. 
By building a hierarchy of such decompositions we can learn rich feature sets that are a robust image representation for both the analysis and synthesis of images.", "title": "" }, { "docid": "697587c98d882534a3ff120922ffb137", "text": "The MapReduce platform has been widely used for large-scale data processing and analysis recently. It works well if the hardware of a cluster is well configured. However, our survey has indicated that common hardware configurations in small-and medium-size enterprises may not be suitable for such tasks. This situation is more challenging for memory-constrained systems, in which the memory is a bottleneck resource compared with the CPU power and thus does not meet the needs of large-scale data processing. The traditional high performance computing (HPC) system is an example of the memory-constrained system according to our survey. In this paper, we have developed Mammoth, a new MapReduce system, which aims to improve MapReduce performance using global memory management. In Mammoth, we design a novel rule-based heuristic to prioritize memory allocation and revocation among execution units (mapper, shuffler, reducer, etc.), to maximize the holistic benefits of the Map/Reduce job when scheduling each memory unit. We have also developed a multi-threaded execution engine, which is based on Hadoop but runs in a single JVM on a node. In the execution engine, we have implemented the algorithm of memory scheduling to realize global memory management, based on which we further developed the techniques such as sequential disk accessing, multi-cache and shuffling from memory, and solved the problem of full garbage collection in the JVM. We have conducted extensive experiments to compare Mammoth against the native Hadoop platform. The results show that the Mammoth system can reduce the job execution time by more than 40 percent in typical cases, without requiring any modifications of the Hadoop programs. When a system is short of memory, Mammoth can improve the performance by up to 5.19 times, as observed for I/O intensive applications, such as PageRank. We also compared Mammoth with Spark. Although Spark can achieve better performance than Mammoth for interactive and iterative applications when the memory is sufficient, our experimental results show that for batch processing applications, Mammoth can adapt better to various memory environments and outperform Spark when the memory is insufficient, and can obtain similar performance as Spark when the memory is sufficient. Given the growing importance of supporting large-scale data processing and analysis and the proven success of the MapReduce platform, the Mammoth system can have a promising potential and impact.", "title": "" } ]
scidocsrr
69cd44f9a56f1ffbc46347ab2d330b6d
Ten Simple Rules for Reproducible Research in Jupyter Notebooks
[ { "docid": "8997bface175b49f1a96c71a6ddb21e1", "text": "Open-source software development has had significant impact, not only on society, but also on scientific research. Papers describing software published as open source are amongst the most widely cited publications (e.g., BLAST [1,2] and Clustal-W [3]), suggesting many scientific studies may not have been possible without some kind of open software to collect observations, analyze data, or present results. It is surprising, therefore, that so few papers are accompanied by open software, given the benefits that this may bring. Publication of the source code you write not only can increase your impact [4], but also is essential if others are to be able to reproduce your results. Reproducibility is a tenet of computational science [5], and critical for pipelines employed in datadriven biological research. Publishing the source for the software you created as well as input data and results allows others to better understand your methodology, and why it produces, or fails to produce, expected results. Public release might not always be possible, perhaps due to intellectual property policies at your or your collaborators’ institutes; and it is important to make sure you know the regulations that apply to you. Open licensing models can be incredibly flexible and do not always prevent commercial software release [5]. Simply releasing the source under an open license, however, is not sufficient if you wish your code to remain useful beyond its publication [6]. The sustainability of software after publication is probably the biggest problem faced by researchers who develop it, and it is here that participating in open development from the outset can make the biggest impact. Grant-based funding is often exhausted shortly after new software is released, and without support, in-house maintenance of the software and the systems it depends on becomes a struggle. As a consequence, the software will cease to work or become unavailable for download fairly quickly [7], which may contravene archival policies stipulated by your journal or funding body. A collaborative and open project allows you to spread the resource and maintenance load to minimize these risks, and significantly contributes to the sustainability of your software. If you have the choice, embracing an open approach to development has tremendous benefits. It allows you to build on the work of other scientists, and enables others to build on your own efforts. To make the development of open scientific software more rewarding and the experience of using software more positive, the following ten rules are intended to serve as a guide for any computational scientist.", "title": "" }, { "docid": "89cb3d192b0439b7e9022837acd19396", "text": "Computational science has led to exciting new developments, but the nature of the work has exposed limitations in our ability to evaluate published findings. Reproducibility has the potential to serve as a minimum standard for judging scientific claims when full independent replication of a study is not possible.", "title": "" }, { "docid": "8be1a6ae2328bbcc2d0265df167ecbb3", "text": "It is increasingly necessary for researchers in all fields to write computer code, and in order to reproduce research results, it is important that this code is published. We present Jupyter notebooks, a document format for publishing code, results and explanations in a form that is both readable and executable. 
We discuss various tools and use cases for notebook documents.", "title": "" }, { "docid": "c61a2c7f2e91d1e759133fc3e2ad734a", "text": "Computers are now essential in all branches of science, but most researchers are never taught the equivalent of basic lab skills for research computing. As a result, data can get lost, analyses can take much longer than necessary, and researchers are limited in how effectively they can work with software and data. Computing workflows need to follow the same practices as lab projects and notebooks, with organized data, documented steps, and the project structured for reproducibility, but researchers new to computing often don't know where to start. This paper presents a set of good computing practices that every researcher can adopt, regardless of their current level of computational skill. These practices, which encompass data management, programming, collaborating with colleagues, organizing projects, tracking work, and writing manuscripts, are drawn from a wide variety of published sources from our daily lives and from our work with volunteer organizations that have delivered workshops to over 11,000 people since 2010.", "title": "" } ]
[ { "docid": "8604589b2c45d6190fdbc50073dfda23", "text": "Many real world, complex phenomena have an underlying structure of evolving networks where nodes and links are added and removed over time. A central scientific challenge is the description and explanation of network dynamics, with a key test being the prediction of short and long term changes. For the problem of short-term link prediction, existing methods attempt to determine neighborhood metrics that correlate with the appearance of a link in the next observation period. Here, we provide a novel approach to predicting future links by applying an evolutionary algorithm (Covariance Matrix Evolution) to weights which are used in a linear combination of sixteen neighborhood and node similarity indices. We examine reciprocal reply networks of Twitter users constructed at the time scale of weeks, both as a test of our general method and as a problem of scientific interest in itself. Our evolved predictors exhibit a thousand-fold improvement over random link prediction, to our knowledge strongly outperforming all extant methods. Based on our findings, we suggest possible factors which may be driving the evolution of Twitter reciprocal reply networks.", "title": "" }, { "docid": "140a9255e8ee104552724827035ee10a", "text": "Our goal is to design architectures that retain the groundbreaking performance of CNNs for landmark localization and at the same time are lightweight, compact and suitable for applications with limited computational resources. To this end, we make the following contributions: (a) we are the first to study the effect of neural network binarization on localization tasks, namely human pose estimation and face alignment. We exhaustively evaluate various design choices, identify performance bottlenecks, and more importantly propose multiple orthogonal ways to boost performance. (b) Based on our analysis, we propose a novel hierarchical, parallel and multi-scale residual architecture that yields large performance improvement over the standard bottleneck block while having the same number of parameters, thus bridging the gap between the original network and its binarized counterpart. (c) We perform a large number of ablation studies that shed light on the properties and the performance of the proposed block. (d) We present results for experiments on the most challenging datasets for human pose estimation and face alignment, reporting in many cases state-of-the-art performance. Code can be downloaded from https://www.adrianbulat.com/binary-cnn-landmarks", "title": "" }, { "docid": "4c1060bf3e7d01f817e6ce84d1d6fac0", "text": "1668 The smaller the volume (or share) of imports from the trading partner, the larger the impact of a preferential trade agreement on home country welfare—because the smaller the imports, the smaller the loss in tariff revenue. And the home country is better off as a small member of a large bloc than as a large member of a small bloc. Summary findings There has been a resurgence of preferential trade agreements (PTAs) partly because of the deeper European integration known as EC-92, which led to a fear of a Fortress Europe; and partly because of the U.S. decision to form a PTA with Canada. As a result, there has been a domino effect: a proliferation of PTAs, which has led to renewed debate about how PTAs affect both welfare and the multilateral system. Schiff examines two issues: the welfare impact of preferential trade agreements (PTAs) and the effect of structural and policy changes on PTAs. 
He asks how the PTA's effect on home-country welfare is affected by higher demand for imports; the efficiency of production of the partner or rest of the world (ROW); the share imported from the partner (ROW); and the initial protection on imports from the partner (ROW). Among his findings: • An individual country benefits more from a PTA if it imports less from its partner countries (with imports measured either in volume or as a share of total imports). This result has important implications for choice of partners. • A small home country loses from forming a free trade agreement (FTA) with a small partner country but gains from forming one with the rest of the world. In other words, the home country is better off as a small member of a large bloc than as a large member of a small bloc. This result need not hold if smuggling is a factor. • Home country welfare after formation of a FTA is higher when imports from the partner country are smaller, whether the partner country is large or small. Welfare worsens as imports from the partner country increase. • In general, a PTA is more beneficial (or less harmful) for a country with lower import demand. A PTA is also more beneficial for a country with a more efficient import-substituting sector, as this will result in a lower demand for imports. • A small country may gain from forming a PTA when smuggling …", "title": "" }, { "docid": "d35cac8677052d0371d2863d54a59597", "text": "A high-power short-pulse generator based on the diode step recovery phenomenon and high repetition rate discharges in a two-electrode gas discharge tube is presented. The proposed circuit is simple and low cost and driven by a low-power source. A full analysis of this generator is presented which, considering the nonlinear behavior of the gas tube, predicts the waveform of the output pulse. The proposed method has been shown to work properly by implementation of a kW-range prototype. Experimental measurements of the output pulse characteristics showed a rise time of 3.5 ns, with pulse repetition rate of 2.3 kHz for a 47- $\\Omega $ load. The input peak power was 2.4 W, which translated to about 0.65-kW output, showing more than 270 times increase in the pulse peak power. The efficiency of the prototype was 57%. The overall price of the employed components in the prototype was less than U.S. $2.0. An excellent agreement between the analytical and experimental test results was established. The analysis predicts that the proposed circuit can generate nanosecond pulses with more than 100-kW peak powers by using a subkW power supply.", "title": "" }, { "docid": "5c50099c8a4e638736f430e3b5622b1d", "text": "BACKGROUND\nAccording to the existential philosophers, meaning, purpose and choice are necessary for quality of life. Qualitative researchers exploring the perspectives of people who have experienced health crises have also identified the need for meaning, purpose and choice following life disruptions. Although espousing the importance of meaning in occupation, occupational therapy theory has been primarily preoccupied with purposeful occupations and thus appears inadequate to address issues of meaning within people's lives.\n\n\nPURPOSE\nThis paper proposes that the fundamental orientation of occupational therapy should be the contributions that occupation makes to meaning in people's lives, furthering the suggestion that occupation might be viewed as comprising dimensions of meaning: doing, being, belonging and becoming. 
Drawing upon perspectives and research from philosophers, social scientists and occupational therapists, this paper will argue for a renewed understanding of occupation in terms of dimensions of meaning rather than as divisible activities of self-care, productivity and leisure.\n\n\nPRACTICE IMPLICATIONS\nFocusing on meaningful, rather than purposeful occupations more closely aligns the profession with its espoused aspiration to enable the enhancement of quality of life.", "title": "" }, { "docid": "56287b9aea445b570aa7fe77f1b7751a", "text": "Unsupervised machine translation—i.e., not assuming any cross-lingual supervision signal, whether a dictionary, translations, or comparable corpora—seems impossible, but nevertheless, Lample et al. (2018a) recently proposed a fully unsupervised machine translation (MT) model. The model relies heavily on an adversarial, unsupervised alignment of word embedding spaces for bilingual dictionary induction (Conneau et al., 2018), which we examine here. Our results identify the limitations of current unsupervised MT: unsupervised bilingual dictionary induction performs much worse on morphologically rich languages that are not dependent marking, when monolingual corpora from different domains or different embedding algorithms are used. We show that a simple trick, exploiting a weak supervision signal from identical words, enables more robust induction, and establish a near-perfect correlation between unsupervised bilingual dictionary induction performance and a previously unexplored graph similarity metric.", "title": "" }, { "docid": "c5bc0cd14aa51c24a00107422fc8ca10", "text": "This paper proposes a new high-voltage Pulse Generator (PG), fed from low voltage dc supply Vs. This input supply voltage is utilized to charge two arms of N series-connected modular multilevel converter sub-module capacitors sequentially through a resistive-inductive branch, such that each arm is charged to NVS. With a step-up nano-crystalline transformer of n turns ratio, the proposed PG is able to generate bipolar rectangular pulses of peak ±nNVs, at high repetition rates. However, equal voltage-second area of consecutive pulse pair polarities should be assured to avoid transformer saturation. Not only symmetrical pulses can be generated, but also asymmetrical pulses with equal voltage-second areas are possible. The proposed topology is tested via simulations and a scaled-down experimentation, which establish the viability of the topology for water treatment applications.", "title": "" }, { "docid": "d30b28679196feade4874e036ecd68b8", "text": "Domain adaptation (DA) aims to generalize a learning model across training and testing data despite the mismatch of their data distributions. In light of a theoretical estimation of upper error bound, we argue in this paper that an effective DA method should 1) search a shared feature subspace where source and target data are not only aligned in terms of distributions as most state of the art DA methods do, but also discriminative in that instances of different classes are well separated; 2) account for the geometric structure of the underlying data manifold when inferring data labels on the target domain. 
In comparison with a baseline DA method which only cares about data distribution alignment between source and target, we derive three different DA models, namely CDDA, GA-DA, and DGA-DA, to highlight the contribution of Close yet Discriminative DA(CDDA) based on 1), Geometry Aware DA (GA-DA) based on 2), and finally Discriminative and Geometry Aware DA (DGA-DA) implementing jointly 1) and 2). Using both synthetic and real data, we show the effectiveness of the proposed approach which consistently outperforms state of the art DA methods over 36 image classification DA tasks through 6 popular benchmarks. We further carry out in-depth analysis of the proposed DA method in quantifying the contribution of each term of our DA model and provide insights into the proposed DA methods in visualizing both real and synthetic data. Index Terms Domain adaptation, Transfer Learning, Visual classification, Discriminative learning, Data distribution matching, Data manifold geometric structure alignment.", "title": "" }, { "docid": "20ed67f3f410c3be15c0cabefa4effd8", "text": "The content selection component of a natural language generation system decides which information should be communicated in its output. We use information from reports on the game of cricket. We first describe a simple factoid-to-text alignment algorithm then treat content selection as a collective classification problem and demonstrate that simple ‘grouping’ of statistics at various levels of granularity yields substantially improved results over a probabilistic baseline. We additionally show that holding back of specific types of input data, and linking database structures with commonality further increase performance.", "title": "" }, { "docid": "e40ac3775c0891951d5f375c10928ca0", "text": "The present study investigates the role of process and social oriented smartphone usage, emotional intelligence, social stress, self-regulation, gender, and age in relation to habitual and addictive smartphone behavior. We conducted an online survey among 386 respondents. The results revealed that habitual smartphone use is an important contributor to addictive smartphone behavior. Process related smartphone use is a strong determinant for both developing habitual and addictive smartphone behavior. People who extensively use their smartphones for social purposes develop smartphone habits faster, which in turn might lead to addictive smartphone behavior. We did not find an influence of emotional intelligence on habitual or addictive smartphone behavior, while social stress positively influences addictive smartphone behavior, and a failure of self-regulation seems to cause a higher risk of addictive smartphone behavior. Finally, men experience less social stress than women, and use their smartphones less for social purposes. The result is that women have a higher chance in developing habitual or addictive smartphone behavior. Age negatively affects process and social usage, and social stress. There is a positive effect on self-regulation. Older people are therefore less likely to develop habitual or addictive smartphone behaviors. 2015 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "a9e30e02bcbac0f117820d21bf9941da", "text": "The question of how identity is affected when diagnosed with dementia is explored in this capstone thesis. With the rise of dementia diagnoses (Goldstein-Levitas, 2016) there is a need for understanding effective approaches to care as emotional components remain intact. 
The literature highlights the essence of personhood and how person-centered care (PCC) is essential to preventing isolation and impacting a sense of self and well-being (Killick, 2004). Meeting spiritual needs in the sense of hope and purpose may also improve quality of life and delay symptoms. Dance/movement therapy (DMT) is specifically highlighted as an effective approach as sessions incorporate the components to physically, emotionally, and spiritually stimulate the individual with dementia. A DMT intervention was developed and implemented at an assisted living facility in the Boston area within a specific unit dedicated to the care of residents who had a primary diagnosis of mild to severe dementia. A Chacian framework is used with sensory stimulation techniques to address physiological needs. Results indicated positive experiences from observations and merited the need to conduct more research to credit DMT’s effectiveness with geriatric populations.", "title": "" }, { "docid": "43b0358c4d3fec1dd58600847bf0c1b8", "text": "The transformative promises and potential of Big and Open Data are substantial for e-government services, openness and transparency, governments, and the interaction between governments, citizens, and the business sector. From “smart” government to transformational government, Big and Open Data can foster collaboration; create real-time solutions to challenges in agriculture, health, transportation, and more; promote greater openness; and usher in a new era of policyand decision-making. There are, however, a range of policy challenges to address regarding Big and Open Data, including access and dissemination; digital asset management, archiving and preservation; privacy; and security. After presenting a discussion of the open data policies that serve as a foundation for Big Data initiatives, this paper examines the ways in which the current information policy framework fails to address a number of these policy challenges. It then offers recommendations intended to serve as a beginning point for a revised policy framework to address significant issues raised by the U.S. government’s engagement in Big Data efforts.", "title": "" }, { "docid": "bb74f46ee60239073fec5d3b74b41758", "text": "OBJECTIVE\nTo demonstrate that virtual reality (VR) training transfers technical skills to the operating room (OR) environment.\n\n\nSUMMARY BACKGROUND DATA\nThe use of VR surgical simulation to train skills and reduce error risk in the OR has never been demonstrated in a prospective, randomized, blinded study.\n\n\nMETHODS\nSixteen surgical residents (PGY 1-4) had baseline psychomotor abilities assessed, then were randomized to either VR training (MIST VR simulator diathermy task) until expert criterion levels established by experienced laparoscopists were achieved (n = 8), or control non-VR-trained (n = 8). All subjects performed laparoscopic cholecystectomy with an attending surgeon blinded to training status. Videotapes of gallbladder dissection were reviewed independently by two investigators blinded to subject identity and training, and scored for eight predefined errors for each procedure minute (interrater reliability of error assessment r > 0.80).\n\n\nRESULTS\nNo differences in baseline assessments were found between groups. Gallbladder dissection was 29% faster for VR-trained residents. 
Non-VR-trained residents were nine times more likely to transiently fail to make progress (P <.007, Mann-Whitney test) and five times more likely to injure the gallbladder or burn nontarget tissue (chi-square = 4.27, P <.04). Mean errors were six times less likely to occur in the VR-trained group (1.19 vs. 7.38 errors per case; P <.008, Mann-Whitney test).\n\n\nCONCLUSIONS\nThe use of VR surgical simulation to reach specific target criteria significantly improved the OR performance of residents during laparoscopic cholecystectomy. This validation of transfer of training skills from VR to OR sets the stage for more sophisticated uses of VR in assessment, training, error reduction, and certification of surgeons.", "title": "" }, { "docid": "a3628ca53dfbe7b3e10593cc361cdaac", "text": "In order to ensure the safe supply of the drinking water the quality needs to be monitor in real time. In this paper we present a design and development of a low cost system for real time monitoring of the water quality in IOT(internet of things).the system consist of several sensors is used to measuring physical and chemical parameters of the water. The parameters such as temperature, PH, turbidity, conductivity, dissolved oxygen of the water can be measured. The measured values from the sensors can be processed by the core controller. The raspberry PI B+ model can be used as a core controller. Finally, the sensor data can be viewed on internet using cloud computing.", "title": "" }, { "docid": "8d176debd26505d424dcbf8f5cfdb4d1", "text": "We present a system for training deep neural networks for object detection using synthetic images. To handle the variability in real-world data, the system relies upon the technique of domain randomization, in which the parameters of the simulator-such as lighting, pose, object textures, etc.-are randomized in non-realistic ways to force the neural network to learn the essential features of the object of interest. We explore the importance of these parameters, showing that it is possible to produce a network with compelling performance using only non-artistically-generated synthetic data. With additional fine-tuning on real data, the network yields better performance than using real data alone. This result opens up the possibility of using inexpensive synthetic data for training neural networks while avoiding the need to collect large amounts of hand-annotated real-world data or to generate high-fidelity synthetic worlds-both of which remain bottlenecks for many applications. The approach is evaluated on bounding box detection of cars on the KITTI dataset.", "title": "" }, { "docid": "68c1b4e7e881bce5f72bfd8bfeb09acb", "text": "Objective. To study the efficacy and safety of pregabalin (Lyrica) in the complex therapy of opioid withdrawal syndrome (OWS). Materials and methods. The study design was a randomized, symptom-controlled, simple, blind study with active controls. A total of 34 patients with OWS were randomized to two groups. Patients of group 1 (19 subjects) received pregabalin at a dose of up to 600 mg/day as the main substance for treating OWS, in combination with symptomatic treatment (basal and symptom-triggered). Patients of group 2 (15 subjects) received clonidine (Clofelin, up to 600 mg/day) as the main treatment agent, in combination with basal and symptom-triggered treatment. The severity of OWS, cravings for opiates, sleep disorders, anxiety, depression, and side effects were assessed daily using international validated quantified assessment scales. Results. 
In group 1, 15 patients (79%) completed OWS treatment, compared with seven (47%) in group 2 (p = 0.05, Fisher’s exact test). There were no statistically significant differences between groups in terms of the dynamics of the severity of OWS (perhaps because of the limited number of patients). In the pregabalin-treated group, measures of the intensity of opiate cravings decreased during treatment as compared with group 2 (p = 0.05), and similar changes were seen in relation to anxiety (p = 0.05) and depression (p < 0.05); self-assessments of wellbeing increased (p < 0.05). There were no significant between-group differences in the overall incidence of side effects, though treatment tolerance was better in group 1. Conclusions. The treatment scheme including pregabalin was effective and safe and was well tolerated by patients, providing more successful completion of detoxification programs.", "title": "" }, { "docid": "d90bf10e386cc2f660e8a0c91ade0daa", "text": "We show how an agent can acquire conceptual knowledge by sensorimotor interaction with its environment. The method has much in common with the notion of image-schemas, which are central to Mandler’s theory of conceptual development. We show that Mandler’s approach is feasible in an artificial agent.", "title": "" }, { "docid": "21197ea03a0c9ce6061ea524aca10b52", "text": "Developers of gamified business applications face the challenge of creating motivating gameplay strategies and creative design techniques to deliver subject matter not typically associated with games in a playful way. We currently have limited models that frame what makes gamification effective (i.e., engaging people with a business application). Thus, we propose a design-centric model and analysis tool for gamification: The kaleidoscope of effective gamification. We take a look at current models of game design, self-determination theory and the principles of systems design to deconstruct the gamification layer in the design of these applications. Based on the layers of our model, we provide design guidelines for effective gamification of business applications.", "title": "" }, { "docid": "9a8f782acaf09a6a09ceeacfa0fd9fee", "text": "The objective of the current study was to compare the effects of sensory-integration therapy (SIT) and a behavioral intervention on rates of challenging behavior (including self-injurious behavior) in four children diagnosed with Autism Spectrum Disorder. For each of the participants a functional assessment was conducted to identify the variables maintaining challenging behavior. Results of these assessments were used to design function-based behavioral interventions for each participant. Recommendations for the sensory-integration treatment were designed by an Occupational Therapist, trained in the use of sensory-integration theory and techniques. The sensory-integration techniques were not dependent on the results of the functional assessments. The study was conducted within an alternating treatments design, with initial baseline and final best treatment phase. For each participant, results demonstrated that the behavioral intervention was more effective than the sensory integration therapy in the treatment of challenging behavior. In the best treatment phase, the behavioral intervention alone was implemented and further reduction was observed in the rate of challenging behavior. 
Analysis of saliva samples revealed relatively low levels of cortisol and very little stress-responsivity across the SIT condition and the behavioral intervention condition, which may be related to the participants' capacity to perceive stress in terms of its social significance.", "title": "" } ]
scidocsrr
dad1b6b6c959939ca82f5a0e537161bd
The Bilingual Child
[ { "docid": "d3b248232b7a01bba1d165908f55a316", "text": "Two views of bilingualism are presented--the monolingual or fractional view which holds that the bilingual is (or should be) two monolinguals in one person, and the bilingual or wholistic view which states that the coexistence of two languages in the bilingual has produced a unique and specific speaker-hearer. These views affect how we compare monolinguals and bilinguals, study language learning and language forgetting, and examine the speech modes--monolingual and bilingual--that characterize the bilingual's everyday interactions. The implications of the wholistic view on the neurolinguistics of bilingualism, and in particular bilingual aphasia, are discussed.", "title": "" } ]
[ { "docid": "99ac342b283dd0d657d4c59cb47434f1", "text": "Numerous robotic devices have been developed to assist hand rehabilitation; however, a majority of these are difficult for stroke survivors to wear. The purpose of this study was to develop an assistive device for treating flexion contracture, which supports the extension of each finger and may easily be worn on a paralyzed hand. To facilitate ease of use, we suggested a new wearing method for this wire-driven device with an elastic skeleton, allowing users to extend the device from the back of the hand onto the fingertip. The functional capacity of this device was measured through fingertip contact force and estimations of supporting torque. Results showed the device provides sufficient torque for finger extension with controlled wire tension. Moreover, experimental results confirmed that the novel design significantly decreased the time it took users to don the device compared to other designs.", "title": "" }, { "docid": "6f34ef57fcf0a2429e7dc2a3e56a99fd", "text": "Service-Oriented Architecture (SOA) provides a flexible framework for service composition. Using standard-based protocols (such as SOAP and WSDL), composite services can be constructed by integrating atomic services developed independently. Algorithms are needed to select service components with various QoS levels according to some application-dependent performance requirements. We design a broker-based architecture to facilitate the selection of QoS-based services. The objective of service selection is to maximize an application-specific utility function under the end-to-end QoS constraints. The problem is modeled in two ways: the combinatorial model and the graph model. The combinatorial model defines the problem as a multidimension multichoice 0-1 knapsack problem (MMKP). The graph model defines the problem as a multiconstraint optimal path (MCOP) problem. Efficient heuristic algorithms for service processes of different composition structures are presented in this article and their performances are studied by simulations. We also compare the pros and cons between the two models.", "title": "" }, { "docid": "137c30f07ac24f6dafd1429aabe3b931", "text": "Although demonstrated to be efficient and scalable to large-scale data sets, clustering-based recommender systems suffer from relatively low accuracy and coverage. To address these issues, we develop a multiview clustering method through which users are iteratively clustered from the views of both rating patterns and social trust relationships. To accommodate users who appear in two different clusters simultaneously, we employ a support vector regression model to determine a prediction for a given item, based on user-, itemand prediction-related features. To accommodate (cold) users who cannot be clustered due to insufficient data, we propose a probabilistic method to derive a prediction from the views of both ratings and trust relationships. Experimental results on three real-world data sets demonstrate that our approach can effectively improve both the accuracy and coverage of recommendations as well as in the cold start situation, moving clustering-based recommender systems closer towards practical use.", "title": "" }, { "docid": "57104614eb2ff83893f05fbb2ff65a7d", "text": "We have developed a novel assembly task partner robot to support workers in their task. 
This system, PaDY (in-time Parts/tools Delivery to You robot), delivers parts and tools to a worker by recognizing the worker's behavior in the car production line; thus, improving the efficiency of the work by reducing the worker's physical workload for picking parts and tools. For this purpose, it is necessary to plan the trajectory of the robot before the worker moves to the next location for another assembling task. First a prediction method for the worker's trajectory using a Markov model for a discretized work space into cells is proposed, then motion planning method is proposed using the predicted worker's trajectory and a mixture Gaussian distribution for each area corresponding to each procedure of the work process in the automobile coordinate system. Experimental results illustrate the validity of the proposed motion planning method.", "title": "" }, { "docid": "96412f11cdde09eddaf4397d2573278f", "text": "The repeated lifting of heavy weights has been identified as a risk factor for low back pain (LBP). Whether squat lifting leads to lower spinal loads than stoop lifting and whether lifting a weight laterally results in smaller forces than lifting the same weight in front of the body remain matters of debate. Instrumented vertebral body replacements (VBRs) were used to measure the in vivo load in the lumbar spine in three patients at level L1 and in one patient at level L3. Stoop lifting and squat lifting were compared in 17 measuring sessions, in which both techniques were performed a total of 104 times. The trunk inclination and amount of knee bending were simultaneously estimated from recorded images. Compared with the aforementioned lifting tasks, the patients additionally lifted a weight laterally with one hand 26 times. Only a small difference (4%) in the measured resultant force was observed between stoop lifting and squat lifting, although the knee-bending angle (stoop 10°, squat 45°) and trunk inclination (stoop 52°, squat 39°) differed considerably at the time points of maximal resultant forces. Lifting a weight laterally caused 14% less implant force on average than lifting the same weight in front of the body. The current in vivo biomechanical study does not provide evidence that spinal loads differ substantially between stoop and squat lifting. The anterior-posterior position of the lifted weight relative to the spine appears to be crucial for spinal loading.", "title": "" }, { "docid": "086c9316fff234e156eb0597732198f8", "text": "E-government services is growing at a considerable pace, especially in developing countries as government seeks to make use of ICT to serve its citizens efficiently and effectively. E-government projects cost are enormous and therefore it becomes imperative for governments to continuously evaluate these projects with a view of identifying the benefits, justifying investment made and improving the quality of services they offer to the citizens among other reasons through constant evaluation to gauge the success of these projects. Like the evaluation of all other information systems initiatives, the evaluation of e-governments in both theory and practice has proved to be important but complex. The complexity of evaluation is mostly due to the multiple perspectives involved, the difficulties of quantifying benefits, and the social and technical context of use. 
Existing frameworks for e-government software projects evaluation were analyzed with the aim of developing a post-implementation evaluation framework for e-government systems with the citizens/clients as the central focus. The main aim of this study was to investigate the citizen's perspective in evaluating e-government systems and to propose a set of evaluating factors that can be used for evaluation of e-government systems. A survey of users of Kenya Public Service Commission of Kenya (PSCK) Online Recruitment and Selection Database System was conducted. The study used four constructs for evaluation of e-government system namely; financial, social, technical and delivery platform constructs. Furthermore, specific factors to measure these four constructs with a consideration of the level of e-government in Kenya were identified. The study found that about 74% of the respondents were satisfied with the online job application system of Public Service Commission of Kenya.", "title": "" }, { "docid": "f6249304dbd2b275a70b2b12faeb4712", "text": "This paper describes a system, built and refined over the past five years, that automatically analyzes student programs assigned in a computer organization course. The system tests a student's program, then e-mails immediate feedback to the student to assist and encourage the student to continue testing, debugging, and optimizing his or her program. The automated feedback system improves the students' learning experience by allowing and encouraging them to improve their program iteratively until it is correct. The system has also made it possible to add challenging parts to each project, such as optimization and testing, and it has enabled students to meet these challenges. Finally, the system has reduced the grading load of University of Michigan's large classes significantly and helped the instructors handle the rapidly increasing enrollments of the 1990s. Initial experience with the feedback system showed that students depended too heavily on the feedback system as a substitute for their own testing. This problem was addressed by requiring students to submit a comprehensive test suite along with their program and by applying automated feedback techniques to help students learn how to write good test suites. Quantitative iterative feedback has proven to be extremely helpful in teaching students specific concepts about computer organization and general concepts on computer programming and testing.", "title": "" }, { "docid": "737b07a559fccc77c62a51abcad49f2b", "text": "Markov Logic Networks (MLNs) are well-suited for expressing statistics such as “with high probability a smoker knows another smoker” but not for expressing statements such as “there is a smoker who knows most other smokers”, which is necessary for modeling, e.g. influencers in social networks. To overcome this shortcoming, we study quantified MLNs which generalize MLNs by introducing statistical universal quantifiers, allowing to express also the latter type of statistics in a principled way. Our main technical contribution is to show that the standard reasoning tasks in quantified MLNs, maximum a posteriori and marginal inference, can be reduced to their respective MLN counterparts in polynomial time.", "title": "" }, { "docid": "ae4cebb3b37c1d168a827249c314af6f", "text": "A broadcast news stream consists of a number of stories and each story consists of several sentences. 
We capture this structure using a hierarchical model based on a word-level Recurrent Neural Network (RNN) sentence modeling layer and a sentence-level bidirectional Long Short-Term Memory (LSTM) topic modeling layer. First, the word-level RNN layer extracts a vector embedding the sentence information from the given transcribed lexical tokens of each sentence. These sentence embedding vectors are fed into a bidirectional LSTM that models the sentence and topic transitions. A topic posterior for each sentence is estimated discriminatively and a Hidden Markov model (HMM) follows to decode the story sequence and identify story boundaries. Experiments on the topic detection and tracking (TDT2) task indicate that the hierarchical RNN topic modeling achieves the best story segmentation performance with a higher F1-measure compared to conventional state-of-the-art methods. We also compare variations of our model to infer the optimal structure for the story segmentation task.", "title": "" }, { "docid": "04c7d8265e8b41aee67e5b11b3bc4fa2", "text": "Stretchable microelectromechanical systems (MEMS) possess higher mechanical deformability and adaptability than devices based on conventional solid and flexible substrates, hence they are particularly desirable for biomedical, optoelectronic, textile and other innovative applications. The stretchability performance can be evaluated by the failure strain of the embedded routing and the strain applied to the elastomeric substrate. The routings are divided into five forms according to their geometry: straight; wavy; wrinkly; island-bridge; and conductive-elastomeric. These designs are reviewed and their resistance-to-failure performance is investigated. The failure modeling, numerical analysis, and fabrication of routings are presented. The current review concludes with the essential factors of the stretchable electrical routing for achieving high performance, including routing angle, width and thickness. The future challenges of device integration and reliability assessment of the stretchable routings are addressed.", "title": "" }, { "docid": "319a24bca0b0849e05ce8cce327c549b", "text": "This paper presents a summary of the Computational Linguistics and Clinical Psychology (CLPsych) 2015 shared and unshared tasks. These tasks aimed to provide apples-to-apples comparisons of various approaches to modeling language relevant to mental health from social media. The data used for these tasks is from Twitter users who state a diagnosis of depression or post traumatic stress disorder (PTSD) and demographically-matched community controls. The unshared task was a hackathon held at Johns Hopkins University in November 2014 to explore the data, and the shared task was conducted remotely, with each participating team submitted scores for a held-back test set of users. The shared task consisted of three binary classification experiments: (1) depression versus control, (2) PTSD versus control, and (3) depression versus PTSD. Classifiers were compared primarily via their average precision, though a number of other metrics are used along with this to allow a more nuanced interpretation of the performance measures.", "title": "" }, { "docid": "1f50a6d6e7c48efb7ffc86bcc6a8271d", "text": "Creating short summaries of documents with respect to a query has applications in for example search engines, where it may help inform users of the most relevant results. 
Constructing such a summary automatically, with the potential expressiveness of a human-written summary, is a difficult problem yet to be fully solved. In this thesis, a neural network model for this task is presented. We adapt an existing dataset of news article summaries for the task and train a pointer-generator model using this dataset to summarize such articles. The generated summaries are then evaluated by measuring similarity to reference summaries. We observe that the generated summaries exhibit abstractive properties, but also that they have issues, such as rarely being truthful. However, we show that a neural network summarization model, similar to existing neural network models for abstractive summarization, can be constructed to make use of queries for more targeted summaries.", "title": "" }, { "docid": "f1cbd60e1bd721e185bbbd12c133ad91", "text": "Defect prediction models are a well-known technique for identifying defect-prone files or packages such that practitioners can allocate their quality assurance efforts (e.g., testing and code reviews). However, once the critical files or packages have been identified, developers still need to spend considerable time drilling down to the functions or even code snippets that should be reviewed or tested. This makes the approach too time consuming and impractical for large software systems. Instead, we consider defect prediction models that focus on identifying defect-prone (“risky”) software changes instead of files or packages. We refer to this type of quality assurance activity as “Just-In-Time Quality Assurance,” because developers can review and test these risky changes while they are still fresh in their minds (i.e., at check-in time). To build a change risk model, we use a wide range of factors based on the characteristics of a software change, such as the number of added lines, and developer experience. A large-scale study of six open source and five commercial projects from multiple domains shows that our models can predict whether or not a change will lead to a defect with an average accuracy of 68 percent and an average recall of 64 percent. Furthermore, when considering the effort needed to review changes, we find that using only 20 percent of the effort it would take to inspect all changes, we can identify 35 percent of all defect-inducing changes. Our findings indicate that “Just-In-Time Quality Assurance” may provide an effort-reducing way to focus on the most risky changes and thus reduce the costs of developing high-quality software.", "title": "" }, { "docid": "00b97abc30a5e30791a6214fef185474", "text": "A key challenge for grid computing is creating large-scale, end-to-end scientific applications that draw from pools of specialized scientific components to derive elaborate new results. We develop Pegasus, an AI planning system which is integrated into the grid environment that takes a user's highly specified desired results, generates valid workflows that take into account available resources, and submits the workflows for execution on the grid. We also begin to extend it as a more distributed and knowledge-rich architecture.", "title": "" }, { "docid": "08bef09a01414bafcbc778fea85a7c0a", "text": "The use of energy-minimizing curves, known as “snakes,” to extract features of interest in images has been introduced by Kass, Witkin & Terzopoulos (Int. J. Comput. Vision 1, 1987, 321-331). We present a model of deformation which solves some of the problems encountered with the original method. 
The external forces that push the curve to the edges are modified to give more stable results. The original snake, when it is not close enough to contours, is not attracted by them and straightens to a line. Our model makes the curve behave like a balloon which is inflated by an additional force. The initial curve need no longer be close to the solution to converge. The curve passes over weak edges and is stopped only if the edge is strong. We give examples of extracting a ventricle in medical images. We have also made a first step toward 3D object reconstruction, by tracking the extracted contour on a series of successive cross sections.", "title": "" }, { "docid": "88db40bacaf7c684cd3f6cfe74a43354", "text": "Resting-state functional Magnetic Resonance Imaging (rsfMRI) of the human brain has revealed multiple large-scale neural networks within a hierarchical and complex structure of coordinated functional activity. These distributed neuroanatomical systems provide a sensitive window on brain function and its disruption in a variety of neuropathological conditions. The study of macroscale intrinsic connectivity networks in preclinical species, where genetic and environmental conditions can be controlled and manipulated with high specificity, offers the opportunity to elucidate the biological determinants of these alterations. While rsfMRI methods are now widely used in human connectivity research, these approaches have only relatively recently been back-translated into laboratory animals. Here we review recent progress in the study of functional connectivity in rodent species, emphasising the ability of this approach to resolve large-scale brain networks that recapitulate neuroanatomical features of known functional systems in the human brain. These include, but are not limited to, a distributed set of regions identified in rats and mice that may represent a putative evolutionary precursor of the human default mode network (DMN). The impact and control of potential experimental and methodological confounds are also critically discussed. Finally, we highlight the enormous potential and some initial application of connectivity mapping in transgenic models as a tool to investigate the neuropathological underpinnings of the large-scale connectional alterations associated with human neuropsychiatric and neurological conditions. We conclude by discussing the translational potential of these methods in basic and applied neuroscience.", "title": "" }, { "docid": "939593fce7d86ae8ccd6a67532fd78b0", "text": "In patients with spinal cord injury, the primary or mechanical trauma seldom causes total transection, even though the functional loss may be complete. In addition, biochemical and pathological changes in the cord may worsen after injury. To explain these phenomena, the concept of the secondary injury has evolved for which numerous pathophysiological mechanisms have been postulated. This paper reviews the concept of secondary injury with special emphasis on vascular mechanisms. Evidence is presented to support the theory of secondary injury and the hypothesis that a key mechanism is posttraumatic ischemia with resultant infarction of the spinal cord. Evidence for the role of vascular mechanisms has been obtained from a variety of models of acute spinal cord injury in several species. Many different angiographic methods have been used for assessing microcirculation of the cord and for measuring spinal cord blood flow after trauma. 
With these techniques, the major systemic and local vascular effects of acute spinal cord injury have been identified and implicated in the etiology of secondary injury. The systemic effects of acute spinal cord injury include hypotension and reduced cardiac output. The local effects include loss of autoregulation in the injured segment of the spinal cord and a marked reduction of the microcirculation in both gray and white matter, especially in hemorrhagic regions and in adjacent zones. The microcirculatory loss extends for a considerable distance proximal and distal to the site of injury. Many studies have shown a dose-dependent reduction of spinal cord blood flow varying with the severity of injury, and a reduction of spinal cord blood flow which worsens with time after injury. The functional deficits due to acute spinal cord injury have been measured electrophysiologically with techniques such as motor and somatosensory evoked potentials and have been found proportional to the degree of posttraumatic ischemia. The histological effects include early hemorrhagic necrosis leading to major infarction at the injury site. These posttraumatic vascular effects can be treated. Systemic normotension can be restored with volume expansion or vasopressors, and spinal cord blood flow can be improved with dopamine, steroids, nimodipine, or volume expansion. The combination of nimodipine and volume expansion improves posttraumatic spinal cord blood flow and spinal cord function measured by evoked potentials. These results provide strong evidence that posttraumatic ischemia is an important secondary mechanism of injury, and that it can be counteracted.", "title": "" }, { "docid": "d0e3e1a5d5bfaa2aecc046dbd9be8e48", "text": "Wind power generation studies of slow phenomena using a detailed model can be difficult to perform with a conventional offline simulation program. Due to the computational power and high-speed input and output, a real-time simulator is capable of conducting repetitive simulations of wind profiles in a short time with detailed models of critical components and allows testing of prototype controllers through hardware-in-the-loop (HIL). This paper discusses methods to overcome the challenges of real-time simulation of wind systems, characterized by their complexity and high-frequency switching. A hybrid flow-battery supercapacitor energy storage system (ESS), coupled in a wind turbine generator to smooth wind power, is studied by real-time HIL simulation. The prototype controller is embedded in one real-time simulator, while the rest of the system is implemented in another independent simulator. The simulation results of the detailed wind system model show that the hybrid ESS has a lower battery cost, higher battery longevity, and improved overall efficiency over its reference ESS.", "title": "" }, { "docid": "b47d83764ac8e98fd17c39c200e92bf5", "text": "This study was conducted to analyze whether internal (IR) and external (ER) rotator shoulder muscles weakness and/or imbalance collected through a preseason assessment could be predictors of subsequent shoulder injury during a season in handball players. In preseason, 16 female elite handball players (HPG) and 14 healthy female nonathletes (CG) underwent isokinetic IR and ER strength test with use of a Con-Trex® dynamometer in a seated position with 45° shoulder abduction in scapular plane, at 60, 120 and 240°/s in concentric and at 60°/s in eccentric, for both sides. 
An imbalanced muscular strength profile was determined using statistically selected cut-offs from CG values. For HPG, all newly incurred shoulder injuries were reported during the season. There were significant differences between HPG and CG only for dominant eccentric IR strength, ER/IR ratio at 240°/s and for IRecc/ERcon ratio. In HPG, IR and ER strength was higher, and ER/IR ratios lower for dominant than for nondominant side. The relative risk was 2.57 (95%CI: 1.60-3.54; P<0.05) if handball players had an imbalanced muscular strength profile. In youth female handball players, IR and ER muscle strength increases on the dominant side without ER/IR imbalances; and higher injury risk was associated with an imbalanced muscular strength profile.", "title": "" }, { "docid": "e6c7713b9ff08aa01d98c9fec77ebf7a", "text": "Every day, many users purchase products, book travel tickets, and buy goods and services through the web. Users also share their views about products, hotels, news, and topics on the web in the form of reviews, blogs, comments, etc. Many users read review information given on the web to make decisions such as buying products, watching movies, or going to restaurants. Reviews contain users' opinions about a product, event, or topic. It is difficult for web users to read and understand the contents of a large number of reviews. Important and useful information can be extracted from reviews through an opinion mining and summarization process. We present a machine learning and SentiWordNet based method for opinion mining from hotel reviews and a sentence relevance score based method for opinion summarization of hotel reviews. We obtained about 87% accuracy in classifying hotel reviews as positive or negative with the machine learning method. The classified and summarized hotel review information helps web users to understand review contents easily in a short time.", "title": "" } ]
scidocsrr
9c1c529ed3b714cf3847c9c4d9eb3c21
Effects of Dance Movement Therapy on Adult Patients with Autism Spectrum Disorder: A Randomized Controlled Trial
[ { "docid": "bb94ef2ab26fddd794a5b469f3b51728", "text": "This study examines the treatment outcome of a ten weeks dance movement therapy intervention on quality of life (QOL). The multicentred study used a subject-design with pre-test, post-test, and six months follow-up test. 162 participants who suffered from stress were randomly assigned to the dance movement therapy treatment group (TG) (n = 97) and the wait-listed control group (WG) (65). The World Health Organization Quality of Life Questionnaire 100 (WHOQOL-100) and Munich Life Dimension List were used in both groups at all three measurement points. Repeated measures ANOVA revealed that dance movement therapy participants in all QOL dimensions always more than the WG. In the short term, they significantly improved in the Psychological domain (p > .001, WHOQOL; p > .01, Munich Life Dimension List), Social relations/life (p > .10, WHOQOL; p > .10, Munich Life Dimension List), Global value (p > .05, WHOQOL), Physical health (p > .05, Munich Life Dimension List), and General life (p > .10, Munich Life Dimension List). In the long term, dance movement therapy significantly enhanced the psychological domain (p > .05, WHOQOL; p > .05, Munich Life Dimension List), Spirituality (p > .10, WHOQOL), and General life (p > .05, Munich Life Dimension List). Dance movement therapy is effective in the shortand long-term to improve QOL. © 2012 Elsevier Inc. All rights reserved.", "title": "" } ]
[ { "docid": "f5d9d701bcc3b629dc90db57448c443c", "text": "IoT is a driving force for the next generation of cyber-physical manufacturing systems. The construction and operation of these systems is a big challenge. In this paper, a framework that exploits model driven engineering to address the increasing complexity in this kind of systems is presented. The framework utilizes the model driven engineering paradigm to define a domain specific development environment that allows the control engineer, a) to transform the mechanical units of the plant to Industrial Automation Things (IAT), i.e., to IoT-compliant manufacturing cyber-physical components, and, b) to specify the cyber components, which implement the plant processes, as physical mashups, i.e., compositions of plant services provided by IATs. The UML4IoT profile is extended to address the requirements of the framework. The approach was successfully applied on a laboratory case study to demonstrate its effectiveness in terms of flexibility and responsiveness.", "title": "" }, { "docid": "a338df86cf504d246000c42512473f93", "text": "Natural Language Processing (NLP) has emerged with a wide scope of research in the area. The Burmese language, also called the Myanmar Language is a resource scarce, tonal, analytical, syllable-timed and principally monosyllabic language with Subject-Object-Verb (SOV) ordering. NLP of Burmese language is also challenged by the fact that it has no white spaces and word boundaries. Keeping these facts in view, the current paper is a first formal attempt to present a bibliography of research works pertinent to NLP tasks in Burmese language. Instead of presenting mere catalogue, the current work is also specifically elaborated by annotations as well as classifications of NLP task research works in NLP related categories. The paper presents the state-of-the-art of Burmese NLP tasks. Both annotations and classifications of NLP tasks of Burmese language are useful to the scientific community as it shows where the field of research in Burmese NLP is going. In fact, to the best of author’s knowledge, this is first work of its kind worldwide for any language. For a period spanning more than 25 years, the paper discusses Burmese language Word Identification, Segmentation, Disambiguation, Collation, Semantic Parsing and Tokenization followed by Part-Of-Speech (POS) Tagging, Machine Translation Systems (MTS), Text Keying/Input, Recognition and Text Display Methods. Burmese language WordNet, Search Engine and influence of other languages on Burmese language are also discussed.", "title": "" }, { "docid": "94b6d4d28d708303530394270a3cfe75", "text": "The search for the legendary, highly erogenous vaginal region, the Gräfenberg spot (G-spot), has produced important data, substantially improving understanding of the complex anatomy and physiology of sexual responses in women. Modern imaging techniques have enabled visualization of dynamic interactions of female genitals during self-sexual stimulation or coitus. Although no single structure consistent with a distinct G-spot has been identified, the vagina is not a passive organ but a highly dynamic structure with an active role in sexual arousal and intercourse. The anatomical relationships and dynamic interactions between the clitoris, urethra, and anterior vaginal wall have led to the concept of a clitourethrovaginal (CUV) complex, defining a variable, multifaceted morphofunctional area that, when properly stimulated during penetration, could induce orgasmic responses. 
Knowledge of the anatomy and physiology of the CUV complex might help to avoid damage to its neural, muscular, and vascular components during urological and gynaecological surgical procedures.", "title": "" }, { "docid": "bb5f1836b7e694a571f7e9a0d6845761", "text": "Rheumatic heart disease (RHD) results in morbidity and mortality that is disproportionate among individuals in developing countries compared to those living in economically developed countries. The global burden of disease is uncertain because most previous studies to determine the prevalence of RHD in children relied on clinical screening criteria that lacked the sensitivity to detect most cases. The present study was performed to determine the prevalence of RHD in children and young adults in León, Nicaragua, an area previously thought to have a high prevalence of RHD. This was an observational study of 3,150 children aged 5 to 15 years and 489 adults aged 20 to 35 years randomly selected from urban and rural areas of León. Cardiopulmonary examinations and Doppler echocardiographic studies were performed on all subjects. Doppler echocardiographic diagnosis of RHD was based on predefined consensus criteria that were developed by a working group of the World Health Organization and the National Institutes of Health. The overall prevalence of RHD in children was 48 in 1,000 (95% confidence interval 35 in 1,000 to 60 in 1,000). The prevalence in urban children was 34 in 1,000, and in rural children it was 80 in 1,000. Using more stringent Doppler echocardiographic criteria designed to diagnose definite RHD in adults, the prevalence was 22 in 1,000 (95% confidence interval 8 in 1,000 to 37 in 1,000). In conclusion, the prevalence of RHD among children and adults in this economically disadvantaged population far exceeds previously predicted rates. The findings underscore the potential health and economic burden of acute rheumatic fever and RHD and support the need for more effective measures of prevention, which may include safe, effective, and affordable vaccines to prevent the streptococcal infections that trigger the disease.", "title": "" }, { "docid": "15b38be44110ded3407b152af2f65457", "text": "What will 5G be? What it will not be is an incremental advance on 4G. The previous four generations of cellular technology have each been a major paradigm shift that has broken backward compatibility. Indeed, 5G will need to be a paradigm shift that includes very high carrier frequencies with massive bandwidths, extreme base station and device densities, and unprecedented numbers of antennas. However, unlike the previous four generations, it will also be highly integrative: tying any new 5G air interface and spectrum together with LTE and WiFi to provide universal high-rate coverage and a seamless user experience. To support this, the core network will also have to reach unprecedented levels of flexibility and intelligence, spectrum regulation will need to be rethought and improved, and energy and cost efficiencies will become even more critical considerations. This paper discusses all of these topics, identifying key challenges for future research and preliminary 5G standardization activities, while providing a comprehensive overview of the current literature, and in particular of the papers appearing in this special issue.", "title": "" }, { "docid": "93662f82b9c59b53575d5d255814e20d", "text": "The process of attributing paintings relies partly upon recognition of an artist’s hand. Around the middle of the last century, Maurits M. 
van Dantzig (1903–1960) attempted to define the ‘characteristic touch’ of the painter, Vincent van Gogh (1853–1890). His broader aim was to develop a flexible yet precise method to measure the features of both spontaneity and inhibition, evident in the style of brushwork for example. The underlying idea that these qualities can be used to separate a genuine work from a second-rate copy or forgery still plays a key role in attribution studies today. A recent initiative explores the potential of advanced computer image analysis techniques to help identify and quantify these properties at the scale of brushwork. This paper describes the basic principles of the method used, involving statistical analysis of different size wavelets present in digital images of paintings by Van Gogh or other artists.", "title": "" }, { "docid": "d73d1f314245cdcf5265edb32c8b6405", "text": "An intriguing and unexpected result for students learning numerical analysis is that Newton’s method, applied to the simple polynomial z − 1 = 0 in the complex plane, leads to intricately interwoven basins of attraction of the roots. As an example of an interesting open question that may help to stimulate student interest in numerical analysis, we investigate the question of whether a damping method, which is designed to increase the likelihood of convergence for Newton’s method, modifies the fractal structure of the basin boundaries. The overlap of the frontiers of numerical analysis and nonlinear dynamics provides many other problems that can help to make numerical analysis courses interesting.", "title": "" }, { "docid": "efcbb22d4e1eb3e24dac8ab6698c8c75", "text": "/ 0+120+354768491;:549<>=?49@BA CEDGFHCJIKFML'N OQP?0+3?N LRPS<UT VXW5<B4Y3?N 0 Z\\[^]_[^` acb deaf]_g afhji+aUkjlnm_`+hompacb qrh st4Y3?u20+1v4 W 1;LR<>0+@>N+P w <>L9x\\0+1yV5zG{|I }~aU`+m_[^]pa b R]_€Uho[^k‚^ƒ ln„+ho€U„…[^]†b y€>‡ s 0+3?3?u2@M{':549@>:?4 ˆ LRP?<>493‰TGCJ3?@fTBuŠTBPST>09V5DŒ‹8z VzG{|I k‚g aUk‚g a>ln^k^b `Ž+ƒb [Ji+ƒ ‘ <Uu2N {|u2’ LR3 CEDGFHCJI“FHL'N OQP?0 3SN LRP?<fT VXW5<B493SN 0 ”Qhom_>b • m_‡ €U`…lnm_`+hompa b qrh ˆ <Uu2@UT>uv493'–JIGPS—RP?@UT>u23˜{S4Yu;T4 CEDGFHCJI“FHL'N OQP?0 3SN LRP?<fT VXW5<B493SN 0 ™ hom_kvšompaU`+›;œXƒ >ƒ+kvšom_`Yb •+afm_šja>lnm_`+hompa b qžh", "title": "" }, { "docid": "9e1e42d27521eb20b6fef10087dd2d9a", "text": "This paper identifies the need for developing new ways to study curiosity in the context of today’s pervasive technologies and unprecedented information access. Curiosity is defined in this paper in a way which incorporates the concomitant constructs of interest and engagement. A theoretical model for curiosity, interest and engagement in new media technology-pervasive learning environments is advanced, taking into consideration personal, situational and contextual factors as influencing variables. While the path associated with curiosity, interest, and engagement during learning and research has remained essentially the same, how individuals tackle research and information-seeking tasks and factors which sustain such efforts have changed. Learning modalities for promoting this theoretical model are discussed leading to a series of recommendations for future research. 
This article offers a multi-lens perspective on curiosity and suggests a multi-method research agenda for validating such a perspective.", "title": "" }, { "docid": "1e18f23ad8ddc4333406c4703d51d92b", "text": "from its introductory beginning and across its 446 pages, centered around the notion that computer simulations and games are not at all disparate but very much aligning concepts. This not only makes for an interesting premise but also an engaging book overall which offers a resource into an educational subject (for it is educational simulations that the authors predominantly address) which is not overly saturated. The aim of the book as a result of this decision, which is explained early on, but also because of its subsequent structure, is to enlighten its intended audience in the way that effective and successful simulations/games operate (on a theoretical/conceptual and technical level, although in the case of the latter the book intentionally never delves into the realms of software programming specifics per se), can be designed, built and, finally, evaluated. The book is structured in three different and distinct parts, with four chapters in the first, six chapters in the second and six chapters in the third and final one. The first chapter is essentially a \"teaser\", according to the authors. There are a couple of more traditional simulations described, a couple of well-known mainstream games (Mario Kart and Portal 2, interesting choices, especially the first one) and then the authors proceed to present applications which show the simulation and game convergence. These applications have a strong educational outlook (covering on this occasion very diverse topics, from flood prevention to drink driving awareness, amongst others). This chapter works very well in initiating the audience in the subject matter and drawing the necessary parallels. With all of the simulations/games/educational applications included", "title": "" }, { "docid": "86f273bc450b9a3b6acee0e8d183b3cd", "text": "This paper aims to determine which is the best human action recognition method based on features extracted from RGB-D devices, such as the Microsoft Kinect. A review of all the papers that make reference to MSR Action3D, the most used dataset that includes depth information acquired from a RGB-D device, has been performed. We found that the validation method used by each work differs from the others. So, a direct comparison among works cannot be made. However, almost all the works present their results comparing them without taking into account this issue. Therefore, we present different rankings according to the methodology used for the validation in order to clarify the existing confusion.", "title": "" }, { "docid": "eab5e6b1cf3df1d9141f9192cf59d753", "text": "Horse racing is a popular sport in Mauritius which attracts huge crowds to Champ de Mars. Nevertheless, bettors face many difficulties in predicting winning horses to make a profit. The principal factors affecting a race were determined. Each factor, namely jockeys, new horses, favourite horses, previous performance, draw, type of horses, weight, rating and stable, has been examined and appropriate weights have been assigned to each of them depending on their importance. Furthermore, data for the whole racing season of 2010 was considered. The results of 240 races of 2010 have been used to determine the degree to which each factor affects the chance of each horse. The weights were then summed up to predict winners.
The system can predict winners with an accuracy of 58%, which is 4.7 out of 8 winners on average. The software outperformed the predictions made by the best professional tipsters in Mauritius, who could forecast only 3.6 winners out of 8 races.", "title": "" }, { "docid": "e1a1faf5d2121a3d5cd993d0f9c257a5", "text": "This paper is the product of an area-exam study. It intends to explain the concept of ontology in the context of knowledge engineering research, which is a sub-area of artificial intelligence research. It introduces the state of the art on methodologies and tools for building ontologies. It also tries to point out some possible future directions for ontology research.", "title": "" }, { "docid": "0660b717561bedaa8d6da4f59266fabe", "text": "Printed quasi-Yagi antennas [1] have been used in a number of applications requiring broad-band planar end-fire antennas. So far they have been mostly realized on high dielectric constant substrates with moderate thickness in order to excite the TE0 surface wave along the dielectric substrate. An alternative design of a printed Yagi-Uda antenna, developed on a low dielectric constant material, was presented in [2]. In this design, an additional director and a reflector were used to increase the gain of the antenna. However, the achieved bandwidth of the antenna is quite narrow (about 3–4%) compared to the bandwidth of a quasi-Yagi antenna fabricated on a high dielectric constant substrate [1]. Another disadvantage of a conventional quasi-Yagi antenna fabricated on a low dielectric permittivity substrate is that the length of the driver is increased and it is difficult to achieve 0.5 λ0 spacing between the elements required for scanning arrays, where λ0 corresponds to a free-space wavelength at the center frequency of the antenna.", "title": "" }, { "docid": "41fe7d2febb05a48daf69b4a41c77251", "text": "The use of multi-objective evolutionary algorithms for the construction of neural ensembles is a relatively new area of research. We recently proposed an ensemble learning algorithm called DIVACE (DIVerse and ACcurate Ensemble learning algorithm). It was shown that DIVACE tries to find an optimal trade-off between diversity and accuracy as it searches for an ensemble for some particular pattern recognition task by treating these two objectives explicitly separately. A detailed discussion of DIVACE together with further experimental studies forms the essence of this paper. A new diversity measure, which we call Pairwise Failure Crediting (PFC), is proposed. This measure forms one of the two evolutionary pressures being exerted explicitly in DIVACE. Experiments with this diversity measure as well as comparisons with previously studied approaches are hence considered. Detailed analysis of the results shows that DIVACE, as a concept, has promise. Mathematical Subject Classification (2000): 68T05, 68Q32, 68Q10.", "title": "" }, { "docid": "80edb9c7e4bdaca1391ec671f7445381", "text": "We propose an efficient multikernel adaptive filtering algorithm with double regularizers, providing a novel pathway towards online model selection and learning. The task is the challenging nonlinear adaptive filtering under no knowledge about a suitable kernel.
Under this limited-knowledge assumption on an underlying model of a system of interest, many possible kernels are employed and one of the regularizers, a block ℓ1 norm for kernel groups, contributes to selecting a proper model (relevant kernels) in online and adaptive fashion, preventing a nonlinear filter from overfitting to noisy data. The other regularizer is the block ℓ1 norm for data groups, contributing to updating the dictionary adaptively. As the resulting cost function contains two nonsmooth (but proximable) terms, we approximate the latter regularizer by its Moreau envelope and apply the adaptive proximal forward-backward splitting method to the approximated cost function. Numerical examples show the efficacy of the proposed algorithm.", "title": "" }, { "docid": "1348fe81e61e656c808f42f80b6f2bd0", "text": "BACKGROUND\nA robust medical claims review system is crucial for addressing fraud and abuse and ensuring financial viability of health insurance organisations. This paper assesses claims adjustment rate of the paper- and electronic-based claims reviews of the National Health Insurance Scheme (NHIS) in Ghana.\n\n\nMETHODS\nThe study was a cross-sectional comparative assessment of paper- and electronic-based claims reviews of the NHIS. Medical claims of subscribers for the year, 2014 were requested from the claims directorate and analysed. Proportions of claims adjusted by the paper- and electronic-based claims reviews were determined for each type of healthcare facility. Bivariate analyses were also conducted to test for differences in claims adjustments between healthcare facility types, and between the two claims reviews.\n\n\nRESULTS\nThe electronic-based review made overall adjustment of 17.0% from GHS10.09 million (USD2.64 m) claims cost whilst the paper-based review adjusted 4.9% from a total of GHS57.50 million (USD15.09 m) claims cost received, and the difference was significant (p < 0.001). However, there were no significant differences in claims cost adjustment rate between healthcare facility types by the electronic-based (p = 0.0656) and by the paper-based reviews (p = 0.6484).\n\n\nCONCLUSIONS\nThe electronic-based review adjusted significantly higher claims cost than the paper-based claims review. Scaling up the electronic-based review to cover claims from all accredited care providers could reduce spurious claims cost to the scheme and ensure long term financial sustainability.", "title": "" }, { "docid": "2b4a2165cebff8326f97cab3063e1a62", "text": "Pneumatic artificial muscles (PAMs) are becoming more commonly used as actuators in modern robotics. The most made and common type of these artificial muscles in use is the McKibben artificial muscle that was developed in 1950’s. This paper presents the geometric model of PAM and different Matlab models for pneumatic artificial muscles. The aim of our models is to relate the pressure and length of the pneumatic artificial muscles to the force it exerts along its entire exists.", "title": "" }, { "docid": "ae4270f39129e2bb6e0481bd914a195d", "text": "This paper illustrates a proposal for the development of an annotation scheme and a corpus for storyline extraction and evaluation from large collections of documents clustered around a topic. The scheme extends existing annotation efforts for event coreference and temporal processing, introducing additional layers and addressing shortcomings. 
We also show how a storyline can be derived from the annotated data.", "title": "" }, { "docid": "f7deaa9b65be6b8de9f45fb0dec3879d", "text": "This paper reports the first 8kV+ ESD-protected SP10T transmit/receive (T/R) antenna switch for quad-band (0.85/0.9/1.8/1.9-GHz) GSM and multiple W-CDMA smartphones fabricated in a 180-nm SOI CMOS. A novel physics-based switch-ESD co-design methodology is applied to ensure full-chip optimization for an SP10T test chip and its ESD protection circuit simultaneously.", "title": "" } ]
scidocsrr
c5f31024015505c4ebf259537cff0fa7
Sharing Knowledge and Expertise: The CSCW View of Knowledge Management
[ { "docid": "7c013039b4efc57fb5eda53518944014", "text": "Online question-answering services provide mechanisms for knowledge exchange by allowing users to ask and answer questions on a wide range of topics. A key question for designing such services is whether charging a price has an effect on answer quality. Two field experiments using one such service, Google Answers, offer conflicting answers to this question. To resolve this inconsistency, we re-analyze data from Harper et al. [5] and Chen et al. [2] to study the price effect in greater depth. Decomposing the price effect into two different levels yields results that reconcile those of the two field experiments. Specifically, we find that: (1) a higher price significantly increases the likelihood that a question receives an answer and (2) for questions that receive an answer, there is no significant price effect on answer quality. Additionally, we find that the rater background makes a difference in evaluating answer quality.", "title": "" } ]
[ { "docid": "d44b351cb1263cbd28cc7fc8c5ebb811", "text": "Online distributed applications are becoming more and more important for users nowadays. There are an increasing number of individuals and companies developing applications and selling them online. In the past couple of years, Apple Inc. has successfully built an online application distribution platform -- iTunes App Store, which is facilitated by their fashionable hardware such like iPad or iPhone. Unlike other traditional selling networks, iTunes has some unique features to advertise their application, for example, daily application ranking, application recommendation, free trial application usage, application update, and user comments. All of these make us wonder what makes an application popular in the iTunes store and why users are interested in some specific type of applications. We plan to answer these questions by using machine learning techniques.", "title": "" }, { "docid": "be3c8186c6e818e7cdba74cc4e7148e2", "text": "A network latency emulator allows IT architects to thoroughly investigate how network latencies impact workload performance. Software-based emulation tools have been widely used by researchers and engineers. It is possible to use commodity server computers for emulation and set up an emulation environment quickly without outstanding hardware cost. However, existing software-based tools built in the network stack of an operating system are not capable of supporting the bandwidth of today's standard interconnects (e.g., 10GbE) and emulating sub-milliseconds latencies likely caused by network virtualization in a datacenter. In this paper, we propose a network latency emulator (DEMU) supporting broad bandwidth traffic with sub-milliseconds accuracy, which is based on an emerging packet processing framework, DPDK. It avoids the overhead of the network stack by directly interacting with NIC hardware. Through experiments, we confirmed that DEMU can emulate latencies on the order of 10 µs for short-packet traffic at the line rate of 10GbE. The standard deviation of inserted delays was only 2–3 µs. This is a significant improvement from a network emulator built in the Linux Kernel (i.e., NetEm), which loses more than 50% of its packets for the same 10GbE traffic. 
For 1 Gbps traffic, the latency deviation of NetEm was approximately 20 µs, while that of our mechanism was 2 orders of magnitude smaller (i.e., only 0.3 µs).", "title": "" }, { "docid": "398040041440f597b106c49c79be27ea", "text": "BACKGROUND\nRecently, human germinal center-associated lymphoma (HGAL) gene protein has been proposed as an adjunctive follicular marker to CD10 and BCL6.\n\n\nMETHODS\nOur aim was to evaluate immunoreactivity for HGAL in 82 cases of follicular lymphomas (FLs)--67 nodal, 5 cutaneous and 10 transformed--which were all analysed histologically, by immunohistochemistry and PCR.\n\n\nRESULTS\nImmunostaining for HGAL was more frequently positive (97.6%) than that for BCL6 (92.7%) and CD10 (90.2%) in FLs; the cases negative for bcl6 and/or for CD10 were all positive for HGAL, whereas the two cases negative for HGAL were positive with BCL6; no difference in HGAL immunostaining was found among different malignant subtypes or grades.\n\n\nCONCLUSIONS\nTherefore, HGAL can be used in the immunostaining of FLs as the most sensitive germinal center (GC)-marker; when applied alone, it would half the immunostaining costs, reserving the use of the other two markers only to HGAL-negative cases.", "title": "" }, { "docid": "49ca032d3d62eae113fdaa81538151d1", "text": "Wikipedia articles contain, besides free text, various types of structured information in the form of wiki markup. The type of wiki content that is most valuable for search are Wikipedia infoboxes, which display an article’s most relevant facts as a table of attribute-value pairs on the top right-hand side of the Wikipedia page. Infobox data is not used by Wikipedia’s own search engine. Standard Web search engines like Google or Yahoo also do not take advantage of the data. In this paper, we present Faceted Wikipedia Search, an alternative search interface for Wikipedia, which facilitates infobox data in order to enable users to ask complex questions against Wikipedia knowledge. By allowing users to query Wikipedia like a structured database, Faceted Wikipedia Search helps them to truly exploit Wikipedia’s collective intelligence.", "title": "" }, { "docid": "6a6691d92503f98331ad7eed61a9c357", "text": "This paper presents a new 3D point cloud classification benchmark data set with over four billion manually labelled points, meant as input for data-hungry (deep) learning methods. We also discuss first submissions to the benchmark that use deep convolutional neural networks (CNNs) as a work horse, which already show remarkable performance improvements over state-of-the-art. CNNs have become the de-facto standard for many tasks in computer vision and machine learning like semantic segmentation or object detection in images, but have no yet led to a true breakthrough for 3D point cloud labelling tasks due to lack of training data. With the massive data set presented in this paper, we aim at closing this data gap to help unleash the full potential of deep learning methods for 3D labelling tasks. Our semantic3D.net data set consists of dense point clouds acquired with static terrestrial laser scanners. It contains 8 semantic classes and covers a wide range of urban outdoor scenes: churches, streets, railroad tracks, squares, villages, soccer fields and castles. We describe our labelling interface and show that our data set provides more dense and complete point clouds with much higher overall number of labelled points compared to those already available to the research community. 
We further provide baseline method descriptions and comparison between methods submitted to our online system. We hope semantic3D.net will pave the way for deep learning methods in 3D point cloud labelling to learn richer, more general 3D representations, and first submissions after only a few months indicate that this might indeed be the case.", "title": "" }, { "docid": "a74fc2476ec43b07eccfe2be1c9ef2cb", "text": "In this paper, a broadband high efficiency Class-AB balanced power amplifier (PA) is presented. The proposed PA offers a high efficiency of > 43% for a band of 400 MHz to 1.4 GHz. The broadband matching circuits were realized with microstrip-radial-stubs (MRS) on low loss Rogers 5880 substrate with 0.78 mm thickness and 2.2 dielectric constant. The input and output matching is better than −13 dB throughout the band. The PA delivers maximum output power of 41.5 dBm with a flat gain of 11.4–13.5 dB. Due to high gain, stability, efficiency, and broadband, the proposed PA is thus suitable for recent and upcoming wireless communication systems.", "title": "" }, { "docid": "01e1adcce109994a36a3a59625831b87", "text": "A mother who murders her child challenges the empathic skills of evaluating clinicians. In this chapter, original research, supplemented by detailed case histories, compares women adjudicated criminally responsible for the murders of their children with those adjudicated not guilty by reason of insanity.", "title": "" }, { "docid": "a00e157ed3160d880baad5a16952e00c", "text": "In current minimally invasive surgery techniques, the tactile information available to the surgeon is limited. Improving tactile sensation could enhance the operability of surgical instruments. Considering surgical applications, requirements such as having electrical safety, a simple structure, and sterilization capability should be considered. The current study sought to develop a grasper that can measure grasping force at the tip, based on a previously proposed tactile sensing method using acoustic reflection. This method can satisfy the requirements for surgical applications because it has no electrical element within the part that is inserted into the patient’s body. We integrated our acoustic tactile sensing method into a conventional grasping forceps instrument. We designed the instrument so that acoustic cavities within a grasping arm and a fork sleeve were connected by a small cavity in a pivoting joint. In this design, when the angle between the two grasping arms changes during grasping, the total length and local curvature of the acoustic cavity remain unchanged. Thus, the grasping force can be measured regardless of the orientation of the grasping arm. We developed a prototype sensorized grasper based on our proposed design. Fundamental tests revealed that sensor output increased with increasing contact force applied to the grasping arm, and the angle of the grasping arm did not significantly affect the sensor output. Moreover, the results of a grasping test, in which objects with different softness characteristics were held by the grasper, revealed that the grasping force could be appropriately adjusted to handle different objects on the basis of sensor output. Experimental results demonstrated that the prototype grasper can measure grasping force, enabling safe and stable grasping.", "title": "" }, { "docid": "76cd577955213ce193dcc5c821e05cf6", "text": "Although much biological research depends upon species diagnoses, taxonomic expertise is collapsing. 
We are convinced that the sole prospect for a sustainable identification capability lies in the construction of systems that employ DNA sequences as taxon 'barcodes'. We establish that the mitochondrial gene cytochrome c oxidase I (COI) can serve as the core of a global bioidentification system for animals. First, we demonstrate that COI profiles, derived from the low-density sampling of higher taxonomic categories, ordinarily assign newly analysed taxa to the appropriate phylum or order. Second, we demonstrate that species-level assignments can be obtained by creating comprehensive COI profiles. A model COI profile, based upon the analysis of a single individual from each of 200 closely allied species of lepidopterans, was 100% successful in correctly identifying subsequent specimens. When fully developed, a COI identification system will provide a reliable, cost-effective and accessible solution to the current problem of species identification. Its assembly will also generate important new insights into the diversification of life and the rules of molecular evolution.", "title": "" }, { "docid": "abddff6b429cff002070eee4a18e498b", "text": "A receiver-oriented perspective on capacity scaling in mobile ad hoc networks (MANETs) suggests that broadcast and multicast may be more natural traffic models for these systems than the random unicast pairs typically considered. Furthermore, traffic loads for the most promising near-term application for MANET technology — namely, networking at the tactical edge — are largely broadcast. The development of novel MANET approaches targeting broadcast first and foremost, however, has not been reported. Instead, existing system designs largely rely on fundamentally link-based, layered architectures, which are best suited to unicast traffic. In response to the demands of tactical edge communications, TrellisWare Technologies, Inc. developed a MANET system based on Barrage Relay Networks (BRNs). BRNs utilize an autonomous cooperative communication scheme that eliminates the need for link-level collision avoidance. The fundamental physical layer resource in BRNs is not a link, but a portion in space and time of a cooperative, multihop transport fabric. While initial hardware prototypes of BRNs were being refined into products by TrellisWare, a number of concepts similar to those that underlie BRNs were reported independently in the literature. That TrellisWare's tactical edge MANET system design and academic research reconsidering the standard networking approach for MANETs arrived at similar design concepts lends credence to the value of these emerging wireless network approaches.", "title": "" }, { "docid": "fbb71a8a7630350a7f33f8fb90b57965", "text": "As the Web of Things (WoT) broadens real world interaction via the internet, there is an increasing need for a user centric model for managing and interacting with real world objects. We believe that online social networks can provide that capability and can enhance existing and future WoT platforms leading to a Social WoT. As both social overlays and user interface containers, online social networks (OSNs) will play a significant role in the evolution of the web of things. As user interface containers and social overlays, they can be used by end users and applications as an on-line entry point for interacting with things, both receiving updates from sensors and controlling things. 
Conversely, access to user identity and profile information, content and social graphs can be useful in physical social settings like cafés. In this paper we describe some of the key features of social networks used by existing social WoT systems. We follow this with a discussion of open research questions related to integration of OSNs and how OSNs may evolve to be more suitable for integration with places and things. Several ongoing projects in our lab leverage OSNs to connect places and things to online communities.", "title": "" }, { "docid": "9d95535e6aee8acb6a613211223c3341", "text": "We report a method to convert discrete representations of molecules to and from a multidimensional continuous representation. This model allows us to generate new molecules for efficient exploration and optimization through open-ended spaces of chemical compounds. A deep neural network was trained on hundreds of thousands of existing chemical structures to construct three coupled functions: an encoder, a decoder, and a predictor. The encoder converts the discrete representation of a molecule into a real-valued continuous vector, and the decoder converts these continuous vectors back to discrete molecular representations. The predictor estimates chemical properties from the latent continuous vector representation of the molecule. Continuous representations of molecules allow us to automatically generate novel chemical structures by performing simple operations in the latent space, such as decoding random vectors, perturbing known chemical structures, or interpolating between molecules. Continuous representations also allow the use of powerful gradient-based optimization to efficiently guide the search for optimized functional compounds. We demonstrate our method in the domain of drug-like molecules and also in a set of molecules with fewer that nine heavy atoms.", "title": "" }, { "docid": "dc22d5dbb59b7e9b4a857e1e3dddd234", "text": "Issuer Delisting; Order Granting the Application of General Motors Corporation to Withdraw its Common Stock, $1 2/3 par value, from Listing and Registration on the Chicago Stock Exchange, Inc. File No. 1-00043 April 4, 2006 On March 2, 2006, General Motors Corporation, a Delaware corporation (\"Issuer\"), filed an application with the Securities and Exchange Commission (\"Commission\"), pursuant to Section 12(d) of the Securities Exchange Act of 1934 (\"Act\") and Rule 12d2-2(d) thereunder, to withdraw its common stock, $1 2/3 par value (\"Security\"), from listing and registration on the Chicago Stock Exchange, Inc. (\"CHX\"). Notice of such application requesting comments was published in the Federal Register on March 10, 2006. No comments were received. As discussed below, the Commission is granting the application. The Administrative Committee of the Issuer's Board of Directors (\"Board\") approved a resolution on September 9, 2005, to delist the Security from listing and registration on CHX. The Issuer stated that the purposes for seeking to delist the Security from CHX are to avoid dual regulatory oversight and dual listing fees. The Security is traded, and will continue to trade, on the New York Stock Exchange (\"NYSE\"). In addition, the Issuer stated that CHX advised the Issuer that the Security will continue to trade on CHX under unlisted trading privileges. The Issuer stated in its application that it has complied with applicable rules of CHX by providing CHX with the required documents governing the withdrawal of securities from listing and registration on CHX. 
The Issuer's application relates solely to the", "title": "" }, { "docid": "cc8cdd520eae69b0faa842cebe07ffb1", "text": "Warehouse-scale data center operators need much-higher-bandwidth intra-data center networks (DCNs) to sustain the increase of network traffic due to cloud computing and other emerging web applications. Current DCNs based on commodity switches require excessive amounts of power to face this traffic increase. Optical intra-DCN interconnection networks have recently emerged as a promising solution that can provide higher throughput while consuming less power. This article provides an update on recent developments in the field of ultra-highcapacity optical interconnects for intra-DCN communication. Several recently proposed architectures and technologies are examined and compared, while future trends and research challenges are outlined.", "title": "" }, { "docid": "b923857ed8a59315c66a7c1411435928", "text": "In recent years, Bitcoin, a peer-to-peer network based crypto digital currency, has attracted a lot of attentions from the media, the academia, and the general public. A user in Bitcoin network can create Bitcoins by packing and verifying new transactions in the network using their computation power. Driven by the price surge of Bitcoin, users are increasingly investing on expensive specialized hardware for Bitcoin mining. To obtain steady payouts, users also pool their computation resources to conduct pool mining. In this paper, we study the evolution of Bitcoin miners by analyzing the complete transaction blockchain. We characterize how the productivity, computation power and transaction activity of miners evolve over time. We also conduct an in-depth study on the largest mining pool F2Pool. We show how it grows over time and how computation power is distributed among its miners. Finally, we build a simple economic model to explain the evolution of Bitcoin miners.", "title": "" }, { "docid": "6c284026d7f798377c2f7c7ba3b57501", "text": "In this paper, for the first time, an InAs/Si heterojunction double-gate tunnel FET (H-DGTFET) has been analyzed for low-power high-frequency applications. For this purpose, the suitability of the device for low-power applications is investigated by extracting the threshold voltage of the device using a transconductance change method and a constant current method. Furthermore, the effects of uniform and Gaussian drain doping profile on dc characteristics and analog/RF performances are investigated for different channel lengths. A highly doped layer is placed in the channel near the source-channel junction, and this decreases the width of the depletion region, which improves the ON-current (ION) and the RF performance. Furthermore, the circuit-level performance assessment is done by implementing a common source amplifier using the H-DGTFET; a 3-dB roll-off frequency of 230.11 GHz and a unity-gain frequency of 5.4 THz were achieved.", "title": "" }, { "docid": "a52bac75c0b605c6205572a2c35444bb", "text": "This article reviews five approximate statistical tests for determining whether one learning algorithm outperforms another on a particular learning task. These test sare compared experimentally to determine their probability of incorrectly detecting a difference when no difference exists (type I error). 
Two widely used statistical tests are shown to have high probability of type I error in certain situations and should never be used: a test for the difference of two proportions and a paired-differences t test based on taking several random train-test splits. A third test, a paired-differences t test based on 10-fold cross-validation, exhibits somewhat elevated probability of type I error. A fourth test, McNemar's test, is shown to have low type I error. The fifth test is a new test, 5×2cv, based on five iterations of twofold cross-validation. Experiments show that this test also has acceptable type I error. The article also measures the power (ability to detect algorithm differences when they do exist) of these tests. The cross-validated t test is the most powerful. The 5×2cv test is shown to be slightly more powerful than McNemar's test. The choice of the best test is determined by the computational cost of running the learning algorithm. For algorithms that can be executed only once, McNemar's test is the only test with acceptable type I error. For algorithms that can be executed 10 times, the 5×2cv test is recommended, because it is slightly more powerful and because it directly measures variation due to the choice of training set.", "title": "" }, { "docid": "8a8d8b029a23d0d20ff9bd40fe0420bc", "text": "Humans interact with the environment through sensory and motor acts. Some of these interactions require synchronization among two or more individuals. Multiple-trial designs, which we have used in past work to study interbrain synchronization in the course of joint action, constrain the range of observable interactions. To overcome the limitations of multiple-trial designs, we conducted single-trial analyses of electroencephalography (EEG) signals recorded from eight pairs of guitarists engaged in musical improvisation. We identified hyper-brain networks based on a complex interplay of different frequencies. The intra-brain connections primarily involved higher frequencies (e.g., beta), whereas inter-brain connections primarily operated at lower frequencies (e.g., delta and theta). The topology of hyper-brain networks was frequency-dependent, with a tendency to become more regular at higher frequencies. We also found hyper-brain modules that included nodes (i.e., EEG electrodes) from both brains. Some of the observed network properties were related to musical roles during improvisation. Our findings replicate and extend earlier work and point to mechanisms that enable individuals to engage in temporally coordinated joint action.", "title": "" }, { "docid": "ae0d63126ff55961533dc817554bcb82", "text": "This paper presents a novel bipedal robot concept and prototype that takes inspiration from humanoids but features fundamental differences that drastically improve its agility and stability while reducing its complexity and cost. This Non-Anthropomorphic Bipedal Robotic System (NABiRoS) modifies the traditional bipedal form by aligning the legs in the sagittal plane and adding compliance to the feet. The platform is comparable in height to a human, but weighs much less because of its lightweight architecture and novel leg configuration. The inclusion of the compliant element showed immense improvements in the stability and robustness of walking gaits on the prototype, allowing the robot to remain stable during locomotion without any inertial feedback control.
NABiRoS was able to achieve walking speeds of up to 0.75 km/h (0.21 m/s) using a simple pre-processed ZMP-based gait and a positioning accuracy of +/- 0.04 m with a pre-processed quasi-static algorithm.", "title": "" } ]
scidocsrr
6c46d77647e1724950cdd984f79f8c63
Asymmetric and Context-Dependent Semantic Similarity among Ontology Instances
[ { "docid": "a69747683329667c0d697f3127fa58c1", "text": "Clustering is the process of grouping a set of objects into classes of similar objects. Although definitions of similarity vary from one clustering model to another, in most of these models the concept of similarity is based on distances, e.g., Euclidean distance or cosine distance. In other words, similar objects are required to have close values on at least a set of dimensions. In this paper, we explore a more general type of similarity. Under the pCluster model we proposed, two objects are similar if they exhibit a coherent pattern on a subset of dimensions. For instance, in DNA microarray analysis, the expression levels of two genes may rise and fall synchronously in response to a set of environmental stimuli. Although the magnitude of their expression levels may not be close, the patterns they exhibit can be very much alike. Discovery of such clusters of genes is essential in revealing significant connections in gene regulatory networks. E-commerce applications, such as collaborative filtering, can also benefit from the new model, which captures not only the closeness of values of certain leading indicators but also the closeness of (purchasing, browsing, etc.) patterns exhibited by the customers. Our paper introduces an effective algorithm to detect such clusters, and we perform tests on several real and synthetic data sets to show its effectiveness.", "title": "" }, { "docid": "fee50f8ab87f2b97b83ca4ef92f57410", "text": "Ontologies now play an important role for many knowledge-intensive applications for which they provide a source of precisely defined terms. However, with their wide-spread usage there come problems concerning their proliferation. Ontology engineers or users frequently have a core ontology that they use, e.g., for browsing or querying data, but they need to extend it with, adapt it to, or compare it with the large set of other ontologies. For the task of detecting and retrieving relevant ontologies, one needs means for measuring the similarity between ontologies. We present a set of ontology similarity measures and a multiple-phase empirical evaluation.", "title": "" } ]
[ { "docid": "035341c7862f31eb6a4de0126ae569b5", "text": "Understanding how the human brain is able to efficiently perceive and understand a visual scene is still a field of ongoing research. Although many studies have focused on the design and optimization of neural networks to solve visual recognition tasks, most of them either lack neurobiologically plausible learning rules or decision-making processes. Here we present a large-scale model of a hierarchical spiking neural network (SNN) that integrates a low-level memory encoding mechanism with a higher-level decision process to perform a visual classification task in real-time. The model consists of Izhikevich neurons and conductance-based synapses for realistic approximation of neuronal dynamics, a spike-timing-dependent plasticity (STDP) synaptic learning rule with additional synaptic dynamics for memory encoding, and an accumulator model for memory retrieval and categorization. The full network, which comprised 71,026 neurons and approximately 133 million synapses, ran in real-time on a single off-the-shelf graphics processing unit (GPU). The network was constructed on a publicly available SNN simulator that supports general-purpose neuromorphic computer chips. The network achieved 92% correct classifications on MNIST in 100 rounds of random sub-sampling, which is comparable to other SNN approaches and provides a conservative and reliable performance metric. Additionally, the model correctly predicted reaction times from psychophysical experiments. Because of the scalability of the approach and its neurobiological fidelity, the current model can be extended to an efficient neuromorphic implementation that supports more generalized object recognition and decision-making architectures found in the brain.", "title": "" }, { "docid": "f5cee800ad1a2cef72e9f24710cce48b", "text": "LiNi0.5Mn1.5O4 nanorods wrapped with graphene nanosheets have been prepared and investigated as high energy and high power cathode material for lithium-ion batteries. The structural characterization by X-ray diffraction, Raman spectroscopy, and Fourier transform infrared spectroscopy indicates the LiNi0.5Mn1.5O4 nanorods prepared from β-MnO2 nanowires have ordered spinel structure with P4332 space group. The morphological characterization by scanning electron microscopy and transmission electron microscopy reveals that the LiNi0.5Mn1.5O4 nanorods of 100-200 nm in diameter are well dispersed and wrapped in the graphene nanosheets for the composite. Benefiting from the highly conductive matrix provided by graphene nanosheets and one-dimensional nanostructure of the ordered spinel, the composite electrode exhibits superior rate capability and cycling stability. As a result, the LiNi0.5Mn1.5O4-graphene composite electrode delivers reversible capacities of 127.6 and 80.8 mAh g(-1) at 0.1 and 10 C, respectively, and shows 94% capacity retention after 200 cycles at 1 C, greatly outperforming the bare LiNi0.5Mn1.5O4 nanorod cathode. The outstanding performance of the LiNi0.5Mn1.5O4-graphene composite makes it promising as cathode material for developing high energy and high power lithium-ion batteries.", "title": "" }, { "docid": "358a8ab77d93a06fc43c878c1e79d2a7", "text": "Learning-based hashing is a leading approach of approximate nearest neighbor search for large-scale image retrieval. 
In this paper, we develop a deep supervised hashing method for multi-label image retrieval, in which we propose to learn a binary “mask” map that can identify the approximate locations of objects in an image, so that we use this binary “mask” map to obtain length-limited hash codes which mainly focus on an image’s objects but ignore the background. The proposed deep architecture consists of four parts: 1) a convolutional sub-network to generate effective image features; 2) a binary “mask” sub-network to identify image objects’ approximate locations; 3) a weighted average pooling operation based on the binary “mask” to obtain feature representations and hash codes that pay most attention to foreground objects but ignore the background; and 4) the combination of a triplet ranking loss designed to preserve relative similarities among images and a cross entropy loss defined on image labels. We conduct comprehensive evaluations on four multi-label image data sets. The results indicate that the proposed hashing method achieves superior performance gains over the state-of-the-art supervised or unsupervised hashing baselines.", "title": "" }, { "docid": "da061d5d22132b070af0d9ed99a85092", "text": "Current metrics-based approaches to visualise unfamiliar software systems face two key limitations: (1) They are limited in terms of the number of dimensions that can be projected, and (2) they use fixed layout algorithms where the resulting positions of entities can be vulnerable to mis-interpretation. In this paper we show how computer games technology can be used to address these problems. We present the PhysVis software exploration system, where software metrics can be variably mapped to parameters of a physical model and displayed via a particle system. Entities can be imbued with attributes such as mass, gravity, and (for relationships) strength or springiness, alongside traditional attributes such as position, colour and size. The resulting visualisation is a dynamic scene; the relative positions of entities are not determined by a fixed layout algorithm, but by intuitive physical notions such as gravity, mass, and drag. The implementation is openly available, and we evaluate it on a selection of visualisation tasks for two openly-available software systems.", "title": "" }, { "docid": "872ccba4f0a0ba6a57500d4b73384ce1", "text": "This research demonstrates the application of association rule mining to spatio-temporal data. Association rule mining seeks to discover associations among transactions encoded in a database. An association rule takes the form A → B where A (the antecedent) and B (the consequent) are sets of predicates. A spatio-temporal association rule occurs when there is a spatio-temporal relationship in the antecedent or consequent of the rule. As a case study, association rule mining is used to explore the spatial and temporal relationships among a set of variables that characterize socioeconomic and land cover change in the Denver, Colorado, USA region from 1970–1990. Geographic Information Systems (GIS)-based data pre-processing is used to integrate diverse data sets, extract spatio-temporal relationships, classify numeric data into ordinal categories, and encode spatio-temporal relationship data in tabular format for use by conventional (non-spatio-temporal) association rule mining software. Multiple level association rule mining is supported by the development of a hierarchical classification scheme (concept hierarchy) for each variable. 
Further research in spatiotemporal association rule mining should address issues of data integration, data classification, the representation and calculation of spatial relationships, and strategies for finding ‘interesting’ rules.", "title": "" }, { "docid": "2891ce3327617e9e957488ea21e9a20c", "text": "Recently, remote healthcare systems have received increasing attention in the last decade, explaining why intelligent systems with physiology signal monitoring for e-health care are an emerging area of development. Therefore, this study adopts a system which includes continuous collection and evaluation of multiple vital signs, long-term healthcare, and a cellular connection to a medical center in emergency case and it transfers all acquired raw data by the internet in normal case. The proposed system can continuously acquire four different physiological signs, for example, ECG, SpO2, temperature, and blood pressure and further relayed them to an intelligent data analysis scheme to diagnose abnormal pulses for exploring potential chronic diseases. The proposed system also has a friendly web-based interface for medical staff to observe immediate pulse signals for remote treatment. Once abnormal event happened or the request to real-time display vital signs is confirmed, all physiological signs will be immediately transmitted to remote medical server through both cellular networks and internet. Also data can be transmitted to a family member's mobile phone or doctor's phone through GPRS. A prototype of such system has been successfully developed and implemented, which will offer high standard of healthcare with a major reduction in cost for our society.", "title": "" }, { "docid": "f472388e050e80837d2d5129ba8a358b", "text": "Voice control has emerged as a popular method for interacting with smart-devices such as smartphones, smartwatches etc. Popular voice control applications like Siri and Google Now are already used by a large number of smartphone and tablet users. A major challenge in designing a voice control application is that it requires continuous monitoring of user?s voice input through the microphone. Such applications utilize hotwords such as \"Okay Google\" or \"Hi Galaxy\" allowing them to distinguish user?s voice command and her other conversations. A voice control application has to continuously listen for hotwords which significantly increases the energy consumption of the smart-devices.\n To address this energy efficiency problem of voice control, we present AccelWord in this paper. AccelWord is based on the empirical evidence that accelerometer sensors found in today?s mobile devices are sensitive to user?s voice. We also demonstrate that the effect of user?s voice on accelerometer data is rich enough so that it can be used to detect the hotwords spoken by the user. To achieve the goal of low energy cost but high detection accuracy, we combat multiple challenges, e.g. how to extract unique signatures of user?s speaking hotwords only from accelerometer data and how to reduce the interference caused by user?s mobility.\n We finally implement AccelWord as a standalone application running on Android devices. Comprehensive tests show AccelWord has hotword detection accuracy of 85% in static scenarios and 80% in mobile scenarios. 
Compared to the microphone based hotword detection applications such as Google Now and Samsung S Voice, AccelWord is 2 times more energy efficient while achieving the accuracy of 98% and 92% in static and mobile scenarios respectively.", "title": "" }, { "docid": "fba55845801b1d145ff45b47efce8155", "text": "This paper presents a technique for substantially reducing the noise of a CMOS low noise amplifier implemented in the inductive source degeneration topology. The effects of the gate induced current noise on the noise performance are taken into account, and the total output noise is strongly reduced by inserting a capacitance of appropriate value in parallel with the amplifying MOS transistor of the LNA. As a result, very low noise figures become possible already at very low power consumption levels.", "title": "" }, { "docid": "013e96c212f7f58698acdae0adfcf374", "text": "Since our ability to engineer biological systems is directly related to our ability to control gene expression, a central focus of synthetic biology has been to develop programmable genetic regulatory systems. Researchers are increasingly turning to RNA regulators for this task because of their versatility, and the emergence of new powerful RNA design principles. Here we review advances that are transforming the way we use RNAs to engineer biological systems. First, we examine new designable RNA mechanisms that are enabling large libraries of regulators with protein-like dynamic ranges. Next, we review emerging applications, from RNA genetic circuits to molecular diagnostics. Finally, we describe new experimental and computational tools that promise to accelerate our understanding of RNA folding, function and design.", "title": "" }, { "docid": "b6bd380108803bec62dae716d9e0a83e", "text": "With the advent of statistical modeling in sports, predicting the outcome of a game has been established as a fundamental problem. Cricket is one of the most popular team games in the world. With this article, we embark on predicting the outcome of a One Day International (ODI) cricket match using a supervised learning approach from a team composition perspective. Our work suggests that the relative team strength between the competing teams forms a distinctive feature for predicting the winner. Modeling the team strength boils down to modeling individual player’s batting and bowling performances, forming the basis of our approach. We use career statistics as well as the recent performances of a player to model him. Player independent factors have also been considered in order to predict the outcome of a match. We show that the k-Nearest Neighbor (kNN) algorithm yields better results as compared to other classifiers.", "title": "" }, { "docid": "8c24c76b78f6b658d944cf5fa0499343", "text": "Anomaly detection is a classical problem in computer vision, namely the determination of the normal from the abnormal when datasets are highly biased towards one class (normal) due to the insufficient sample size of the other class (abnormal). While this can be addressed as a supervised learning problem, a significantly more challenging problem is that of detecting the unknown/unseen anomaly case that takes us instead into the space of a one-class, semi-supervised learning paradigm. We introduce such a novel anomaly detection model, by using a conditional generative adversarial network that jointly learns the generation of high-dimensional image space and the inference of latent space. 
Employing encoder-decoder-encoder sub-networks in the generator network enables the model to map the input image to a lower dimension vector, which is then used to reconstruct the generated output image. The use of the additional encoder network maps this generated image to its latent representation. Minimizing the distance between these images and the latent vectors during training aids in learning the data distribution for the normal samples. As a result, a larger distance metric from this learned data distribution at inference time is indicative of an outlier from that distribution — an anomaly. Experimentation over several benchmark datasets, from varying domains, shows the model efficacy and superiority over previous state-of-the-art approaches.", "title": "" }, { "docid": "a2105c4e70bf29dba01e30066667a22c", "text": "Background: Acne vulgaris is a common skin disease that affects not only teenagers but also the general population. Although acne is not physically disabling, its psychological impact can be striking, contributing to low self-esteem, depression, and anxiety. As a result, there is a significant demand for effective acne therapies. Antihistamine is a widely used medication to treat several allergic skin conditions and yet it also has been found to decrease complications of acne and improve acne symptoms. For the severe cystic acne vulgaris, oral retinoids such as isotretinoin is the primary treatment; however, health care providers hesitate to prescribe isotretinoin due to its adverse drug reactions. On the other hand, antihistamine is well known by its safe and minimal side effects. Can an antihistamine intervention in standardized treatment of acne vulgaris significantly impact the improvement of acne symptoms and reduce sebum production? Methods: An exhaustive search was conducted by MEDLINE-OVID, CINAHL, UptoDate, Web of Science, Google scholar, MEDLINE-PubMed, Clinicalkey, and ProQuest by using keywords: acne vulgaris and antihistamine. Relevant articles were assessed for quality using GRADE. Results: After the exhaustive search, two studies met the inclusion criteria and eligibility criteria. Effect of antihistamine as an adjuvant treatment of isotretinoin in acne:a randomized, controlled comparative study contains the comparison of 20 patients with moderate acne are treated with isotretinoin and another 20 patients with moderate acne are treated with additional antihistamine. Identification of Histamine Receptors and Reduction of Squalene Levels by an Antihistamine in Sebocytes was conducted on human tissue to verify the decrease of sebum production by the antihistamine’s effect. Conclusion: Both studies demonstrate the usefulness of an histamine antagonist in reducing sebum production and improving acne symptoms. Due to its low cost and safety, a recommendation can be made for antihistamine to treat acne vulgaris as an adjuvant therapy to standardized treatment. Degree Type Capstone Project Degree Name Master of Science in Physician Assistant Studies First Advisor David Keene PA-C", "title": "" }, { "docid": "64ce725037b72921b979583f6fdc4f27", "text": "We describe an approach to object retrieval which searches for and localizes all the occurrences of an object in a video, given a query image of the object. The object is represented by a set of viewpoint invariant region descriptors so that recognition can proceed successfully despite changes in viewpoint, illumination and partial occlusion. 
The temporal continuity of the video within a shot is used to track the regions in order to reject those that are unstable. Efficient retrieval is achieved by employing methods from statistical text retrieval, including inverted file systems, and text and document frequency weightings. This requires a visual analogy of a word which is provided here by vector quantizing the region descriptors. The final ranking also depends on the spatial layout of the regions. The result is that retrieval is immediate, returning a ranked list of shots in the manner of Google. We report results for object retrieval on the full length feature films ‘Groundhog Day’ and ‘Casablanca’.", "title": "" }, { "docid": "2016b2279a473379c4d63a50ae27fa5b", "text": "Twitter has attracted hundred millions of users to share and disseminate most up-to-date information. However, the noisy and short nature of tweets makes many applications in information retrieval (IR) and natural language processing (NLP) challenging. Recently, segment-based tweet representation has demonstrated effectiveness in named entity recognition (NER) and event detection from tweet streams. To split tweets into meaningful phrases or segments, the previous work is purely based on external knowledge bases, which ignores the rich local context information embedded in the tweets. In this paper, we propose a novel framework for tweet segmentation in a batch mode, called HybridSeg. HybridSeg incorporates local context knowledge with global knowledge bases for better tweet segmentation. HybridSeg consists of two steps: learning from off-the-shelf weak NERs and learning from pseudo feedback. In the first step, the existing NER tools are applied to a batch of tweets. The named entities recognized by these NERs are then employed to guide the tweet segmentation process. In the second step, HybridSeg adjusts the tweet segmentation results iteratively by exploiting all segments in the batch of tweets in a collective manner. Experiments on two tweet datasets show that HybridSeg significantly improves tweet segmentation quality compared with the state-of-the-art algorithm. We also conduct a case study by using tweet segments for the task of named entity recognition from tweets. The experimental results demonstrate that HybridSeg significantly benefits the downstream applications.", "title": "" }, { "docid": "67d704317471c71842a1dfe74ddd324a", "text": "Agile software development methods have caught the attention of software engineers and researchers worldwide. Scientific research is yet scarce. This paper reports results from a study, which aims to organize, analyze and make sense out of the dispersed field of agile software development methods. The comparative analysis is performed using the method's life-cycle coverage, project management support, type of practical guidance, fitness-for-use and empirical evidence as the analytical lenses. The results show that agile software development methods, without rationalization, cover certain/different phases of the software development life-cycle and most of the them do not offer adequate support for project management. 
Yet, many methods still attempt to strive for universal solutions (as opposed to situation appropriate) and the empirical evidence is still very limited Based on the results, new directions are suggested In principal it is suggested to place emphasis on methodological quality -- not method quantity.", "title": "" }, { "docid": "c9e16e74152636203e07e5f2526ea4d3", "text": "The recommendations provided in this document provide a data-supported approach to the diagnosis, staging and treatment of patients diagnosed with hepatocellular carcinoma (HCC). They are based on the following: (a) formal review and analysis of the recently-published world literature on the topic (Medline search through early 2010); (b) American College of Physicians Manual for Assessing Health Practices and Designing Practice Guidelines; (c) guideline policies, including the AASLD Policy on the Development and Use of Practice Guidelines and the American Gastroenterology Association Policy Statement on Guidelines; (d) the experience of the authors. These recommendations suggest preferred approaches to the diagnostic, therapeutic, and preventive aspects of care. In an attempt to characterize the quality of evidence supporting recommendations, the Practice Guidelines Committee of the American Association for Study of Liver Disease (AASLD) requires a category to be assigned and reported with each recommendation (Table 1). These recommendations are fully endorsed by the American Association for the Study of Liver Diseases.", "title": "" }, { "docid": "32b2cd6b63c6fc4de5b086772ef9d319", "text": "Link prediction for knowledge graphs is the task of predicting missing relationships between entities. Previous work on link prediction has focused on shallow, fast models which can scale to large knowledge graphs. However, these models learn less expressive features than deep, multi-layer models – which potentially limits performance. In this work we introduce ConvE, a multi-layer convolutional network model for link prediction, and report state-of-the-art results for several established datasets. We also show that the model is highly parameter efficient, yielding the same performance as DistMult and R-GCN with 8x and 17x fewer parameters. Analysis of our model suggests that it is particularly effective at modelling nodes with high indegree – which are common in highlyconnected, complex knowledge graphs such as Freebase and YAGO3. In addition, it has been noted that the WN18 and FB15k datasets suffer from test set leakage, due to inverse relations from the training set being present in the test set – however, the extent of this issue has so far not been quantified. We find this problem to be severe: a simple rule-based model can achieve state-of-the-art results on both WN18 and FB15k. To ensure that models are evaluated on datasets where simply exploiting inverse relations cannot yield competitive results, we investigate and validate several commonly used datasets – deriving robust variants where necessary. We then perform experiments on these robust datasets for our own and several previously proposed models, and find that ConvE achieves state-of-the-art Mean Reciprocal Rank across all datasets.", "title": "" }, { "docid": "9ac8ce316225509a0fb644001d960535", "text": "The display of statistical information is ubiquitous in all fields of visualization. Whether aided by graphs, tables, plots, or integrated into the visualizations themselves, understanding the best way to convey statistical information is important. 
Highlighting the box plot, a survey of traditional methods for expressing specific statistical characteristics of data is presented. Reviewing techniques for the expression of statistical measures will be increasingly important as data quality, confidence and uncertainty are becoming influential characteristics to integrate into visualizations.", "title": "" }, { "docid": "dabfcb6d1b2df628113a8f68ed0555a5", "text": "With the fast-growing demand of location-based services in indoor environments, indoor positioning based on fingerprinting has attracted significant interest due to its high accuracy. In this paper, we present a novel deep-learning-based indoor fingerprinting system using channel state information (CSI), which is termed DeepFi. Based on three hypotheses on CSI, the DeepFi system architecture includes an offline training phase and an online localization phase. In the offline training phase, deep learning is utilized to train all the weights of a deep network as fingerprints. Moreover, a greedy learning algorithm is used to train the weights layer by layer to reduce complexity. In the online localization phase, we use a probabilistic method based on the radial basis function to obtain the estimated location. Experimental results are presented to confirm that DeepFi can effectively reduce location error, compared with three existing methods in two representative indoor environments.", "title": "" }, { "docid": "1d82d994635a0bd0137febd74b8c3835", "text": "research A. Agrawal J. Basak V. Jain R. Kothari M. Kumar P. A. Mittal N. Modani K. Ravikumar Y. Sabharwal R. Sureka Marketing decisions are typically made on the basis of research conducted using direct mailings, mall intercepts, telephone interviews, focused group discussion, and the like. These methods of marketing research can be time-consuming and expensive, and can require a large amount of effort to ensure accurate results. This paper presents a novel approach for conducting online marketing research based on several concepts such as active learning, matched control and experimental groups, and implicit and explicit experiments. These concepts, along with the opportunity provided by the increasing numbers of online shoppers, enable rapid, systematic, and cost-effective marketing research.", "title": "" } ]
scidocsrr
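The indoor-positioning passage above describes an online phase that turns fingerprint similarities into a location estimate using a radial basis function. Below is a hedged sketch of that general idea only; it is not the DeepFi implementation, and the distance measure, `sigma`, and all function names are assumptions made for illustration.

```python
# Minimal sketch of RBF-weighted location estimation from fingerprint distances.
# Not the DeepFi implementation; sigma and the Euclidean distance are assumptions.
import numpy as np

def estimate_location(query_fp, reference_fps, reference_positions, sigma=1.0):
    """Weight each surveyed position by an RBF of its fingerprint distance to the query."""
    dists = np.array([np.linalg.norm(np.asarray(query_fp) - np.asarray(fp))
                      for fp in reference_fps])
    weights = np.exp(-(dists ** 2) / (2 * sigma ** 2))
    weights /= weights.sum()                      # normalize into a weight distribution
    return weights @ np.asarray(reference_positions, dtype=float)  # weighted (x, y) estimate
```

Smaller values of `sigma` concentrate the estimate on the closest-matching fingerprints, while larger values average over more of the survey points.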
be820c1c96c30eb441d557fa8d540115
Incorporating Soft Variables Into System Dynamics Models : A Suggested Method and Basis for Ongoing Research
[ { "docid": "577bdd2d53ddac7d59b7e1f8655bcecb", "text": "Thoughtful leaders increasingly recognize that we are not only failing to solve the persistent problems we face, but are in fact causing them. System dynamics is designed to help avoid such policy resistance and identify high-leverage policies for sustained improvement. What does it take to be an effective systems thinker, and to teach system dynamics fruitfully? Understanding complex systems requires mastery of concepts such as feedback, stocks and flows, time delays, and nonlinearity. Research shows that these concepts are highly counterintuitive and poorly understood. It also shows how they can be taught and learned. Doing so requires the use of formal models and simulations to test our mental models and develop our intuition about complex systems. Yet, though essential, these concepts and tools are not sufficient. Becoming an effective systems thinker also requires the rigorous and disciplined use of scientific inquiry skills so that we can uncover our hidden assumptions and biases. It requires respect and empathy for others and other viewpoints. Most important, and most difficult to learn, systems thinking requires understanding that all models are wrong and humility about the limitations of our knowledge. Such humility is essential in creating an environment in which we can learn about the complex systems in which we are embedded and work effectively to create the world we truly desire. The paper is based on the talk the author delivered at the 2002 International System Dynamics Conference upon presentation of the Jay W. Forrester Award. Copyright  2002 John Wiley & Sons, Ltd.", "title": "" }, { "docid": "7b46cf9aa63423485f4f48d635cb8f5c", "text": "It sounds good when knowing the multiple criteria decision analysis an integrated approach in this website. This is one of the books that many people looking for. In the past, many people ask about this book as their favourite book to read and collect. And now, we present hat you need quickly. It seems to be so happy to offer you this famous book. It will not become a unity of the way for you to get amazing benefits at all. But, it will serve something that will let you get the best time and moment to spend for reading the book.", "title": "" } ]
[ { "docid": "4e3d8acaa6bb7ae628e2fd6b8ed99d29", "text": "This paper proposes the use of multiple thesaurus types for query expansion in information retrieval. Hand-crafted thesaurus, corpus-based co-occurrence-based thesaurus and syntactic-relation-based thesaurus are combined and used as a tool for query expansion. A simple word sense disambiguation is performed to avoid misleading expansion terms. Experiments using TREC-7 collection proved that this method could improve the information retrieval performance significantly. Failure analysis was done on the cases in which the proposed method fail to improve the retrieval effectiveness. We found that queries containing negative statements and multiple aspects might cause problems in the proposed method.", "title": "" }, { "docid": "443fb61dbb3cc11060104ed6ed0c645c", "text": "An interactive framework for soft segmentation and matting of natural images and videos is presented in this paper. The proposed technique is based on the optimal, linear time, computation of weighted geodesic distances to user-provided scribbles, from which the whole data is automatically segmented. The weights are based on spatial and/or temporal gradients, considering the statistics of the pixels scribbled by the user, without explicit optical flow or any advanced and often computationally expensive feature detectors. These could be naturally added to the proposed framework as well if desired, in the form of weights in the geodesic distances. An automatic localized refinement step follows this fast segmentation in order to further improve the results and accurately compute the corresponding matte function. Additional constraints into the distance definition permit to efficiently handle occlusions such as people or objects crossing each other in a video sequence. The presentation of the framework is complemented with numerous and diverse examples, including extraction of moving foreground from dynamic background in video, natural and 3D medical images, and comparisons with the recent literature.", "title": "" }, { "docid": "c2ed6ac38a6014db73ba81dd898edb97", "text": "The ability of personality traits to predict important life outcomes has traditionally been questioned because of the putative small effects of personality. In this article, we compare the predictive validity of personality traits with that of socioeconomic status (SES) and cognitive ability to test the relative contribution of personality traits to predictions of three critical outcomes: mortality, divorce, and occupational attainment. Only evidence from prospective longitudinal studies was considered. In addition, an attempt was made to limit the review to studies that controlled for important background factors. Results showed that the magnitude of the effects of personality traits on mortality, divorce, and occupational attainment was indistinguishable from the effects of SES and cognitive ability on these outcomes. 
These results demonstrate the influence of personality traits on important life outcomes, highlight the need to more routinely incorporate measures of personality into quality of life surveys, and encourage further research about the developmental origins of personality traits and the processes by which these traits influence diverse life outcomes.", "title": "" }, { "docid": "38e7a36e4417bff60f9ae0dbb7aaf136", "text": "Asynchronous implementation techniques, which measure logic delays at runtime and activate registers accordingly, are inherently more robust than their synchronous counterparts, which estimate worst case delays at design time and constrain the clock cycle accordingly. Desynchronization is a new paradigm to automate the design of asynchronous circuits from synchronous specifications, thus, permitting widespread adoption of asynchronicity without requiring special design skills or tools. In this paper, different protocols for desynchronization are first studied, and their correctness is formally proven using techniques originally developed for distributed deployment of synchronous language specifications. A taxonomy of existing protocols for asynchronous latch controllers, covering, in particular, the four-phase handshake protocols devised in the literature for micropipelines, is also provided. A new controller that exhibits provably maximal concurrency is then proposed, and the performance of desynchronized circuits is analyzed with respect to the original synchronous optimized implementation. Finally, this paper proves the feasibility and effectiveness of the proposed approach by showing its application to a set of real designs, including a complete implementation of the DLX microprocessor architecture", "title": "" }, { "docid": "1c55a303c577495de6efd8099f7f1adc", "text": "Image segmentation is the most important part of image processing, separating an image into multiple meaningful parts. In this area of the image segmentation, new technologies emerge day after day. In this paper, an in-depth analysis is carried out on some frequently adopted image segmentation techniques such as thresholding based techniques, edge detection based or boundary based techniques, region-based techniques, clustering etc. and also discuss their advantages and disadvantages.", "title": "" }, { "docid": "550ac6565bf42f42ec35d63f8c3b1e01", "text": "A fully planar ultrawideband phased array with wide scan and low cross-polarization performance is introduced. The array is based on Munk's implementation of the current sheet concept, but it employs a novel feeding scheme for the tightly coupled horizontal dipoles that enables simple PCB fabrication. This feeding eliminates the need for “cable organizers” and external baluns, and when combined with dual-offset dual-polarized lattice arrangements the array can be implemented in a modular, tile-based fashion. Simple physical explanations and circuit models are derived to explain the array's operation and guide the design process. The theory and insights are subsequently used to design an exemplary dual-polarized infinite array with 5:1 bandwidth and VSWR <; 2.1 at broadside, and cross-polarization ≈ -15 dB out to θ = 45° in the D- plane.", "title": "" }, { "docid": "1ef2bb601d91d77287d3517c73b453fe", "text": "Proteins from silver-stained gels can be digested enzymatically and the resulting peptide analyzed and sequenced by mass spectrometry. 
Standard proteins yield the same peptide maps when extracted from Coomassie- and silver-stained gels, as judged by electrospray and MALDI mass spectrometry. The low nanogram range can be reached by the protocols described here, and the method is robust. A silver-stained one-dimensional gel of a fraction from yeast proteins was analyzed by nano-electrospray tandem mass spectrometry. In the sequencing, more than 1000 amino acids were covered, resulting in no evidence of chemical modifications due to the silver staining procedure. Silver staining allows a substantial shortening of sample preparation time and may, therefore, be preferable over Coomassie staining. This work removes a major obstacle to the low-level sequence analysis of proteins separated on polyacrylamide gels.", "title": "" }, { "docid": "920b20fe7f4d7a63052b1058d67a50fc", "text": "Aroma is one of the most important quality traits of basmati rice (Oryza sativa L) that leads to high consumer acceptance. Earlier three significant QTLs for aroma, namely aro3-1, aro4-1 and aro8-1, have been mapped on rice chromosomes 3, 4 and 8, respectively, using a population of recombinant inbred lines (RILs) derived from a cross between Pusa 1121 (a basmati quality variety) and Pusa 1342 (a non-aromatic variety). For fine mapping of these QTLs, 184 F6 RILs were grown in the Kharif season of 2005 at New Delhi and Karnal, India. A total of 115 new SSR markers covering the three QTL intervals were designed and screened for parental polymorphism. Of these, 26 markers were polymorphic between parents, eight for the interval aro3-1, eight for the interval aro4-1 and ten for the interval aro8-1, thus enriching the density of SSR markers in these QTL intervals. Revised genetic maps were constructed by adding 23 of these new markers to the earlier map, by giving physical order of the markers in the pseudomolecules a preference. In the revised maps, the interval for QTL aro4-1 could not be improved further but QTL aro3-1 was narrowed down to an interval of 390 kbp from the earlier reported interval of 8.6 Mbp and similarly the QTL aro8-1 was narrowed down to a physical interval of 430 kbp. The numbers of candidate genes in the aro3-1 and aro8-1 intervals have now been reduced to 51 and 66, respectively. The badh2 gene on chromosome 8 was not associated with the aroma QTL on this chromosome.", "title": "" }, { "docid": "01eaab0d3c2ef1d4aec1adc08efd1b67", "text": "A printed circuit board, or (PCB) is used to mechanically support and electrically connect electronic components using conductive pathways, track or signal traces etched from copper sheets laminated onto anon conductive substrate. The automatic inspection of PCBs serves a purpose which is traditional in computer technology. The purpose is to relieve human inspectors of the tedious and inefficient task of looking for those defects in PCBs which could lead to electric failure. In this project Machine Vision PCB Inspection System is applied at the first step of manufacturing, i.e., the making of bare PCB. We first compare a PCB standard image with a PCB image, using a simple subtraction algorithm that can highlight the main problem-regions. We have also seen the effect of noise in a PCB image that at what level this method is suitable to detect the faulty image. Our focus is to detect defects on printed circuit boards & to see the effect of noise. Typical defects that can be detected are over etchings (opens), under-etchings (shorts), holes etc. 
Index terms – Machine vision, PCB defects, Image Subtraction Algorithm, PCB Inspection", "title": "" }, { "docid": "7405b1ea867fafd576e889ce17e5f13e", "text": "The objective of this study was to compare patients with obsessive-compulsive disorder (OCD) associated with pathologic skin picking (PSP) and/or trichotillomania, and patients with OCD without such comorbidities, for demographic and clinical characteristics. We assessed 901 individuals with a primary diagnosis of OCD, using the Structured Clinical Interview for Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV) Axis I disorders. Diagnoses of PSP and trichotillomania were made in 16.3% and 4.9% of the sample, respectively. After the logistic regression analysis, the following factors retained an association with OCD-PSP/trichotillomania: younger age (odds ratio [OR] = 0.979; P = .047), younger age at the onset of compulsive symptoms (OR = 0.941; P = .007), female sex (OR = 2.538; P < .001), a higher level of education (OR = 1.055; P = .025), and comorbid body dysmorphic disorder (OR = 2.363; P = .004). These findings support the idea that OCD accompanied by PSP/trichotillomania characterizes a specific subgroup.", "title": "" }, { "docid": "f99d0e24dece8b2de287b7d86c483f83", "text": "Recently, the Task Force on Process Mining released the Process Mining Manifesto. The manifesto is supported by 53 organizations, and 77 process mining experts contributed to it. The active contributions from end-users, tool vendors, consultants, analysts, and researchers illustrate the growing relevance of process mining as a bridge between data mining and business process modeling. This paper summarizes the manifesto and explains why process mining is a highly relevant, but also very challenging, research area. This way we hope to stimulate the broader ACM SIGKDD community to look at process-centric knowledge discovery.", "title": "" }, { "docid": "29b4a9f3b3da3172e319d11b8f938a7b", "text": "Since social media have become very popular during the past few years, researchers have been focusing on automatically processing and extracting sentiment information from large volumes of social media data. This paper contributes to the topic by focusing on sentiment analysis for Chinese social media. In this paper, we propose to rely on Part of Speech (POS) tags in order to extract unigram and bigram features. Bigrams are generated according to the grammatical relation between consecutive words. With those features, we have shown that focusing on a specific topic allows us to reach higher estimation accuracy.", "title": "" }, { "docid": "2253d4fcef5289578595d6c72db3a905", "text": "Estimation of the efficiency of firms in a non-competitive market characterized by heterogeneous inputs and outputs along with their varying prices is questionable when factor-based technology sets are used in data envelopment analysis (DEA). In this scenario, a value-based technology becomes an appropriate reference technology against which efficiency can be assessed. In this contribution, the value-based models of Tone (2002) are extended in a directional DEA setup to develop new directional cost- and revenue-based measures of efficiency, which are then decomposed into their respective directional value-based technical and allocative efficiencies. These new directional value-based measures are more general, and include the existing value-based measures as special cases. These measures satisfy several desirable properties of an ideal efficiency measure. 
These new measures are advantageous over the existing ones in terms of 1) their ability to satisfy the most important property of translation invariance; 2) choices over the use of suitable direction vectors in handling negative data; and 3) flexibility in providing the decision makers with the option of specifying preferable direction vectors to incorporate their preferences. Finally, under the condition of no prior unit price information, a directional value-based measure of profit inefficiency is developed for firms whose underlying objective is profit maximization. For an illustrative empirical application, our new measures are applied to a real-life data set of 50 US banks to draw inferences about the production correspondence of the banking industry.", "title": "" }, { "docid": "95b112886d7278a4596c49d5a5360fb5", "text": "The InfoVis 2004 contest led to the development of several bibliography visualization systems. Even though each of these systems offers some unique views of the bibliography data, there is no single best system offering all the desired views. We have thus studied how to consolidate the desirable functionalities of these systems into a cohesive design. We have also designed a few novel visualization methods. This paper presents our findings and creation: BiblioViz, a bibliography visualization system that gives the maximum number of views of the data using a minimum number of visualization constructs in a unified fashion.", "title": "" }, { "docid": "14520419a4b0e27df94edc4cf23cde65", "text": "In this paper we propose and examine non-parametric statistical tests to define similarity and homogeneity measures for textures. The statistical tests are applied to the coefficients of images filtered by a multi-scale Gabor filter bank. We demonstrate that these similarity measures are useful both for texture-based image retrieval and for unsupervised texture segmentation, and hence offer a unified approach to these closely related tasks. We present results on Brodatz-like micro-textures and a collection of real-world images.", "title": "" }, { "docid": "2bf6933a6352a35a449d7a8425e30822", "text": "The ability to remotely measure heart rate from videos without requiring any special setup is beneficial to many applications. In recent years, a number of papers on heart rate (HR) measurement from videos have been proposed. However, these methods typically require the human subject to be stationary and the illumination to be controlled. For methods that do take into account motion and illumination changes, strong assumptions are still made about the environment (e.g., that the background can be used for illumination rectification). In this paper, we propose an HR measurement method that is robust to motion and illumination changes, and does not require use of the environment's background. We present conditions under which cardiac activity extraction from local regions of the face can be treated as a linear Blind Source Separation problem and propose a simple but robust algorithm for selecting good local regions. The independent HR estimates from multiple local regions are then combined in a majority voting scheme that robustly recovers the HR. We validate our algorithm on a large database of challenging videos.", "title": "" }, { "docid": "5ae4b1d4ef00afbde49edfaa2728934b", "text": "A wideband, low loss inline transition from microstrip line to rectangular waveguide is presented. 
This transition efficiently couples energy from a microstrip line to a ridge and subsequently to a TE10 waveguide. This unique structure requires no mechanical pressure for electrical contact between the microstrip probe and the ridge because the main planar circuitry and ridge sections are placed on a single housing. The measured insertion loss for back-to-back transition is 0.5 – 0.7 dB (0.25 – 0.35 dB/transition) in the band 50 – 72 GHz.", "title": "" }, { "docid": "028a64817ebf6975a55de11213b16eb6", "text": "This paper is the result of research conducted in response to a client’s question of the importance of color in their new school facility. The resulting information is a compilation of studies conducted by color psychologists, medical and design professionals. Introduction From psychological reactions to learned cultural interpretations, human reaction and relationship to color is riddle with complexities. The variety of nuances, however, does not dilute the amazing power of color on humans and its ability to enhance our experience of the learning environment. To formulate a better understanding of color’s impact, one must first form a basic understanding of Carl Jung’s theory of the collective unconscious. According to Jung, all of us are born with a basic psyche that can later be differentiated based upon personal experience. This basic psyche reflects the evolutionary traits that have helped humans to survive throughout history. For example, an infant has a pre-disposed affinity for two dark spots next to each other, an image that equals their visual interpretation of a human face. This affinity for the shapes is not learned, but preprogrammed into the collective unconscious of all human children. Just as we are programmed to identify with the human face, our body has a basic interpretation and reaction to certain colors. As proven in recent medical studies, however, the psychological reaction to color does not preclude the basic biological reaction that stems from human evolution. The human ability to see a wide range of color and our reaction to color is clearly articulated in Frank Mahnke’s color pyramid. The pyramid lists six levels of our color experience in an increasingly personalized interpretation. The clear hierarchy of the graphic, however, belies the immediate impact that mood, age and life experiences play in the moment to moment personal interpretation of color. Balancing the research of color interpretation with these personal interpretations becomes the designer’s task as environmental color choices are made. Understanding our Biological Processing When discussing color experiences in terms of the physical reactions of blood pressure, eyestrain and brain development, the power and importance of a well-designed environment crosses cultural and personal barriers. It does not cancel the importance of these experiences, but it does provide an objective edge to the argument for careful color application in an often subjective decisionmaking realm. Color elicits a total response from human beings because the energy produced by the light that carries color effects our body functions and influences our mind and emotion. In 1976, Rikard Kuller demonstrated how color and visual patterning affects not only the cortex but also the entire central nervous system1. Color has been shown to alter the level of alpha brain wave activity, which is used in the medical field to measure human alertness. 
In addition, it has been found that when color is transmitted through the human eye, the brain releases the hormone, hypothalamus, which affects our moods, mental clarity and energy level. Experiencing color, however, is not limited to our visual comprehension of hues. In a study conducted by Harry Wohlfarth and Catharine Sam of the University of Alberta, they learned that a change in the color environment of 14 severely handicapped and behaviorally disturbed 8-11 year olds resulted in a drop in blood pressure and a reduction in aggressive behavior in both blind and sighted children. This passage of the benefits of varying color's energy is plausible when one considers that color is, after all, light waves that bounce around and are absorbed by all surfaces. Further study by Antonio F. Torrice resulted in his thesis that specific colors impact certain physical systems in the human body.2 In Torrice's study, he proposes that the following systems are influenced by these particular hues: Motor Skill Activity – Red, Circulatory System – Orange, Cardiopulmonary – Yellow, Speech Skill Activity – Green, Eyes, Ears and Nose – Blue, Nonverbal Activity – Violet. The analysis of our biological reaction to and processing of color is quickly linked to the psychological reactions that often simultaneously manifest themselves. The psychological reactions to color are particularly apparent in the qualitative descriptions (anxiety, aggression, sadness, quiet) offered in color analysis. Introducing Color in Schools: When discussing color with school districts it is important to approach color choices as functional color rather than from a standpoint of aesthetics. Functional color focuses on using color to achieve an end result such as increased attention span and lower levels of eye fatigue. These color schemes are not measured by criteria of beauty but rather by tangible evidence.3 The following are the results of a variety of tests conducted on the impact of color in the environment. Viewed together, the results of these studies demonstrate a basic guideline for designers when evaluating color applications for schools. The tests do not offer a definitive color scheme for each school environment, but provide the functional guidelines and reasons why color is an important element in school interiors. Relieves eye fatigue: Eye strain is a medical ailment diagnosed by increased blinking, dilation of the pupil when light intensity is static, reduction in the ability to focus on clear objects and an inability to distinguish small differences in brightness. End wall treatments in a classroom can help to reduce instances of eyestrain for students by helping the eye to relax as students look up from a task. Studies suggest that the end wall colors should be a medium hue with the remaining walls a neutral tint such as Oyster white, Sandstone or Beige. The end wall treatment also helps to relieve the visual monotony of a classroom and stimulate a student's brain. 
Increases productivity and accuracy: As demonstrated by an environmental color coordination study conducted by the US Navy, in the three years following the introduction of color into the environment a drop in accident frequency from 6.4 to 4.6, or 28%, was noted.4 This corroborates an independent study demonstrating that white and off-white business environments resulted in a 25% or more drop in human efficiency.5 Color's demonstrated effectiveness in improving students' attention span, as well as both students' and teachers' sense of time, is a further reason why color can increase productivity in a classroom. The mental stimulation passively received from the color in a room helps the student and teacher stay focused on the task at hand. This idea is further supported by Harry Wohlfarth's 1983 study of four elementary schools, which notes that schools that received improved lighting and color showed the largest improvements in academic performance and IQ scores.6 The demonstrated negatives of monotone environments also support the positives demonstrated by colorful environments. For example, apes left alone surrounded by blank walls were found to withdraw into themselves in a manner similar to schizophrenics. Humans were also found to turn inward in monotone environments, which may induce feelings of anxiety, fear and distress resulting from understimulation. This lack of stimulation further creates a sense of restlessness, excessive emotional response, difficulty in concentration and irritation. Aids in wayfinding: With the growing focus on smaller learning communities, many schools are organizing their facilities around a school-within-a-school plan. Using color to further articulate these smaller learning communities aids in developing place identity. The color can create a system of order and help to distinguish important and unimportant elements in the environment. The use of color and graphics to aid wayfinding is particularly important for primary school children, who, starting at the age of three, have begun to recognize and match colors and find that designs that emphasize a child as a unique and separate person can be stimulating. Supports developmental processes: Being sensitive to each age group's different responses to color is key in creating an environment stimulating to their educational experience. Children's rejection or acceptance of certain colors is a mirror of their development into adulthood.7 Younger children find high contrast and bright colors stimulating, with a growing penchant for colors that create patterns. Once students transition into adolescence, however, the cooler colors and more subdued hues provide enough stimulation without proving distracting or stress-inducing. Guidelines for Academic Environments: Frank H. Mahnke, in his book Color, Environment and Human Response, offers designers guidelines specifically for integrating color in the educational environment. His guidelines stem from his own research in the fields of color and environmental psychology. • Preschool and elementary schools prefer a warm, bright color scheme that complements their natural extroverted nature. • Cool colors are recommended for upper grade and secondary classrooms for their ability to focus concentration. • Hallways can have a wider color range than the classroom and be used to give the school a distinctive personality. 
• Libraries utilize a pale or light green creating an effect that enhances quietness and concentration. Additional color application guidelines gleaned from the many sources reviewed are: • Maximum ratio of brightness difference of 3 to 1 between ceiling and furniture finish. (White celining at 90% reflectance, desk finish at 30% reflectance) • Brightness ratio in general field of view is within 5 to 1 promotion smooth unencumbered vision that enables average school tasks to be performed comfortably • End wall treatments in mediu", "title": "" }, { "docid": "b3ffb805b3dcffc4e5c9cec47f90e566", "text": "Real-time ride-sharing, which enables on-the-fly matching between riders and drivers (even en-route), is an important problem due to its environmental and societal benefits. With the emergence of many ride-sharing platforms (e.g., Uber and Lyft), the design of a scalable framework to match riders and drivers based on their various constraints while maximizing the overall profit of the platform becomes a distinguishing business strategy.\n A key challenge of such framework is to satisfy both types of the users in the system, e.g., reducing both riders' and drivers' travel distances. However, the majority of the existing approaches focus only on minimizing the total travel distance of drivers which is not always equivalent to shorter trips for riders. Hence, we propose a fair pricing model that simultaneously satisfies both the riders' and drivers' constraints and desires (formulated as their profiles). In particular, we introduce a distributed auction-based framework where each driver's mobile app automatically bids on every nearby request taking into account many factors such as both the driver's and the riders' profiles, their itineraries, the pricing model, and the current number of riders in the vehicle. Subsequently, the server determines the highest bidder and assigns the rider to that driver. We show that this framework is scalable and efficient, processing hundreds of tasks per second in the presence of thousands of drivers. We compare our framework with the state-of-the-art approaches in both industry and academia through experiments on New York City's taxi dataset. Our results show that our framework can simultaneously match more riders to drivers (i.e., higher service rate) by engaging the drivers more effectively. Moreover, our frame-work schedules shorter trips for riders (i.e., better service quality). Finally, as a consequence of higher service rate and shorter trips, our framework increases the overall profit of the ride-sharing platforms.", "title": "" }, { "docid": "27f0723e95930400d255c8cd40ea53b0", "text": "We investigated the use of context-dependent deep neural network hidden Markov models, or CD-DNN-HMMs, to improve speech recognition performance for a better assessment of children English language learners (ELLs). The ELL data used in the present study was obtained from a large language assessment project administered in schools in a U.S. state. Our DNN-based speech recognition system, built using rectified linear units (ReLU), greatly outperformed recognition accuracy of Gaussian mixture models (GMM)-HMMs, even when the latter models were trained with eight times more data. Large improvement was observed for cases of noisy and/or unclear responses, which are common in ELL children speech. We further explored the use of content and manner-of-speaking features, derived from the speech recognizer output, for estimating spoken English proficiency levels. 
Experimental results show that the DNN-based recognition approach achieved 31% relative WER reduction when compared to GMM-HMMs. This further improved the quality of the extracted features and final spoken English proficiency scores, and increased overall automatic assessment performance to the human performance level, for various open-ended spoken language tasks.", "title": "" } ]
scidocsrr
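The spoken-English assessment passage above reports a 31% relative word error rate (WER) reduction for the DNN-based recognizer over the GMM-HMM baseline. As a small illustrative helper (not code from that assessment system), WER can be computed as the word-level edit distance between a reference transcript and a recognizer hypothesis, normalized by the reference length:

```python
# Minimal WER computation via Levenshtein (edit) distance over words.
# Illustrative only; function and variable names are assumptions.
def word_error_rate(reference, hypothesis):
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits to turn the first i reference words into the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution or match
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

# A 31% relative reduction means wer_dnn <= 0.69 * wer_gmm on the same test set.
```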
b9b0bb1449077e28f7eef4a1b602ec0f
An iterative pruning algorithm for feedforward neural networks
[ { "docid": "e5c625ceaf78c66c2bfb9562970c09ec", "text": "A continuing question in neural net research is the size of network needed to solve a particular problem. If training is started with too small a network for the problem no learning can occur. The researcher must then go through a slow process of deciding that no learning is taking place, increasing the size of the network and training again. If a network that is larger than required is used, then processing is slowed, particularly on a conventional von Neumann computer. An approach to this problem is discussed that is based on learning with a net which is larger than the minimum size network required to solve the problem and then pruning the solution network. The result is a small, efficient network that performs as well or better than the original which does not give a complete answer to the question, since the size of the initial network is still largely based on guesswork but it gives a very useful partial answer and sheds some light on the workings of a neural network in the process.<<ETX>>", "title": "" } ]
[ { "docid": "f2492c40f98e3cccc3ac3ab7accf4af7", "text": "Accurate detection of single-trial event-related potentials (ERPs) in the electroencephalogram (EEG) is a difficult problem that requires efficient signal processing and machine learning techniques. Supervised spatial filtering methods that enhance the discriminative information in EEG data are commonly used to improve single-trial ERP detection. We propose a convolutional neural network (CNN) with a layer dedicated to spatial filtering for the detection of ERPs and with training based on the maximization of the area under the receiver operating characteristic curve (AUC). The CNN is compared with three common classifiers: 1) Bayesian linear discriminant analysis; 2) multilayer perceptron (MLP); and 3) support vector machines. Prior to classification, the data were spatially filtered with xDAWN (for the maximization of the signal-to-signal-plus-noise ratio), common spatial pattern, or not spatially filtered. The 12 analytical techniques were tested on EEG data recorded in three rapid serial visual presentation experiments that required the observer to discriminate rare target stimuli from frequent nontarget stimuli. Classification performance discriminating targets from nontargets depended on both the spatial filtering method and the classifier. In addition, the nonlinear classifier MLP outperformed the linear methods. Finally, training based AUC maximization provided better performance than training based on the minimization of the mean square error. The results support the conclusion that the choice of the systems architecture is critical and both spatial filtering and classification must be considered together.", "title": "" }, { "docid": "8518dc45e3b0accfc551111489842359", "text": "PURPOSE\nRobot-assisted surgery has been rapidly adopted in the U.S. for prostate cancer. Its adoption has been driven by market forces and patient preference, and debate continues regarding whether it offers improved outcomes to justify the higher cost relative to open surgery. We examined the comparative effectiveness of robot-assisted vs open radical prostatectomy in cancer control and survival in a nationally representative population.\n\n\nMATERIALS AND METHODS\nThis population based observational cohort study of patients with prostate cancer undergoing robot-assisted radical prostatectomy and open radical prostatectomy during 2003 to 2012 used data captured in the SEER (Surveillance, Epidemiology, and End Results)-Medicare linked database. Propensity score matching and time to event analysis were used to compare all cause mortality, prostate cancer specific mortality and use of additional treatment after surgery.\n\n\nRESULTS\nA total of 6,430 robot-assisted radical prostatectomies and 9,161 open radical prostatectomies performed during 2003 to 2012 were identified. The use of robot-assisted radical prostatectomy increased from 13.6% in 2003 to 2004 to 72.6% in 2011 to 2012. After a median followup of 6.5 years (IQR 5.2-7.9) robot-assisted radical prostatectomy was associated with an equivalent risk of all cause mortality (HR 0.85, 0.72-1.01) and similar cancer specific mortality (HR 0.85, 0.50-1.43) vs open radical prostatectomy. 
Robot-assisted radical prostatectomy was also associated with less use of additional treatment (HR 0.78, 0.70-0.86).\n\n\nCONCLUSIONS\nRobot-assisted radical prostatectomy has comparable intermediate cancer control as evidenced by less use of additional postoperative cancer therapies and equivalent cancer specific and overall survival. Longer term followup is needed to assess for differences in prostate cancer specific survival, which was similar during intermediate followup. Our findings have significant quality and cost implications, and provide reassurance regarding the adoption of more expensive technology in the absence of randomized controlled trials.", "title": "" }, { "docid": "ba6102d758444d135ec3f73ae0cfe38f", "text": "This paper describes Purify, a software testing and quality assurance tool that detects memory leaks and access errors. Purify inserts additional checking instructions directly into the object code produced by existing compilers. These instructions check every memory read and write performed by the program-under-test and detect several types of access errors, such as reading uninitialized memory or writing to freed memory. Purify inserts checking logic into all of the code in a program, including third-party and vendor object-code libraries, and verifies system call interfaces. In addition, Purify tracks memory usage and identifies individual memory leaks using a novel adaptation of garbage collection techniques. Purify produces standard executable files compatible with existing debuggers. Purify's nearly comprehensive memory access checking slows the target program down typically by less than a factor of three and has resulted in significantly more reliable software for several development groups.", "title": "" }, { "docid": "cdb937def5a92e3843a761f57278783e", "text": "We design a novel, communication-efficient, failure-robust protocol for secure aggregation of high-dimensional data. Our protocol allows a server to compute the sum of large, user-held data vectors from mobile devices in a secure manner (i.e., without learning each user's individual contribution), and can be used, for example, in a federated learning setting, to aggregate user-provided model updates for a deep neural network. We prove the security of our protocol in the honest-but-curious and active adversary settings, and show that security is maintained even if an arbitrarily chosen subset of users drop out at any time. We evaluate the efficiency of our protocol and show, by complexity analysis and a concrete implementation, that its runtime and communication overhead remain low even on large data sets and client pools. For 16-bit input values, our protocol offers 1.73x communication expansion for 2^10 users and 2^20-dimensional vectors, and 1.98x expansion for 2^14 users and 2^24-dimensional vectors over sending data in the clear.", "title": "" }, { "docid": "45054f9c619fc6d69e02754fe7b2655c", "text": "The combination of the spoke-type interior permanent magnet synchronous motor with NdFeB magnets is an effective solution when high torque density is required. On the other hand, in recent years, the instability and high price of rare-earth materials have made magnet minimization and the optimization of the motor design critical and mandatory. 
This paper deals with the design criteria of a spoke type, fractional slot concentrated winding, interior permanent magnet motor, especially as regards the minimization of the magnet volume, maximization of the airgap flux density and optimization of the split ratio. An analytical procedure is presented and validated by means of finite element analysis. At last, the advantages and the differences among the solutions are highlighted.", "title": "" }, { "docid": "f985b4db1646afdd014b2668267e947f", "text": "The encode-decoder framework has shown recent success in image captioning. Visual attention, which is good at detailedness, and semantic attention, which is good at comprehensiveness, have been separately proposed to ground the caption on the image. In this paper, we propose the Stepwise Image-Topic Merging Network (simNet) that makes use of the two kinds of attention at the same time. At each time step when generating the caption, the decoder adaptively merges the attentive information in the extracted topics and the image according to the generated context, so that the visual information and the semantic information can be effectively combined. The proposed approach is evaluated on two benchmark datasets and reaches the state-of-the-art performances.1", "title": "" }, { "docid": "537d6fdfb26e552fb3254addfbb6ac49", "text": "We propose a unified framework for building unsupervised representations of entities and their compositions, by viewing each entity as a histogram (or distribution) over its contexts. This enables us to take advantage of optimal transport and construct representations that effectively harness the geometry of the underlying space containing the contexts. Our method captures uncertainty via modelling the entities as distributions and simultaneously provides interpretability with the optimal transport map, hence giving a novel perspective for building rich and powerful feature representations. As a guiding example, we formulate unsupervised representations for text, and demonstrate it on tasks such as sentence similarity and word entailment detection. Empirical results show strong advantages gained through the proposed framework. This approach can potentially be used for any unsupervised or supervised problem (on text or other modalities) with a co-occurrence structure, such as any sequence data. The key tools at the core of this framework are Wasserstein distances and Wasserstein barycenters, hence raising the question from our title.", "title": "" }, { "docid": "ac3f7a9557988101fb9e2eea0c1aa652", "text": "Against the background of increasing awareness and appreciation of issues such as global warming and the impact of mankind's activities such as agriculture on the global environment, this paper updates previous assessments of some key environmental impacts that crop biotechnology has had on global agriculture. It focuses on the environmental impacts associated with changes in pesticide use and greenhouse gas emissions arising from the use of GM crops. The adoption of the technology has reduced pesticide spraying by 503 million kg (-8.8%) and, as a result, decreased the environmental impact associated with herbicide and insecticide use on these crops (as measured by the indicator the Environmental Impact Quotient [EIQ]) by 18.7%. 
The technology has also facilitated a significant reduction in the release of greenhouse gas emissions from this cropping area, which, in 2012, was equivalent to removing 11.88 million cars from the roads.", "title": "" }, { "docid": "0222a78f29796f0747a11b027b2fe0d8", "text": "Since last decade, face recognition has replaced almost all biometric authentication techniques available. Many algorithms are in existence today based on various features. In this paper, we have compared the performance of various classifiers like correlation, Artificial Neural Network (ANN) and Support Vector Machine (SVM) for Face Recognition. We have proposed face recognition based on discriminative features. Holistic featuresbased methods Fisher Discriminant Analysis (FDA) usused to extract outdiscriminative features from the input face image respectively. These features are used to train classifiers like Artificial Neural Network (ANN) and Support Vector Machine (SVM). Results in the last section describe the accuracy of proposed scheme. Keywords-Face Recognition, Fisher Discriminant Analysis, Artificial Neural Network, Support Vector Machine.", "title": "" }, { "docid": "2a1dbae338e2ea6f40f14dd62d86ebba", "text": "Top-level performances in endurance sports require several years of hard training loads. A major objective of this endurance training is to reach the most elevated metabolic adaptations the athlete will be able to support. As a consequence, overtraining is a recurrent problem that highly-trained athletes may experience during their career. Many studies have revealed that overtraining could be highlighted by various biochemical markers but a principal discrepancy in the diagnosis of overtraining stems from the fact that none of these markers may be considered as universal. In endurance sports, the metabolic aspects of training fatigue appear to be the most relevant parameters that may characterise overtraining when recovery is not sufficient, or when dietary habits do not allow an optimal replenishment of substrate stores. From the skeletal muscle functions to the overall energetic substrate availability during exercise, six metabolic schemes have been studied in relation to overtraining, each one related to a central parameter, i.e. carbohydrates, branched-chain amino acids, glutamine, polyunsaturated fatty acids, leptin, and proteins. We summarise the current knowledge on these metabolic hypotheses regarding the occurrence of overtraining in endurance sports.", "title": "" }, { "docid": "cbd8e376ae26ad4f8b253ca4ad3aa94a", "text": "Social media allow for an unprecedented amount of interactions and information exchange between people online. A fundamental aspect of human social behavior, however, is the tendency of people to associate themselves with like-minded individuals, forming homogeneous social circles both online and offline. In this work, we apply a new model that allows us to distinguish between social ties of varying strength, and to observe evidence of homophily with regards to politics, music, health, residential sector & year in college, within the online and offline social network of 74 college students. We present a multiplex network approach to social tie strength, here applied to mobile communication data calls, text messages, and co-location, allowing us to dimensionally identify relationships by considering the number of channels utilized between students. We find that strong social ties are characterized by maximal use of communication channels, while weak ties by minimal use. 
We are able to identify 75% of close friendships, 90% of weaker ties, and 90% of Facebook friendships as compared to reported ground truth. We then show that stronger ties exhibit greater profile similarity than weaker ones. Apart from high homogeneity in social circles with respect to political and health aspects, we observe strong homophily driven by music, residential sector and year in college. Despite Facebook friendship being highly dependent on residence and year, exposure to less homogeneous content can be found in the online rather than the offline social circles of students, most notably in political and music aspects.", "title": "" }, { "docid": "9a8b397bb95b9123a8d41342a850a456", "text": "We present a novel task: the chronological classification of Hafez’s poems (ghazals). We compiled a bilingual corpus in digital form, with consistent idiosyncratic properties. We have used Hooman’s labeled ghazals in order to train automatic classifiers to classify the remaining ghazals. Our classification framework uses a Support Vector Machine (SVM) classifier with similarity features based on Latent Dirichlet Allocation (LDA). In our analysis of the results we use the LDA topics’ main terms that are passed on to a Principal Component Analysis (PCA) module.", "title": "" }, { "docid": "f48ee93659a25bee9a49e8be6c789987", "text": "what design is from a theoretical point of view, which is a role of the descriptive model. However, descriptive models are not necessarily helpful in directly deriving either the architecture of intelligent CAD or the knowledge representation for intelligent CAD. For this purpose, we need a computable design process model that should coincide, at least to some extent, with a cognitive model that explains actual design activities. One of the major problems in developing so-called intelligent computer-aided design (CAD) systems (ten Hagen and Tomiyama 1987) is the representation of design knowledge, which is a two-part process: the representation of design objects and the representation of design processes. We believe that intelligent CAD systems will be fully realized only when these two types of representation are integrated. Progress has been made in the representation of design objects, as can be seen, for example, in geometric modeling; however, almost no significant results have been seen in the representation of design processes, which implies that we need a design theory to formalize them. According to Finger and Dixon (1989), design process models can be categorized into a descriptive model that explains how design is done, a cognitive model that explains the designer’s behavior, a prescriptive model that shows how design must be done, and a computable model that expresses a method by which a computer can accomplish a task. A design theory for intelligent CAD is not useful when it is merely descriptive or cognitive; it must also be computable. We need a general model of design Articles", "title": "" }, { "docid": "465c1ecc79617d96c9509106badc8673", "text": "Bacterial replicative DNA polymerases such as Polymerase III (Pol III) share no sequence similarity with other polymerases. The crystal structure, determined at 2.3 A resolution, of a large fragment of Pol III (residues 1-917), reveals a unique chain fold with localized similarity in the catalytic domain to DNA polymerase beta and related nucleotidyltransferases. 
The structure of Pol III is strikingly different from those of members of the canonical DNA polymerase families, which include eukaryotic replicative polymerases, suggesting that the DNA replication machinery in bacteria arose independently. A structural element near the active site in Pol III that is not present in nucleotidyltransferases but which resembles an element at the active sites of some canonical DNA polymerases suggests that, at a more distant level, all DNA polymerases may share a common ancestor. The structure also suggests a model for interaction of Pol III with the sliding clamp and DNA.", "title": "" }, { "docid": "7cff63256e913e8590ce704ea5cc93c7", "text": "We report on a probabilistic weighting approach to indexing the scanned images of very short documents. This fully automatic process copes with short and very noisy texts (67% word accuracy) derived from the images by Optical Character Recognition (OCR). The probabilistic term weighting approach is based on a theoretical proof explaining how the retrieval effectiveness is affected by recognition errors. We have evaluated our probabilistic weighting approach on a sample of index cards from an alphabetic library catalogue where, on the average, a card contains only 23 terms. We have demonstrated over 30% improvement in retrieval effectiveness over a conventional weighted retrieval method where the recognition errors are not taken into account. We also show how we can take advantage of the ordering information of the alphabetic library catalogue.", "title": "" }, { "docid": "1241bc6b7d3522fe9e285ae843976524", "text": "In many new high performance designs, the leakage component of power consumption is comparable to the switching component. Reports indicate that 40% or even higher percentage of the total power consumption is due to the leakage of transistors. This percentage will increase with technology scaling unless effective techniques are introduced to bring leakage under control. This article focuses on circuit optimization and design automation techniques to accomplish this goal. The first part of the article provides an overview of basic physics and process scaling trends that have resulted in a significant increase in the leakage currents in CMOS circuits. This part also distinguishes between the standby and active components of the leakage current. The second part of the article describes a number of circuit optimization techniques for controlling the standby leakage current, including power gating and body bias control. The third part of the article presents techniques for active leakage control, including use of multiple-threshold cells, long channel devices, input vector design, transistor stacking to switching noise, and sizing with simultaneous threshold and supply voltage assignment.", "title": "" }, { "docid": "8b4261fe27581adf0bdfbdba2b31b730", "text": "In this thesis project, a vector control system for an induction motor is implemented on an evaluation board. By comparing the pros and cons of eight candidates of evaluation boards, the TMS320F28335 DSP Experimenter Kit is selected as the digital controller of the vector control system. Necessary peripheral and interface circuits are built for the signal measurement, the three-phase inverter control and the system protection. These circuits work appropriately except that the conditioning circuit for analog-to-digital conversion contains too much noise.
At the stage of the control algorithm design, the designed vector control system is simulated in Matlab/Simulink with both S-function and Simulink blocks. The simulation results meet the design specifications well. When the control system is verified by simulations, the DSP evaluation board is programmed and the system is tested. The test results show that the current regulator and the speed regulator are able to regulate the stator currents and the machine speed fast and precisely. With the initial values of the motor parameters there is a 12.5% overshoot in the current step response. By adjusting the stator resistance and the inductance this overshoot could be removed and only minor difference remains between the simulated and measured current step responses.", "title": "" }, { "docid": "d02af961d8780a06ae0162647603f8bb", "text": "We contribute an empirically derived noise model for the Kinect sensor. We systematically measure both lateral and axial noise distributions, as a function of both distance and angle of the Kinect to an observed surface. The derived noise model can be used to filter Kinect depth maps for a variety of applications. Our second contribution applies our derived noise model to the KinectFusion system to extend filtering, volumetric fusion, and pose estimation within the pipeline. Qualitative results show our method allows reconstruction of finer details and the ability to reconstruct smaller objects and thinner surfaces. Quantitative results also show our method improves pose estimation accuracy.", "title": "" }, { "docid": "6b9a25385c44fcef85a0e1725f7ff0c2", "text": "Placement of interior node points is a crucial step in the generation of quality meshes in sweeping algorithms. Two new algorithms were devised for node point placement and implemented in Sweep Tool, the first based on the use of linear transformations between bounding node loops and the second based on smoothing. Examples are given that demonstrate the effectiveness of these algorithms.", "title": "" }, { "docid": "f383dd5dd7210105406c2da80cf72f89", "text": "We present a new, \"greedy\", channel-router that is quick, simple, and highly effective. It always succeeds, usually using no more than one track more than required by channel density. (It may be forced in rare cases to make a few connections \"off the end\" of the channel, in order to succeed.) It assumes that all pins and wiring lie on a common grid, and that vertical wires are on one layer, horizontal on another. The greedy router wires up the channel in a left-to-right, column-by-column manner, wiring each column completely before starting the next. Within each column the router tries to maximize the utility of the wiring produced, using simple, \"greedy\" heuristics. It may place a net on more than one track for a few columns, and \"collapse\" the net to a single track later on, using a vertical jog. It may also use a jog to move a net to a track closer to its pin in some future column. The router may occasionally add a new track to the channel, to avoid \"getting stuck\".", "title": "" } ]
scidocsrr
6998f3e837e8ec5541fdd97c8697271f
What design science is not
[ { "docid": "8bc04818536d2a8deff01b0ea0419036", "text": "Research in IT must address the design tasks faced by practitioners. Real problems must be properly conceptualized and represented, appropriate techniques for their solution must be constructed, and solutions must be implemented and evaluated using appropriate criteria. If significant progress is to be made, IT research must also develop an understanding of how and why IT systems work or do not work. Such an understanding must tie together natural laws governing IT systems with natural laws governing the environments in which they operate. This paper presents a two dimensional framework for research in information technology. The first dimension is based on broad types of design and natural science research activities: build, evaluate, theorize, and justify. The second dimension is based on broad types of outputs produced by design research: representational constructs, models, methods, and instantiations. We argue that both design science and natural science activities are needed to insure that IT research is both relevant and effective.", "title": "" } ]
[ { "docid": "3e23069ba8a3ec3e4af942727c9273e9", "text": "This paper describes an automated tool called Dex (difference extractor) for analyzing syntactic and semantic changes in large C-language code bases. It is applied to patches obtained from a source code repository, each of which comprises the code changes made to accomplish a particular task. Dex produces summary statistics characterizing these changes for all of the patches that are analyzed. Dex applies a graph differencing algorithm to abstract semantic graphs (ASGs) representing each version. The differences are then analyzed to identify higher-level program changes. We describe the design of Dex, its potential applications, and the results of applying it to analyze bug fixes from the Apache and GCC projects. The results include detailed information about the nature and frequency of missing condition defects in these projects.", "title": "" }, { "docid": "6d589aaae8107bf6b71c0f06f7a49a28", "text": "1. INTRODUCTION The explosion of digital connectivity, the significant improvements in communication and information technologies and the enforced global competition are revolutionizing the way business is performed and the way organizations compete. A new, complex and rapidly changing economic order has emerged based on disruptive innovation, discontinuities, abrupt and seditious change. In this new landscape, knowledge constitutes the most important factor, while learning, which emerges through cooperation, together with the increased reliability and trust, is the most important process (Lundvall and Johnson, 1994). The competitive survival and ongoing sustenance of an organisation primarily depend on its ability to redefine and adopt continuously goals, purposes and its way of doing things (Malhotra, 2001). These trends suggest that private and public organizations have to reinvent themselves through 'continuous non-linear innovation' in order to sustain themselves and achieve strategic competitive advantage. The extant literature highlights the great potential of ICT tools for operational efficiency, cost reduction, quality of services, convenience, innovation and learning in private and public sectors. However, scholarly investigations have focused primarily on the effects and outcomes of ICTs (Information & Communication Technology) for the private sector. The public sector has been sidelined because it tends to lag behind in the process of technology adoption and business reinvention. Only recently has the public sector come to recognize the potential importance of ICT and e-business models as a means of improving the quality and responsiveness of the services they provide to their citizens, expanding the reach and accessibility of their services and public infrastructure and allowing citizens to experience a faster and more transparent form of access to government services. The initiatives of government agencies and departments to use ICT tools and applications, Internet and mobile devices to support good governance, strengthen existing relationships and build new partnerships within civil society, are known as eGovernment initiatives. As with e-commerce, eGovernment represents the introduction of a great wave of technological innovation as well as government reinvention. It represents a tremendous impetus to move forward in the 21 st century with higher quality, cost effective government services and a better relationship between citizens and government (Fang, 2002). 
Many government agencies in developed countries have taken progressive steps toward the web and ICT use, adding coherence to all local activities on the Internet, widening local access and skills, opening up interactive services for local debates, and increasing the participation of citizens on promotion and management …", "title": "" }, { "docid": "55d7db89621dc57befa330c6dea823bf", "text": "In this paper we propose CUDA-based implementations of two 3D point sets registration algorithms: Soft assign and EM-ICP. Both algorithms are known for being time demanding, even on modern multi-core CPUs. Our GPUbased implementations vastly outperform CPU ones. For instance, our CUDA EM-ICP aligns 5000 points in less than 7 seconds on a GeForce 8800GT, while the same implementation in OpenMP on an Intel Core 2 Quad would take 7 minutes.", "title": "" }, { "docid": "2272d3ac8770f456c1cf2e461eba2da9", "text": "EXECUTiVE SUMMARY This quarter, work continued on the design and construction of a robotic fingerspelling hand. The hand is being designed to aid in communication for individuals who are both deaf and blind. In the winter quarter, research was centered on determining an effective method of actuation for the robotic hand. This spring 2008 quarter, time was spent designing the mechanisms needed to mimic the size and motions of a human hand. Several methods were used to determine a proper size for the robotic hand, including using the ManneQuinPro human modeling system to approximate the size of an average male human hand and using the golden ratio to approximate the length of bone sections within the hand. After a proper average hand size was determined, a finger mechanism was designed in the SolidWorks design program that could be built and used in the robotic hand.", "title": "" }, { "docid": "2e57cf33adf048552c4a06f6a2f1c132", "text": "Efficient fastest path computation in the presence of varying speed conditions on a large scale road network is an essential problem in modern navigation systems. Factors affecting road speed, such as weather, time of day, and vehicle type, need to be considered in order to select fast routes that match current driving conditions. Most existing systems compute fastest paths based on road Euclidean distance and a small set of predefined road speeds. However, “History is often the best teacher”. Historical traffic data or driving patterns are often more useful than the simple Euclidean distance-based computation because people must have good reasons to choose these routes, e.g., they may want to avoid those that pass through high crime areas at night or that likely encounter accidents, road construction, or traffic jams. In this paper, we present an adaptive fastest path algorithm capable of efficiently accounting for important driving and speed patterns mined from a large set of traffic data. The algorithm is based on the following observations: (1) The hierarchy of roads can be used to partition the road network into areas, and different path pre-computation strategies can be used at the area level, (2) we can limit our route search strategy to edges and path segments that are actually frequently traveled in the data, and (3) drivers usually traverse the road network through the largest roads available given the distance of the trip, except if there are small roads with a significant speed advantage over the large ones. 
Through an extensive experimental evaluation on real road networks we show that our algorithm provides desirable (short and well-supported) routes, and that it is significantly faster than competing methods.", "title": "" }, { "docid": "c824c8bb8fd9b0b3f0f89df24e8f53d0", "text": "Ovarian cysts are an extremely common gynecological problem in adolescents. The majority of ovarian cysts are benign, with few cases being malignant. Ovarian serous cystadenomas are rare in children. A 14-year-old presented with abdominal pain and severe abdominal distention. She underwent laparotomy and after surgical removal, the mass was found to be ovarian serous cystadenoma on histology. In conclusion, germ cell tumors are the most important causes for the giant ovarian masses in children. Epithelial tumors should not be forgotten in the differential diagnosis. Keywords: Adolescent; Ovarian Cysts/diagnosis*; Cystadenoma, Serous/surgery; Ovarian Neoplasms/surgery; Ovarian cystadenoma", "title": "" }, { "docid": "6210a0a93b97a12c2062ac78953f3bd1", "text": "This article proposes a contextual-evolutionary theory of human mating strategies. Both men and women are hypothesized to have evolved distinct psychological mechanisms that underlie short-term and long-term strategies. Men and women confront different adaptive problems in short-term as opposed to long-term mating contexts. Consequently, different mate preferences become activated from their strategic repertoires. Nine key hypotheses and 22 predictions from Sexual Strategies Theory are outlined and tested empirically. Adaptive problems sensitive to context include sexual accessibility, fertility assessment, commitment seeking and avoidance, immediate and enduring resource procurement, paternity certainty, assessment of mate value, and parental investment. Discussion summarizes 6 additional sources of behavioral data, outlines adaptive problems common to both sexes, and suggests additional contexts likely to cause shifts in mating strategy.", "title": "" }, { "docid": "57accb84a15f3b3767ef9a4a524e29b8", "text": "Drosophila melanogaster activates a variety of immune responses against microbial infections. However, information on the Drosophila immune response to entomopathogenic nematode infections is currently limited. The nematode Heterorhabditis bacteriophora is an insect parasite that forms a mutualistic relationship with the gram-negative bacteria Photorhabdus luminescens. Following infection, the nematodes release the bacteria that quickly multiply within the insect and produce several toxins that eventually kill the host. Although we currently know that the insect immune system interacts with Photorhabdus, information on interaction with the nematode vector is scarce. Here we have used next generation RNA-sequencing to analyze the transcriptional profile of wild-type adult flies infected by axenic Heterorhabditis nematodes (lacking Photorhabdus bacteria), symbiotic Heterorhabditis nematodes (carrying Photorhabdus bacteria), and Photorhabdus bacteria alone. We have obtained approximately 54 million reads from the different infection treatments. Bioinformatic analysis shows that infection with Photorhabdus alters the transcription of a large number of Drosophila genes involved in translational repression as well as in response to stress. However, Heterorhabditis infection alters the transcription of several genes that participate in lipid homeostasis and metabolism, stress responses, DNA/protein synthesis and neuronal functions.
We have also identified genes in the fly with potential roles in nematode recognition, anti-nematode activity and nociception. These findings provide fundamental information on the molecular events that take place in Drosophila upon infection with the two pathogens, either separately or together. Such large-scale transcriptomic analyses set the stage for future functional studies aimed at identifying the exact role of key factors in the Drosophila immune response against nematode-bacteria complexes.", "title": "" }, { "docid": "4428705a7eab914db00a38a57fb9199e", "text": "Physiological testing of elite athletes requires the correct identification and assessment of sports-specific underlying factors. It is now recognised that performance in long-distance events is determined by maximal oxygen uptake (V(2 max)), energy cost of exercise and the maximal fractional utilisation of V(2 max) in any realised performance or as a corollary a set percentage of V(2 max) that could be endured as long as possible. This later ability is defined as endurance, and more precisely aerobic endurance, since V(2 max) sets the upper limit of aerobic pathway. It should be distinguished from endurance ability or endurance performance, which are synonymous with performance in long-distance events. The present review examines methods available in the literature to assess aerobic endurance. They are numerous and can be classified into two categories, namely direct and indirect methods. Direct methods bring together all indices that allow either a complete or a partial representation of the power-duration relationship, while indirect methods revolve around the determination of the so-called anaerobic threshold (AT). With regard to direct methods, performance in a series of tests provides a more complete and presumably more valid description of the power-duration relationship than performance in a single test, even if both approaches are well correlated with each other. However, the question remains open to determine which systems model should be employed among the several available in the literature, and how to use them in the prescription of training intensities. As for indirect methods, there is quantitative accumulation of data supporting the utilisation of the AT to assess aerobic endurance and to prescribe training intensities. However, it appears that: there is no unique intensity corresponding to the AT, since criteria available in the literature provide inconsistent results; and the non-invasive determination of the AT using ventilatory and heart rate data instead of blood lactate concentration ([La(-)](b)) is not valid. Added to the fact that the AT may not represent the optimal training intensity for elite athletes, it raises doubt on the usefulness of this theory without questioning, however, the usefulness of the whole [La(-)](b)-power curve to assess aerobic endurance and predict performance in long-distance events.", "title": "" }, { "docid": "3dcf6c5e59d4472c0b0e25c96b992f3e", "text": "This paper presents the design of Ultra Wideband (UWB) microstrip antenna consisting of a circular monopole patch antenna with 3 block stepped (wing). The antenna design is an improvement from previous research and it is simulated using CST Microwave Studio software. This antenna was designed on Rogers 5880 printed circuit board (PCB) with overall size of 26 × 40 × 0.787 mm3 and dielectric substrate, εr = 2.2. 
The performance of the designed antenna was analyzed in term of bandwidth, gain, return loss, radiation pattern, and verified through actual measurement of the fabricated antenna. 10 dB return loss bandwidth from 3.37 GHz to 10.44 GHz based on 50 ohm characteristic impedance for the transmission line model was obtained.", "title": "" }, { "docid": "9de29e26bb0122084d9d67f6c76f1b80", "text": "Neural speech synthesis models have recently demonstrated the ability to synthesize high quality speech for text-to-speech and compression applications. These new models often require powerful GPUs to achieve real-time operation, so being able to reduce their complexity would open the way for many new applications. We propose LPCNet, a WaveRNN variant that combines linear prediction with recurrent neural networks to significantly improve the efficiency of speech synthesis. We demonstrate that LPCNet can achieve significantly higher quality than WaveRNN for the same network size and that high quality LPCNet speech synthesis is achievable with a complexity under 3 GFLOPS. This makes it easier to deploy neural synthesis applications on lower-power devices, such as embedded systems and mobile phones.", "title": "" }, { "docid": "917ab22adee174259bef5171fe6f14fb", "text": "The manner in which quadrupeds change their locomotive patterns—walking, trotting, and galloping—with changing speed is poorly understood. In this paper, we provide evidence for interlimb coordination during gait transitions using a quadruped robot for which coordination between the legs can be self-organized through a simple “central pattern generator” (CPG) model. We demonstrate spontaneous gait transitions between energy-efficient patterns by changing only the parameter related to speed. Interlimb coordination was achieved with the use of local load sensing only without any preprogrammed patterns. Our model exploits physical communication through the body, suggesting that knowledge of physical communication is required to understand the leg coordination mechanism in legged animals and to establish design principles for legged robots that can reproduce flexible and efficient locomotion.", "title": "" }, { "docid": "c71aabd797d4cac56b8985de44f30b46", "text": "The objective of this clinical study was to assess the safety and feasibility of the collagen scaffold, NeuroRegen scaffold, one year after scar tissue resection and implantation. Scar tissue is a physical and chemical barrier that prevents neural regeneration. However, identification of scar tissue is still a major challenge. In this study, the nerve electrophysiology method was used to distinguish scar tissue from normal neural tissue, and then different lengths of scars ranging from 0.5–4.5 cm were surgically resected in five complete chronic spinal cord injury (SCI) patients. The NeuroRegen scaffold along with autologous bone marrow mononuclear cells (BMMCs), which have been proven to promote neural regeneration and SCI recovery in animal models, were transplanted into the gap in the spinal cord following scar tissue resection. No obvious adverse effects related to scar resection or NeuroRegen scaffold transplantation were observed immediately after surgery or at the 12-month follow-up. In addition, patients showed partially autonomic nervous function improvement, and the recovery of somatosensory evoked potentials (SSEP) from the lower limbs was also detected. 
The results indicate that scar resection and NeuroRegen scaffold transplantation could be a promising clinical approach to treating SCI.", "title": "" }, { "docid": "449c57f0679400c970acbf32d76d6c3c", "text": "The objective of the study was to empirically examine the impact of credit risk on profitability of commercial banks in Ethiopia. For the purpose secondary data collected from 8 sample commercial banks for a 12 year period (2003-2004) were collected from annual reports of respective banks and National Bank of Ethiopia. The data were analyzed using a descriptive statics and panel data regression model and the result showed that credit risk measures: non-performing loan, loan loss provisions and capital adequacy have a significant impact on the profitability of commercial banks in Ethiopia. The study suggested a need for enhancing credit risk management to maintain the prevailing profitability of commercial banks in Ethiopia.", "title": "" }, { "docid": "ce87a635c0c3aaa17e7b83d5fb52adce", "text": "We present a novel definition of the reinforcement learning state, actions and reward function that allows a deep Q-network (DQN) to learn to control an optimization hyperparameter. Using Q-learning with experience replay, we train two DQNs to accept a state representation of an objective function as input and output the expected discounted return of rewards, or q-values, connected to the actions of either adjusting the learning rate or leaving it unchanged. The two DQNs learn a policy similar to a line search, but differ in the number of allowed actions. The trained DQNs in combination with a gradient-based update routine form the basis of the Q-gradient descent algorithms. To demonstrate the viability of this framework, we show that the DQN’s q-values associated with optimal action converge and that the Q-gradient descent algorithms outperform gradient descent with an Armijo or nonmonotone line search. Unlike traditional optimization methods, Q-gradient descent can incorporate any objective statistic and by varying the actions we gain insight into the type of learning rate adjustment strategies that are successful for neural network optimization.", "title": "" }, { "docid": "28a6111c13e9554bf32533f13e56e92b", "text": "OBJECTIVES\nTo better categorize the epidemiologic profile, clinical features, and disease associations of loose anagen hair syndrome (LAHS) compared with other forms of childhood alopecia.\n\n\nDESIGN\nRetrospective survey.\n\n\nSETTING\nAcademic pediatric dermatology practice. Patients Three hundred seventy-four patients with alopecia referred from July 1, 1997, to June 31, 2007.\n\n\nMAIN OUTCOME MEASURES\nEpidemiologic data for all forms of alopecia were ascertained, such as sex, age at onset, age at the time of evaluation, and clinical diagnosis. Patients with LAHS were further studied by the recording of family history, disease associations, hair-pull test or biopsy results, hair color, laboratory test result abnormalities, initial treatment, and involvement of eyelashes, eyebrows, and nails.\n\n\nRESULTS\nApproximately 10% of all children with alopecia had LAHS. The mean age (95% confidence interval) at onset differed between patients with LAHS (2.8 [1.2-4.3] years) vs patients without LAHS (7.1 [6.6-7.7] years) (P < .001), with 3 years being the most common age at onset for patients with LAHS. All but 1 of 37 patients with LAHS were female. The most common symptom reported was thin, sparse hair. 
Family histories were significant for LAHS (n = 1) and for alopecia areata (n = 3). In 32 of 33 patients, trichograms showed typical loose anagen hairs. Two children had underlying genetic syndromes. No associated laboratory test result abnormalities were noted among patients who underwent testing.\n\n\nCONCLUSIONS\nLoose anagen hair syndrome is a common nonscarring alopecia in young girls with a history of sparse or fine hair. Before ordering extensive blood testing in young girls with diffusely thin hair, it is important to perform a hair-pull test, as a trichogram can be instrumental in the confirmation of a diagnosis of LAHS.", "title": "" }, { "docid": "c7d17145605864aa28106c14954dcae5", "text": "Person re-identification (ReID) is to identify pedestrians observed from different camera views based on visual appearance. It is a challenging task due to large pose variations, complex background clutters and severe occlusions. Recently, human pose estimation by predicting joint locations was largely improved in accuracy. It is reasonable to use pose estimation results for handling pose variations and background clutters, and such attempts have obtained great improvement in ReID performance. However, we argue that the pose information was not well utilized and hasn't yet been fully exploited for person ReID. In this work, we introduce a novel framework called Attention-Aware Compositional Network (AACN) for person ReID. AACN consists of two main components: Pose-guided Part Attention (PPA) and Attention-aware Feature Composition (AFC). PPA is learned and applied to mask out undesirable background features in pedestrian feature maps. Furthermore, pose-guided visibility scores are estimated for body parts to deal with part occlusion in the proposed AFC module. Extensive experiments with ablation analysis show the effectiveness of our method, and state-of-the-art results are achieved on several public datasets, including Market-1501, CUHK03, CUHK01, SenseReID, CUHK03-NP and DukeMTMC-reID.", "title": "" }, { "docid": "1093353b15819a11c94467fd8df83ebe", "text": "Multiple Criteria Decision Making (MCDM) shows signs of becoming a maturing field. There are four quite distinct families of methods: (i) the outranking, (ii) the value and utility theory based, (iii) the multiple objective programming, and (iv) group decision and negotiation theory based methods. Fuzzy MCDM has basically been developed along the same lines, although with the help of fuzzy set theory a number of innovations have been made possible; the most important methods are reviewed and a novel approach interdependence in MCDM is introduced.", "title": "" }, { "docid": "a4073ab337c0d4ef73dceb1a32e1f878", "text": "Conditional belief networks introduce stochastic binary variables in neural networks. Contrary to a classical neural network, a belief network can predict more than the expected value of the output Y given the input X . It can predict a distribution of outputs Y which is useful when an input can admit multiple outputs whose average is not necessarily a valid answer. Such networks are particularly relevant to inverse problems such as image prediction for denoising, or text to speech. However, traditional sigmoid belief networks are hard to train and are not suited to continuous problems. This work introduces a new family of networks called linearizing belief nets or LBNs. A LBN decomposes into a deep linear network where each linear unit can be turned on or off by non-deterministic binary latent units. 
It is a universal approximator of real-valued conditional distributions and can be trained using gradient descent. Moreover, the linear pathways efficiently propagate continuous information and they act as multiplicative skip-connections that help optimization by removing gradient diffusion. This yields a model which trains efficiently and improves the state-of-the-art on image denoising and facial expression generation with the Toronto faces dataset.", "title": "" }, { "docid": "9dc97d467a82dcd0a823283e7cca3a8f", "text": "Objective\nQuantify physiologically acceptable PICU-discharge vital signs and develop machine learning models to predict these values for individual patients throughout their PICU episode.\n\n\nMethods\nEMR data from 7256 survivor PICU episodes (5632 patients) collected between 2009 and 2017 at Children's Hospital Los Angeles was analyzed. Each episode contained 375 variables representing physiology, labs, interventions, and drugs. Between medical and physical discharge, when clinicians determined the patient was ready for ICU discharge, they were assumed to be in a physiologically acceptable state space (PASS) for discharge. Each patient's heart rate, systolic blood pressure, diastolic blood pressure in the PASS window were measured and compared to age-normal values, regression-quantified PASS predictions, and recurrent neural network (RNN) PASS predictions made 12 hours after PICU admission.\n\n\nResults\nMean absolute errors (MAEs) between individual PASS values and age-normal values (HR: 21.0 bpm; SBP: 10.8 mm Hg; DBP: 10.6 mm Hg) were greater (p < .05) than regression prediction MAEs (HR: 15.4 bpm; SBP: 9.9 mm Hg; DBP: 8.6 mm Hg). The RNN models best approximated individual PASS values (HR: 12.3 bpm; SBP: 7.6 mm Hg; DBP: 7.0 mm Hg).\n\n\nConclusions\nThe RNN model predictions better approximate patient-specific PASS values than regression and age-normal values.", "title": "" } ]
scidocsrr
6948cdbdb094115f74aa43a1aa1cd499
Deep Gaussian Process Regression (DGPR)
[ { "docid": "ad9d3b13795f7708c634d23615f2dd35", "text": "We introduce a variational inference framework for training the Gaussian process latent variable model and thus performing Bayesian nonlinear dimensionality reduction. This method allows us to variationally integrate out the input variables of the Gaussian process and compute a lower bound on the exact marginal likelihood of the nonlinear latent variable model. The maximization of the variational lower bound provides a Bayesian training procedure that is robust to overfitting and can automatically select the dimensionality of the nonlinear latent space. We demonstrate our method on real world datasets. The focus in this paper is on dimensionality reduction problems, but the methodology is more general. For example, our algorithm is immediately applicable for training Gaussian process models in the presence of missing or uncertain inputs.", "title": "" } ]
[ { "docid": "779d5380c72827043111d00510e32bfd", "text": "OBJECTIVE\nThe purpose of this review is 2-fold. The first is to provide a review for physiatrists already providing care for women with musculoskeletal pelvic floor pain and a resource for physiatrists who are interested in expanding their practice to include this patient population. The second is to describe how musculoskeletal dysfunctions involving the pelvic floor can be approached by the physiatrist using the same principles used to evaluate and treat others dysfunctions in the musculoskeletal system. This discussion clarifies that evaluation and treatment of pelvic floor pain of musculoskeletal origin is within the scope of practice for physiatrists. The authors review the anatomy of the pelvic floor, including the bony pelvis and joints, muscle and fascia, and the peripheral and autonomic nervous systems. Pertinent history and physical examination findings are described. The review concludes with a discussion of differential diagnosis and treatment of musculoskeletal pelvic floor pain in women. Improved recognition of pelvic floor dysfunction by healthcare providers will reduce impairment and disability for women with pelvic floor pain. A physiatrist is in the unique position to treat the musculoskeletal causes of this condition because it requires an expert grasp of anatomy, function, and the linked relationship between the spine and pelvis. Further research regarding musculoskeletal causes and treatment of pelvic floor pain will help validate these concepts and improve awareness and care for women limited by this condition.", "title": "" }, { "docid": "6329341da2a7e0957f2abde7f98764f9", "text": "\"Enterprise Information Portals are applications that enable companies to unlock internally and externally stored information, and provide users a single gateway to personalized information needed to make informed business decisions. \" They are: \". . . an amalgamation of software applications that consolidate, manage, analyze and distribute information across and outside of an enterprise (including Business Intelligence, Content Management, Data Warehouse & Mart and Data Management applications.)\"", "title": "" }, { "docid": "8a290f2a7549bbe7d852403924ee8519", "text": "In this paper we describe a heavily constrained university timetabling problem, and our genetic algorithm based approach to solve it. A problem-specific chromosome representation and knowledge-augmented genetic operators have been developed; these operators ‘ intelli gently’ avoid building ill egal timetables. The prototype timetabling system which is presented has been implemented in C and PROLOG, and includes an interactive graphical user interface. Tests with real data from our university were performed and yield promising results.", "title": "" }, { "docid": "49d533bf41f18bc96c404bb9a8bd12ae", "text": "A back-cavity shielded bow-tie antenna system working at 900MHz center frequency for ground-coupled GPR application is investigated numerically and experimentally in this paper. Bow-tie geometrical structure is modified for a compact design and back-cavity assembly. A layer of absorber is employed to overcome the back reflection by omni-directional radiation pattern of a bow-tie antenna in H-plane, thus increasing the SNR and improve the isolation between T and R antennas as well. The designed antenna system is applied to a prototype GPR system. 
Tested data shows that the back-cavity shielded antenna works satisfactorily in the 900MHz GPR system.", "title": "" }, { "docid": "9d45c1deaf429be2a5c33cd44b04290e", "text": "In this paper, a new omni-directional driving system with one spherical wheel is proposed. This system is able to overcome the existing driving systems with structural limitations in vertical, horizontal and diagonal movement. This driving system was composed of two stepping motors, a spherical wheel covered by a ball bearing, a weight balancer for the elimination of eccentricity, and ball plungers for balance. All parts of this structure is located at same distance on the center because the center of gravity of this system must be placed at the center of the system. An own ball bearing was designed for settled rotation and smooth direction change of a spherical wheel. The principle of an own ball bearing is the reversal of the ball mouse. Steel as the material of ball in the own ball bearing, was used for the prevention the slip with ground. One of the stepping motors is used for driving the spherical wheel. This spherical wheel is stable because of the support of ball bearing. And the other enables to move in a wanted direction while it rotates based on the central axis. The ATmega128 chip is used for the control of two stepping motors. To verify the proposed system, driving experiments was executed in variety of environments. Finally, the performance and the validity of the omni-directional driving system were confirmed.", "title": "" }, { "docid": "7645c6a0089ab537cb3f0f82743ce452", "text": "Behavioral studies of facial emotion recognition (FER) in autism spectrum disorders (ASD) have yielded mixed results. Here we address demographic and experiment-related factors that may account for these inconsistent findings. We also discuss the possibility that compensatory mechanisms might enable some individuals with ASD to perform well on certain types of FER tasks in spite of atypical processing of the stimuli, and difficulties with real-life emotion recognition. Evidence for such mechanisms comes in part from eye-tracking, electrophysiological, and brain imaging studies, which often show abnormal eye gaze patterns, delayed event-related-potential components in response to face stimuli, and anomalous activity in emotion-processing circuitry in ASD, in spite of intact behavioral performance during FER tasks. We suggest that future studies of FER in ASD: 1) incorporate longitudinal (or cross-sectional) designs to examine the developmental trajectory of (or age-related changes in) FER in ASD and 2) employ behavioral and brain imaging paradigms that can identify and characterize compensatory mechanisms or atypical processing styles in these individuals.", "title": "" }, { "docid": "52908b59435aa899d9e452e71a87e461", "text": "Scalability is a desirable attribute of a network, system, or process. Poor scalability can result in poor system performance, necessitating the reengineering or duplication of systems. While scalability is valued, its characteristics and the characteristics that undermine it are usually only apparent from the context. Here, we attempt to define different aspects of scalability, such as structural scalability and load scalability. Structural scalability is the ability of a system to expand in a chosen dimension without major modifications to its architecture. Load scalability is the ability of a system to perform gracefully as the offered traffic increases. 
It is argued that systems with poor load scalability may exhibit it because they repeatedly engage in wasteful activity, because they are encumbered with poor scheduling algorithms, because they cannot fully take advantage of parallelism, or because they are algorithmically inefficient. We qualitatively illustrate these concepts with classical examples from the literature of operating systems and local area networks, as well as an example of our own. Some of these are accompanied by rudimentary delay analysis.", "title": "" }, { "docid": "db00803f1e11d3b8723f77cd72834ba4", "text": "Metal implants such as hip prostheses and dental fillings produce streak and star artifacts in the reconstructed computed tomography (CT) images. Due to these artifacts, the CT image may not be diagnostically usable. A new reconstruction procedure is proposed that reduces the streak artifacts and that might improve the diagnostic value of the CT images. The procedure starts with a maximum a posteriori (MAP) reconstruction using an iterative reconstruction algorithm and a multimodal prior. This produces an artifact-free constrained image. This constrained image is the basis for an image-based projection completion procedure. The algorithm was validated on simulations, phantom and patient data, and compared with other metal artifact reduction algorithms.", "title": "" }, { "docid": "4704f3ed7a5d5d9b244689019025730f", "text": "To address the need for fundamental universally valid definitions of exact bandwidth and quality factor (Q) of tuned antennas, as well as the need for efficient accurate approximate formulas for computing this bandwidth and Q, exact and approximate expressions are found for the bandwidth and Q of a general single-feed (one-port) lossy or lossless linear antenna tuned to resonance or antiresonance. The approximate expression derived for the exact bandwidth of a tuned antenna differs from previous approximate expressions in that it is inversely proportional to the magnitude |Z'/sub 0/(/spl omega//sub 0/)| of the frequency derivative of the input impedance and, for not too large a bandwidth, it is nearly equal to the exact bandwidth of the tuned antenna at every frequency /spl omega//sub 0/, that is, throughout antiresonant as well as resonant frequency bands. It is also shown that an appropriately defined exact Q of a tuned lossy or lossless antenna is approximately proportional to |Z'/sub 0/(/spl omega//sub 0/)| and thus this Q is approximately inversely proportional to the bandwidth (for not too large a bandwidth) of a simply tuned antenna at all frequencies. The exact Q of a tuned antenna is defined in terms of average internal energies that emerge naturally from Maxwell's equations applied to the tuned antenna. These internal energies, which are similar but not identical to previously defined quality-factor energies, and the associated Q are proven to increase without bound as the size of an antenna is decreased. Numerical solutions to thin straight-wire and wire-loop lossy and lossless antennas, as well as to a Yagi antenna and a straight-wire antenna embedded in a lossy dispersive dielectric, confirm the accuracy of the approximate expressions and the inverse relationship between the defined bandwidth and the defined Q over frequency ranges that cover several resonant and antiresonant frequency bands.", "title": "" }, { "docid": "eaddba3b27a3a1faf9e957917d102d3f", "text": "Some recent modifications of the protein assay by the method of Lowry, Rosebrough, Farr, and Randall (1951, .I. Biol. 
Chem. 193, 265-275) have been reexamined and altered to provide a consolidated method which is simple, rapid, objective, and more generally applicable. A DOC-TCA protein precipitation technique provides for rapid quantitative recovery of soluble and membrane proteins from interfering substances even in very dilute solutions (< 1 pg/ml of protein). SDS is added to alleviate possible nonionic and cationic detergent and lipid interferences, and to provide mild conditions for rapid denaturation of membrane and proteolipid proteins. A simple method based on a linear log-log protein standard curve is presented to permit rapid and totally objective protein analysis using small programmable calculators. The new modification compared favorably with the original method of Lowry ef al.", "title": "" }, { "docid": "fcdd881b983cfd011e15de473f389572", "text": "In this paper we describe the development, experiments and evaluation of the iFloor, an interactive floor prototype installed at the local central municipality library. The primary purpose of the iFloor prototype is to support and stimulate community interaction between collocated people. The context of the library demands that any user can walk up and use the prototype without any devices or prior introduction. To achieve this, the iFloor proposes innovative interaction (modes/paradigms/patterns) for floor surfaces through the means of video tracking. Browsing and selecting content is done in a collaborative process and mobile phones are used for posting messages onto the floor. The iFloor highlights topics on social issues of ubiquitous computing environments in public spaces, and provides an example of how to exploit human spatial movements, positions and arrangements in interaction with computers.", "title": "" }, { "docid": "3534e4321560c826057e02c52d4915dd", "text": "While hexahedral mesh elements are preferred by a variety of simulation techniques, constructing quality all-hex meshes of general shapes remains a challenge. An attractive hex-meshing approach, often referred to as submapping, uses a low distortion mapping between the input model and a PolyCube (a solid formed from a union of cubes), to transfer a regular hex grid from the PolyCube to the input model. Unfortunately, the construction of suitable PolyCubes and corresponding volumetric maps for arbitrary shapes remains an open problem. Our work introduces a new method for computing low-distortion volumetric PolyCube deformations of general shapes and for subsequent all-hex remeshing. For a given input model, our method simultaneously generates an appropriate PolyCube structure and mapping between the input model and the PolyCube. From these we automatically generate good quality all-hex meshes of complex natural and man-made shapes.", "title": "" }, { "docid": "920b20fe7f4d7a63052b1058d67a50fc", "text": "Aroma is one of the most important quality traits of basmati rice (Oryza sativa L) that leads to high consumer acceptance. Earlier three significant QTLs for aroma, namely aro3-1, aro4-1 and aro8-1, have been mapped on rice chromosomes 3, 4 and 8, respectively, using a population of recombinant inbred lines (RILs) derived from a cross between Pusa 1121 (a basmati quality variety) and Pusa 1342 (a non-aromatic variety). For fine mapping of these QTLs, 184 F6 RILs were grown in the Kharif season of 2005 at New Delhi and Karnal, India. A total of 115 new SSR markers covering the three QTL intervals were designed and screened for parental polymorphism. 
Of these, 26 markers were polymorphic between parents, eight for the interval aro3-1, eight for the interval aro4-1 and ten for the interval aro8-1, thus enriching the density of SSR markers in these QTL intervals. Revised genetic maps were constructed by adding 23 of these new markers to the earlier map, by giving physical order of the markers in the pseudomolecules a preference. In the revised maps, the interval for QTL aro4-1 could not be improved further but QTL aro3-1 was narrowed down to an interval of 390 kbp from the earlier reported interval of 8.6 Mbp and similarly the QTL aro8-1 was narrowed down to a physical interval of 430 kbp. The numbers of candidate genes in the aro3-1 and aro8-1 intervals have now been reduced to 51 and 66, respectively. The badh2 gene on chromosome 8 was not associated with the aroma QTL on this chromosome.", "title": "" }, { "docid": "51bc9449c1dd9518513945a4a2669806", "text": "We present a robust model to locate facial landmarks under different views and possibly severe occlusions. To build reliable relationships between face appearance and shape with large view variations, we propose to formulate face alignment as an l1-induced Stagewise Relational Dictionary (SRD) learning problem. During each training stage, the SRD model learns a relational dictionary to capture consistent relationships between face appearance and shape, which are respectively modeled by the pose-indexed image features and the shape displacements for current estimated landmarks. During testing, the SRD model automatically selects a sparse set of the most related shape displacements for the testing face and uses them to refine its shape iteratively. To locate facial landmarks under occlusions, we further propose to learn an occlusion dictionary to model different kinds of partial face occlusions. By deploying the occlusion dictionary into the SRD model, the alignment performance for occluded faces can be further improved. Our algorithm is simple, effective, and easy to implement. Extensive experiments on two benchmark datasets and two newly built datasets have demonstrated its superior performances over the state-of-the-art methods, especially for faces with large view variations and/or occlusions.", "title": "" }, { "docid": "81f5905805f6faea108995cbe74a8435", "text": "In simultaneous electroencephalogram (EEG) and functional magnetic resonance imaging (fMRI) studies, average reference (AR), and digitally linked mastoid (LM) are popular re-referencing techniques in event-related potential (ERP) analyses. However, they may introduce their own physiological signals and alter the EEG/ERP outcome. A reference electrode standardization technique (REST) that calculated a reference point at infinity was proposed to solve this problem. To confirm the advantage of REST in ERP analyses of synchronous EEG-fMRI studies, we compared the reference effect of AR, LM, and REST on task-related ERP results of a working memory task during an fMRI scan. As we hypothesized, we found that the adopted reference did not change the topography map of ERP components (N1 and P300 in the present study), but it did alter the task-related effect on ERP components. LM decreased or eliminated the visual working memory (VWM) load effect on P300, and the AR distorted the distribution of VWM location-related effect at left posterior electrodes as shown in the statistical parametric scalp mapping (SPSM) of N1. 
ERP cortical source estimates, which are independent of the EEG reference choice, were used as the golden standard to infer the relative utility of different references on the ERP task-related effect. By comparison, REST reference provided a more integrated and reasonable result. These results were further confirmed by the results of fMRI activations and a corresponding EEG-only study. Thus, we recommend the REST, especially with a realistic head model, as the optimal reference method for ERP data analysis in simultaneous EEG-fMRI studies.", "title": "" }, { "docid": "dcec6ef9e08d7bcfa86aca8d045b6bd4", "text": "This article examines the intellectual and institutional factors that contributed to the collaboration of neuropsychiatrist Warren McCulloch and mathematician Walter Pitts on the logic of neural networks, which culminated in their 1943 publication, \"A Logical Calculus of the Ideas Immanent in Nervous Activity.\" Historians and scientists alike often refer to the McCulloch-Pitts paper as a landmark event in the history of cybernetics, and fundamental to the development of cognitive science and artificial intelligence. This article seeks to bring some historical context to the McCulloch-Pitts collaboration itself, namely, their intellectual and scientific orientations and backgrounds, the key concepts that contributed to their paper, and the institutional context in which their collaboration was made. Although they were almost a generation apart and had dissimilar scientific backgrounds, McCulloch and Pitts had similar intellectual concerns, simultaneously motivated by issues in philosophy, neurology, and mathematics. This article demonstrates how these issues converged and found resonance in their model of neural networks. By examining the intellectual backgrounds of McCulloch and Pitts as individuals, it will be shown that besides being an important event in the history of cybernetics proper, the McCulloch-Pitts collaboration was an important result of early twentieth-century efforts to apply mathematics to neurological phenomena.", "title": "" }, { "docid": "c9b2525d34eb58130d3f8c5d68bb8714", "text": "Cloud gaming is a new way to deliver high-quality gaming experience to gamers anywhere and anytime. In cloud gaming, sophisticated game software runs on powerful servers in data centers, rendered game scenes are streamed to gamers over the Internet in real time, and the gamers use light-weight software executed on heterogeneous devices to interact with the games. Due to the proliferation of high-speed networks and cloud computing, cloud gaming has attracted tremendous attentions in both the academia and industry since late 2000's. In this paper, we survey the latest cloud gaming research from different aspects, spanning over cloud gaming platforms, optimization techniques, and commercial cloud gaming services. The readers will gain the overview of cloud gaming research and get familiar with the recent developments in this area.", "title": "" }, { "docid": "2d0c5f6be15408d4814b22d28b1541af", "text": "OBJECTIVE\nOur previous study has found that circulating microRNA (miRNA, or miR) -122, -140-3p, -720, -2861, and -3149 are significantly elevated during early stage of acute coronary syndrome (ACS). This study was conducted to determine the origin of these elevated plasma miRNAs in ACS.\n\n\nMETHODS\nqRT-PCR was performed to detect the expression profiles of these 5 miRNAs in liver, spleen, lung, kidney, brain, skeletal muscles, and heart. 
To determine their origins, these miRNAs were detected in myocardium of acute myocardial infarction (AMI), and as well in platelets and peripheral blood mononuclear cells (PBMCs, including monocytes, circulating endothelial cells (CECs) and lymphocytes) of the AMI pigs and ACS patients.\n\n\nRESULTS\nMiR-122 was specifically expressed in liver, and miR-140-3p, -720, -2861, and -3149 were highly expressed in heart. Compared with the sham pigs, miR-122 was highly expressed in the border zone of the ischemic myocardium in the AMI pigs without ventricular fibrillation (P < 0.01), miR-122 and -720 were decreased in platelets of the AMI pigs, and miR-122, -140-3p, -720, -2861, and -3149 were increased in PBMCs of the AMI pigs (all P < 0.05). Compared with the non-ACS patients, platelets miR-720 was decreased and PBMCs miR-122, -140-3p, -720, -2861, and -3149 were increased in the ACS patients (all P < 0.01). Furthermore, PBMCs miR-122, -720, and -3149 were increased in the AMI patients compared with the unstable angina (UA) patients (all P < 0.05). Further origin identification revealed that the expression levels of miR-122 in CECs and lymphocytes, miR-140-3p and -2861 in monocytes and CECs, miR-720 in monocytes, and miR-3149 in CECs were greatly up-regulated in the ACS patients compared with the non-ACS patients, and were higher as well in the AMI patients than that in the UA patients except for the miR-122 in CECs (all P < 0.05).\n\n\nCONCLUSION\nThe elevated plasma miR-122, -140-3p, -720, -2861, and -3149 in the ACS patients were mainly originated from CECs and monocytes.", "title": "" }, { "docid": "32135b15574c700a5c1b47671db7072b", "text": "The problem of color constancy may be solved if we can recover the physical properties of illuminants and surfaces from photosensor responses. We consider this problem within the framework of Bayesian decision theory. First, we model the relation among illuminants, surfaces, and photosensor responses. Second, we construct prior distributions that describe the probability that particular illuminants and surfaces exist in the world. Given a set of photosensor responses, we can then use Bayes's rule to compute the posterior distribution for the illuminants and the surfaces in the scene. There are two widely used methods for obtaining a single best estimate from a posterior distribution. These are maximum a posteriori (MAP) and minimum mean-square-error (MMSE) estimation. We argue that neither is appropriate for perception problems. We describe a new estimator, which we call the maximum local mass (MLM) estimate, that integrates local probability density. The new method uses an optimality criterion that is appropriate for perception tasks: It finds the most probable approximately correct answer. For the case of low observation noise, we provide an efficient approximation. We develop the MLM estimator for the color-constancy problem in which flat matte surfaces are uniformly illuminated. In simulations we show that the MLM method performs better than the MAP estimator and better than a number of standard color-constancy algorithms. We note conditions under which even the optimal estimator produces poor estimates: when the spectral properties of the surfaces in the scene are biased.", "title": "" }, { "docid": "1b0cb70fb25d86443a01a313371a27ae", "text": "We present a protocol for general state machine replication – a method that provides strong consistency – that has high performance in a wide-area network. 
In particular, our protocol Mencius has high throughput under high client load and low latency under low client load even under changing wide-area network environment and client load. We develop our protocol as a derivation from the well-known protocol Paxos. Such a development can be changed or further refined to take advantage of specific network or application requirements.", "title": "" } ]
scidocsrr
025cceb8328c515540adfee20ae767a7
Flexible IoT security middleware for end-to-end cloud-fog communication
[ { "docid": "1a1c9b8fa2b5fc3180bc1b504def5ea1", "text": "Wireless sensor networks can be deployed in any attended or unattended environments like environmental monitoring, agriculture, military, health care etc., where the sensor nodes forward the sensing data to the gateway node. As the sensor node has very limited battery power and cannot be recharged after deployment, it is very important to design a secure, effective and light weight user authentication and key agreement protocol for accessing the sensed data through the gateway node over insecure networks. Most recently, Turkanovic et al. proposed a light weight user authentication and key agreement protocol for accessing the services of the WSNs environment and claimed that the same protocol is efficient in terms of security and complexities than related existing protocols. In this paper, we have demonstrated several security weaknesses of the Turkanovic et al. protocol. Additionally, we have also illustrated that the authentication phase of the Turkanovic et al. is not efficient in terms of security parameters. In order to fix the above mentioned security pitfalls, we have primarily designed a novel architecture for the WSNs environment and basing upon which a proposed scheme has been presented for user authentication and key agreement scheme. The security validation of the proposed protocol has done by using BAN logic, which ensures that the protocol achieves mutual authentication and session key agreement property securely between the entities involved. Moreover, the proposed scheme has simulated using well popular AVISPA security tool, whose simulation results show that the protocol is SAFE under OFMC and CL-AtSe models. Besides, several security issues informally confirm that the proposed protocol is well protected in terms of relevant security attacks including the above mentioned security pitfalls. The proposed protocol not only resists the above mentioned security weaknesses, but also achieves complete security requirements including specially energy efficiency, user anonymity, mutual authentication and user-friendly password change phase. Performance comparison section ensures that the protocol is relatively efficient in terms of complexities. The security and performance analysis makes the system so efficient that the proposed protocol can be implemented in real-life application. © 2015 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "77b06794253e412a26860fb82e263181", "text": "Internet of Things (IoT) is an ubiquitous concept where physical objects are connected over the internet and are provided with unique identifiers to enable their self-identification to other devices and the ability to transmit data over the network. Sensor nodes along with their heterogeneous nature are the main part of the IoT and may act as internet hosts or clients. Communication security and end-users privacy protection is a major concern in the development of IoT especially if these IP-enabled sensor nodes have limited resources. Secret key distribution for heterogeneous sensors becomes challenging due to the inconsistencies in their cryptographic primitives and computational resources as in Healthcare applications. This paper introduces a new End-to-End key establishment protocol that is lightweight for resource-constrained sensors and secure through strong encryption and authentication. 
By using this protocol, resource-constrained nodes can also benefit from the same security functionalities that are typical of unconstrained domains, without however having to execute computationally intensive operations. The protocol is based on cooperation by offloading the heavy cryptographic operations of constrained nodes to the neighboring trusted nodes or devices. Security analysis and performance evaluation results show that the proposed protocol is secure and is sufficiently energy efficient.", "title": "" }, { "docid": "0d81a7af3c94e054841e12d4364b448c", "text": "Internet of Things (IoT) is characterized by heterogeneous technologies, which concur to the provisioning of innovative services in various application domains. In this scenario, the satisfaction of security and privacy requirements plays a fundamental role. Such requirements include data confidentiality and authentication, access control within the IoT network, privacy and trust among users and things, and the enforcement of security and privacy policies. Traditional security countermeasures cannot be directly applied to IoT technologies due to the different standards and communication stacks involved. Moreover, the high number of interconnected devices arises scalability issues; therefore a flexible infrastructure is needed able to deal with security threats in such a dynamic environment. In this survey we present the main research challenges and the existing solutions in the field of IoT security, identifying open issues, and suggesting some hints for future research. During the last decade, Internet of Things (IoT) approached our lives silently and gradually, thanks to the availability of wireless communication systems (e.g., RFID, WiFi, 4G, IEEE 802.15.x), which have been increasingly employed as technology driver for crucial smart monitoring and control applications [1–3]. Nowadays, the concept of IoT is many-folded, it embraces many different technologies, services, and standards and it is widely perceived as the angular stone of the ICT market in the next ten years, at least [4–6]. From a logical viewpoint, an IoT system can be depicted as a collection of smart devices that interact on a collabo-rative basis to fulfill a common goal. At the technological floor, IoT deployments may adopt different processing and communication architectures, technologies, and design methodologies, based on their target. For instance, the same IoT system could leverage the capabilities of a wireless sensor network (WSN) that collects the environmental information in a given area and a set of smartphones on top of which monitoring applications run. In the middle, a standardized or proprietary middle-ware could be employed to ease the access to virtualized resources and services. The middleware, in turn, might be implemented using cloud technologies, centralized overlays , or peer to peer systems [7]. Of course, this high level of heterogeneity, coupled to the wide scale of IoT systems, is expected to magnify security threats of the current Internet, which is being increasingly used to let interact humans, machines, and robots, in any combination. More in details, traditional security countermeasures and privacy enforcement cannot be directly applied to IoT technologies due to …", "title": "" } ]
[ { "docid": "d78117c809f963a2983c262cca2399e9", "text": "Range detection applications based on radar can be separated into measurements of short distances with high accuracy or large distances with low accuracy. In this paper an approach is investigated to combine the advantages of both principles. Therefore an FMCW radar will be extended with an additional phase evaluation technique. In order to realize this combination an increased range resolution of the FMCW radar is required. This paper describes a frequency estimation algorithm to increase the frequency resolution and hence the range resolution of an FMCW radar at 24 GHz for a line based range detection system to evaluate the possibility of an extended FMCW radar using the phase information.", "title": "" }, { "docid": "cf074f806c9b78947c54fb7f41167d9e", "text": "Applications of Machine Learning to Support Dementia Care through Commercially Available Off-the-Shelf Sensing", "title": "" }, { "docid": "8d3e7a6032d6e017537b68b47c4dae38", "text": "With the increasing complexity of modern radar system and the increasing number of devices used in the radar system, it would be highly desirable to model the complete radar system including hardware and software by a single tool. This paper presents a novel software-based simulation method for modern radar system which here is automotive radar application. Various functions of automotive radar, like target speed, distance and azimuth and elevation angle detection, are simulated in test case and the simulation results are compared with the measurement results.", "title": "" }, { "docid": "bfe8e4093219080ef7c377a67184ff00", "text": "A clothoid has the property that its curvature varies linearly with arclength. This is a useful feature for the path of a vehicle whose turning radius is controlled as a linear function of the distance travelled. Highways, railways and the paths of car-like robots may be composed of straight line segments, clothoid segments and circular arcs. Control polylines are used in computer aided design and computer aided geometric design applications to guide composite curves during the design phase. This article examines the use of a control polyline to guide a curve composed of segments of clothoids, straight lines, and circular arcs. © 2005 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "b9e8007220be2887b9830c05c283f8a5", "text": "INTRODUCTION\nHealth-care professionals are trained health-care providers who occupy a potential vanguard position in human immunodeficiency virus (HIV)/acquired immune deficiency syndrome (AIDS) prevention programs and the management of AIDS patients. This study was performed to assess HIV/AIDS-related knowledge, attitude, and practice (KAP) and perceptions among health-care professionals at a tertiary health-care institution in Uttarakhand, India, and to identify the target group where more education on HIV is needed.\n\n\nMATERIALS AND METHODS\nA cross-sectional KAP survey was conducted among five groups comprising consultants, residents, medical students, laboratory technicians, and nurses. Probability proportional to size sampling was used for generating random samples. Data analysis was performed using charts and tables in Microsoft Excel 2016, and statistical analysis was performed using the Statistical Package for the Social Science software version 20.0.\n\n\nRESULTS\nMost participants had incomplete knowledge regarding the various aspects of HIV/AIDS. 
Attitude in all the study groups was receptive toward people living with HIV/AIDS. Practical application of knowledge was best observed in the clinicians as well as medical students. Poor performance by technicians and nurses was observed in prevention and prophylaxis. All groups were well informed about the National AIDS Control Policy except technicians.\n\n\nCONCLUSION\nPoor knowledge about HIV infection, particularly among the young medical students and paramedics, is evidence of the lacunae in the teaching system, which must be kept in mind while formulating teaching programs. As suggested by the respondents, Information Education Communication activities should be improvised making use of print, electronic, and social media along with interactive awareness sessions, regular continuing medical educations, and seminars to ensure good quality of safe modern medical care.", "title": "" }, { "docid": "2a8aa90a9e45f58486cb712fe1271842", "text": "Existing relation classification methods that rely on distant supervision assume that a bag of sentences mentioning an entity pair are all describing a relation for the entity pair. Such methods, performing classification at the bag level, cannot identify the mapping between a relation and a sentence, and largely suffers from the noisy labeling problem. In this paper, we propose a novel model for relation classification at the sentence level from noisy data. The model has two modules: an instance selector and a relation classifier. The instance selector chooses high-quality sentences with reinforcement learning and feeds the selected sentences into the relation classifier, and the relation classifier makes sentencelevel prediction and provides rewards to the instance selector. The two modules are trained jointly to optimize the instance selection and relation classification processes. Experiment results show that our model can deal with the noise of data effectively and obtains better performance for relation classification at the sentence level.", "title": "" }, { "docid": "259df0ad497b5fc3318dfca7f8ee1f9a", "text": "BACKGROUND\nColorectal cancer is a leading cause of morbidity and mortality, especially in the Western world. The human and financial costs of this disease have prompted considerable research efforts to evaluate the ability of screening tests to detect the cancer at an early curable stage. Tests that have been considered for population screening include variants of the faecal occult blood test, flexible sigmoidoscopy and colonoscopy. 
Reducing mortality from colorectal cancer (CRC) may be achieved by the introduction of population-based screening programmes.\n\n\nOBJECTIVES\nTo determine whether screening for colorectal cancer using the faecal occult blood test (guaiac or immunochemical) reduces colorectal cancer mortality and to consider the benefits, harms and potential consequences of screening.\n\n\nSEARCH STRATEGY\nPublished and unpublished data for this review were identified by: Reviewing studies included in the previous Cochrane review; Searching several electronic databases (Cochrane Library, Medline, Embase, CINAHL, PsychInfo, Amed, SIGLE, HMIC); and Writing to the principal investigators of potentially eligible trials.\n\n\nSELECTION CRITERIA\nWe included in this review all randomised trials of screening for colorectal cancer that compared faecal occult blood test (guaiac or immunochemical) on more than one occasion with no screening and reported colorectal cancer mortality.\n\n\nDATA COLLECTION AND ANALYSIS\nData from the eligible trials were independently extracted by two reviewers. The primary data analysis was performed using the group participants were originally randomised to ('intention to screen'), whether or not they attended screening; a secondary analysis adjusted for non-attendance. We calculated the relative risks and risk differences for each trial, and then overall, using fixed and random effects models (including testing for heterogeneity of effects). We identified nine articles concerning four randomised controlled trials and two controlled trials involving over 320,000 participants with follow-up ranging from 8 to 18 years.\n\n\nMAIN RESULTS\nCombined results from the 4 eligible randomised controlled trials show that participants allocated to screening had a 16% reduction in the relative risk of colorectal cancer mortality (RR 0.84, CI: 0.78-0.90). In the 3 studies that used biennial screening (Funen, Minnesota, Nottingham) there was a 15% relative risk reduction (RR 0.85, CI: 0.78-0.92) in colorectal cancer mortality. When adjusted for screening attendance in the individual studies, there was a 25% relative risk reduction (RR 0.75, CI: 0.66 - 0.84) for those attending at least one round of screening using the faecal occult blood test.\n\n\nAUTHORS' CONCLUSIONS\nBenefits of screening include a modest reduction in colorectal cancer mortality, a possible reduction in cancer incidence through the detection and removal of colorectal adenomas, and potentially, the less invasive surgery that earlier treatment of colorectal cancers may involve. Harmful effects of screening include the psycho-social consequences of receiving a false-positive result, the potentially significant complications of colonoscopy or a false-negative result, the possibility of overdiagnosis (leading to unnecessary investigations or treatment) and the complications associated with treatment.", "title": "" }, { "docid": "511991822f427c3f62a4c091594e89e3", "text": "Reinforcement learning has recently gained popularity due to its many successful applications in various fields. In this project reinforcement learning is implemented in a simple warehouse situation where robots have to learn to interact with each other while performing specific tasks. The aim is to study whether reinforcement learning can be used to train multiple agents. Two different methods have been used to achieve this aim, Q-learning and deep Q-learning. 
Due to practical constraints, this paper cannot provide a comprehensive review of real life robot interactions. Both methods are tested on single-agent and multi-agent models in Python computer simulations. The results show that the deep Q-learning model performed better in the multi-agent simulations than the Q-learning model and it was proven that agents can learn to perform their tasks to some degree. Although the outcome of this project cannot yet be considered sufficient for moving the simulation into real life, it was concluded that reinforcement learning and deep learning methods can be seen as suitable for modelling warehouse robots and their interactions.", "title": "" }, { "docid": "83e4ee7cf7a82fcb8cb77f7865d67aa8", "text": "A meta-analysis of the relationship between class attendance in college and college grades reveals that attendance has strong relationships with both class grades (k = 69, N = 21,195, r = .44) and GPA (k = 33, N = 9,243, r = .41). These relationships make class attendance a better predictor of college grades than any other known predictor of academic performance, including scores on standardized admissions tests such as the SAT, high school GPA, study habits, and study skills. Results also show that class attendance explains large amounts of unique variance in college grades because of its relative independence from SAT scores and high school GPA and weak relationship with student characteristics such as conscientiousness and motivation. Mandatory attendance policies appear to have a small positive impact on average grades (k = 3, N = 1,421, d = .21). Implications for theoretical frameworks of student academic performance and educational policy are discussed. Many college instructors exhort their students to attend class as frequently as possible, arguing that high levels of class attendance are likely to increase learning and improve student grades. Such arguments may hold intuitive appeal and are supported by findings linking class attendance to both learning (e.g., Jenne, 1973) and better grades (e.g., Moore et al., 2003), but both students and some educational researchers appear to be somewhat skeptical of the importance of class attendance. This skepticism is reflected in high class absenteeism rates ranging from 18. This article aims to help resolve the debate regarding the importance of class attendance by providing a quantitative review of the literature investigating the relationship of class attendance with both college grades and student characteristics that may influence attendance. At a theoretical level class attendance fits well into frameworks that emphasize the joint role of cognitive ability and motivation in determining learning and work performance (e.g., Kanfer & Ackerman, 1989). Specifically, cognitive ability and motivation influence academic outcomes via two largely distinct mechanisms— one mechanism related to information processing and the other mechanism being behavioral in nature. Cognitive ability influences the degree to which students are able to process, integrate, and remember material presented to them (Humphreys, 1979), a mechanism that explains the substantial predictive validity of SAT scores for college grades (e. & Ervin, 2000). 
Noncognitive attributes such as conscientiousness and achievement motivation are thought to influence grades via their influence on behaviors that facilitate the understanding and …", "title": "" }, { "docid": "788f02363d1cd96cf1786e98deac0a8c", "text": "This paper investigates the use of color information when used within a state-of-the-art large scale image search system. We introduce a simple yet effective and efficient color signature generation procedure. It is used either to produce global or local descriptors. As a global descriptor, it outperforms several state-of-the-art color description methods, in particular the bag-of-words method based on color SIFT. As a local descriptor, our signature is used jointly with SIFT descriptors (no color) to provide complementary information. This significantly improves the recognition rate, outperforming the state of the art on two image search benchmarks. We provide an open source package of our signature (http://www.kooaba.com/en/learnmore/labs/).", "title": "" }, { "docid": "cf0a52fb8b55cf253f560aa8db35717a", "text": "Big Data though it is a hype up-springing many technical challenges that confront both academic research communities and commercial IT deployment, the root sources of Big Data are founded on data streams and the curse of dimensionality. It is generally known that data which are sourced from data streams accumulate continuously making traditional batch-based model induction algorithms infeasible for real-time data mining. Feature selection has been popularly used to lighten the processing load in inducing a data mining model. However, when it comes to mining over high dimensional data the search space from which an optimal feature subset is derived grows exponentially in size, leading to an intractable demand in computation. In order to tackle this problem which is mainly based on the high-dimensionality and streaming format of data feeds in Big Data, a novel lightweight feature selection is proposed. The feature selection is designed particularly for mining streaming data on the fly, by using accelerated particle swarm optimization (APSO) type of swarm search that achieves enhanced analytical accuracy within reasonable processing time. In this paper, a collection of Big Data with exceptionally large degree of dimensionality are put under test of our new feature selection algorithm for performance evaluation.", "title": "" }, { "docid": "f79def9a56be8d91c81385abfc6dbee7", "text": "Computational Creativity is the AI subfield in which we study how to build computational models of creative thought in science and the arts. From an engineering perspective, it is desirable to have concrete measures for assessing the progress made from one version of a program to another, or for comparing and contrasting different software systems for the same creative task. We describe the Turing Test and versions of it which have been used in order to measure progress in Computational Creativity. We show that the versions proposed thus far lack the important aspect of interaction, without which much of the power of the Turing Test is lost. 
We argue that the Turing Test is largely inappropriate for the purposes of evaluation in Computational Creativity, since it attempts to homogenise creativity into a single (human) style, does not take into account the importance of background and contextual information for a creative act, encourages superficial, uninteresting advances in front-ends, and rewards creativity which adheres to a certain style over that which creates something which is genuinely novel. We further argue that although there may be some place for Turing-style tests for Computational Creativity at some point in the future, it is currently untenable to apply any defensible version of the Turing Test. As an alternative to Turing-style tests, we introduce two descriptive models for evaluating creative software, the FACE model which describes creative acts performed by software in terms of tuples of generative acts, and the IDEA model which describes how such creative acts can have an impact upon an ideal audience, given ideal information about background knowledge and the software development process. While these models require further study and elaboration, we believe that they can be usefully applied to current systems as well as guiding further development of creative systems. 1 The Turing Test and Computational Creativity The Turing Test (TT), in which a computer and human are interrogated, with the computer considered intelligent if the human interrogator is unable to distinguish between them, is principally a philosophical construct proposed by Alan Turing as a way of determining whether AI has achieved its goal of simulating intelligence [1]. The TT has provoked much discussion, both historical and contemporary, however this has principally been within the philosophy of AI: most AI researchers see it as a distraction from their goals, encouraging a mere trickery of intelligence and ever more sophisticated natural language front ends, as opposed to focussing on real problems. Despite the appeal of the (as yet unawarded) Loebner Prize, most subfields of AI have developed and follow their own evaluation criteria and methodologies, which have little to do with the TT. Computational Creativity (CC) is a subfield of AI, in which researchers aim to model creative thought by building programs which can produce ideas and artefacts which are novel, surprising and valuable, either autonomously or in conjunction with humans. There are three main motivations for the study of Computational Creativity: • to provide a computational perspective on human creativity, in order to help us to understand it (cognitive science); • to enable machines to be creative, in order to enhance our lives in some way (engineering); and • to produce tools which enhance human creativity (aids for creative individuals). Creativity can be subdivided into everyday problem-solving, and the sort of creativity reserved for the truly great, in which a problem is solved or an object created that has a major impact on other people. These are respectively known as “little-c” (mundane) and “big-C” (eminent) creativity [2]. 
Boden [3] draws a similar distinction in her view of creativity as search within a conceptual space, where “exploratory creativity” searches within the space, and “transformational creativity” involves expanding the space by breaking one or more of the defining characteristics and creating a new conceptual space. Boden sees transformational creativity as more surprising, since, according to the defining rules of the conceptual space, ideas within this space could not have been found before. There are two notions of evaluation in CC: (i) judgements which determine whether an idea or artefact is valuable or not (an essential criterion for creativity) – these judgements may be made internally by whoever produced the idea, or externally, by someone else and (ii) judgements to determine whether a system is acting creatively or not. In the following discussion, by evaluation, we mean the latter judgement. Finding measures of evaluation of CC is an active area of research, both influenced by, and influencing, practical and theoretical aspects of CC. It is a particularly important area, since such measures suggest ways of defining progress in the field, as well as strongly guiding program design. While tests of creativity in humans are important for our understanding of creativity, they do not usually cause humans to be creative (creativity training programs, which train people to do well at such tests, notwithstanding). Ways in which CC is evaluated, on the other hand, will have a deep influence on future development of potentially creative programs. Clearly, different modes of evaluation will be appropriate for the different motivations listed above. The necessity for good measures of evaluation in CC is somewhat paralleled in the psychology of creativity: “Creativity is becoming a popular topic in educational, economic and political circles throughout the world – whether this popularity is just a passing fad or a lasting change in interest in creativity and innovation will probably depend, in large part, on whether creativity assessment keeps pace with the rest of the field.” [4, p. 64] The Turing Test is of particular interest to CC for two reasons. Firstly, unlike the general situation in AI, the TT, or variations of it, are currently being used to evaluate candidate programs in CC. Thus, the TT is having a major influence on the development of CC. This influence is usually neither noted nor questioned. Secondly, there are huge philosophical problems with using a test based on imitation to evaluate competence in an area of thought which is based on originality. While there are varying definitions of creativity, the majority consider some interpretation of novelty and utility to be essential criteria. For instance, one of the commonalities found by Rothenberg in a collection of international perspectives on creativity is that “creativity involves thinking that is aimed at producing ideas or products that are relatively novel” [5, p.2], and in CC the combination of novelty and usefulness is accepted as key (for instance, see [6] or [3]). In [4], Plucker and Makel list “similar, overlapping and possibly synonymous terms for creativity: imagination, ingenuity, innovation, inspiration, inventiveness, muse, novelty, originality, serendipity, talent and unique”. The term ‘imitation’ is simply antipodal to many of these terms. 
In the following sections, we firstly describe and discuss some attempts to evaluate Computational Creativity using the Turing Test or versions of it (§2), concluding that these attempts all omit the important aspect of interaction, and suggest the sort of direction that a TT for a creative computer art system might follow. We then present a series of arguments that the TT is inappropriate for measuring creativity in computers (or humans) in §3, and suggest that although there may be some place for Turing-style tests for Computational Creativity at some point in the future, it is currently untenable and impractical. As an alternative to Turing-style tests, in §4, we introduce two descriptive models for evaluating creative software, the FACE model which describes creative acts performed by software in terms of tuples of generative acts, and the IDEA model which describes how such creative acts can have an impact upon an ideal audience, given ideal information about background knowledge and the software development process. We conclude our discussion in §5. 2 Attempts to evaluate Computational Creativity using the Turing Test or versions of it There have been several attempts to evaluate Computational Creativity using the Turing Test or versions of it. While these are useful in terms of advancing our understanding of CC, they do not go far enough. In this section we discuss two such advances (§2.1 and §2.2), and two further suggestions on using human creative behaviour as a guide for evaluating Computational Creativity (§2.3). We highlight the importance of interaction in §2.4. 2.1 Discrimination tests Pearce and Wiggins [7] assert the need for objective, falsifiable measures of evaluation in cognitive musicology. They propose the ‘discrimination test’, which is analogous to the TT, in which subjects are played segments of both machine and human-generated music and asked to distinguish between them. This might be in a particular style, such as Bach’s music, or might be more general. They also present one of the most considered analyses of whether Turing-style tests such as the framework they propose might be appropriate for evaluating Computational Creativity [7, §7]. While they do not directly refer to Boden’s exploratory creativity [3], instead referring to Boden’s distinction between psychological (P-creativity, concerning ideas which are novel with respect to a particular mind) and historical creativity (H-creativity, concerning ideas which are novel with respect to the whole of human history), they do argue that much creative work is carried out within a particular style. They cite Garnham’s response ", "title": "" }, { "docid": "d2abd2fbb54a307d652bacdf92234466", "text": "Aldehydes-induced toxicity has been implicated in many neurodegenerative diseases. Exposure to reactive aldehydes from (1) alcohol and food metabolism; (2) environmental pollutants, including car, factory exhausts, smog, pesticides, herbicides; (3) metabolism of neurotransmitters, amino acids and (4) lipid peroxidation of biological membrane from excessive ROS, all contribute to 'aldehydic load' that has been linked to the pathology of neurodegenerative diseases. In particular, the α, β-unsaturated aldehydes derived from lipid peroxidation, 4-hydroxynonenal (4-HNE), DOPAL (MAO product of dopamine), malondialdehyde, acrolein and acetaldehyde, all readily form chemical adductions with proteins, DNA and lipids, thus causing neurotoxicity. 
Mitochondrial aldehyde dehydrogenase 2 (ALDH2) is a major aldehyde metabolizing enzyme that protects against deleterious aldehyde buildup in brain, a tissue that has a particularly high mitochondrial content. In this review, we highlight the deleterious effects of increased aldehydic load in the neuropathology of ischemic stroke, Alzheimer's disease and Parkinson's disease. We also discuss evidence for the association between ALDH2 deficiency, a common East Asian-specific mutation, and these neuropathologies. A novel class of small molecule aldehyde dehydrogenase activators (Aldas), represented by Alda-1, reduces neuronal cell death in models of ischemic stroke, Alzheimer's disease and Parkinson's disease. Together, these data suggest that reducing aldehydic load by enhancing the activity of aldehyde dehydrogenases, such as ALDH2, represents a therapeutic strategy for neurodegenerative diseases.", "title": "" }, { "docid": "960fc8cd6c6d867c6aefd77dbe4ec20e", "text": "Personalized support for learners becomes even more important, when e-Learning takes place in open and dynamic learning and information networks. This paper shows how to realize personalized learning support in distributed learning environments based on Semantic Web technologies. Our approach fills the existing gap between current adaptive educational systems with well-established personalization functionality, and open, dynamic learning repository networks. We propose a service-based architecture for establishing personalized e-Learning, where personalization functionality is provided by various web-services. A Personal Learning Assistant integrates personalization services and other supporting services, and provides the personalized access to learning resources in an e-Learning network.", "title": "" }, { "docid": "94b0fb3ad3e413e0db6563488fe3bab9", "text": "Deep Web is content hidden behind HTML forms. Since it represents a large portion of the structured, unstructured and dynamic data on the Web, accessing Deep-Web content has been a long challenge for the database community. This paper describes a crawler for accessing Deep-Web using Ontologies. Performance evaluation of the proposed work showed that this new approach has promising results. Keywords: deep web, ontology, hidden web, domain, mapping.", "title": "" }, { "docid": "5309af9cf135b8eb3c2ff633ea0bd192", "text": "Diameter at breast height has been estimated from mobile laser scanning using a new set of methods. A 2D laser scanner was mounted facing forward, tilted nine degrees downwards, on a car. The trajectory was recorded using inertial navigation and visual SLAM (simultaneous localization and mapping). The laser scanner data, the trajectory and the orientation were used to calculate a 3D point cloud. Clusters representing trees were extracted line-wise to reduce the effects of uncertainty in the positioning system. The intensity of the laser echoes was used to filter out unreliable echoes only grazing a stem. The movement was used to obtain measurements from a larger part of the stem, and multiple lines from different views were used for the circle fit. Two trigonometric methods and two circle fit methods were tested. The best results with bias 2.3% (6 mm) and root mean squared error 14% (37 mm) were acquired with the circle fit on multiple 2D projected clusters. The method was evaluated compared to field data at five test areas with approximately 300 caliper-measured trees within a 10-m working range. 
The results show that this method is viable for stem measurements from a moving vehicle, for example a forest harvester.", "title": "" }, { "docid": "e5261ee5ea2df8bae7cc82cb4841dea0", "text": "Automatic generation of video summarization is one of the key techniques in video management and browsing. In this paper, we present a generic framework of video summarization based on the modeling of viewer's attention. Without fully semantic understanding of video content, this framework takes advantage of understanding of video content, this framework takes advantage of computational attention models and eliminates the needs of complex heuristic rules in video summarization. A set of methods of audio-visual attention model features are proposed and presented. The experimental evaluations indicate that the computational attention based approach is an effective alternative to video semantic analysis for video summarization.", "title": "" }, { "docid": "de304ae0f87412c6b0bfca6a3e6835bb", "text": "This paper presents a novel method for sensorless brushless dc (BLDC) motor drives. Based on the characteristics of the back electromotive force (EMF), the rotor position signals would be constructed. It is intended to construct these signals, which make the phase difference between the constructed signals and the back EMFs controllable. Then, the rotor-position error can be compensated by controlling the phase-difference angle in real time. In this paper, the rotor-position-detection error is analyzed. Using the TMS320F2812 chip as a control core, some experiments have been carried out on the prototype, which is the surface-mounted permanent magnet BLDC motor, and the experimental results verify the analysis and demonstrate advantages of the proposed sensorless-control method.", "title": "" }, { "docid": "bc39e9c9980656ddb7d7eaa93b8fae9a", "text": "This paper is about designing Microstrip Patch Antenna with wearable Substrate with additional EBG structure for better gain to be operated in ISM band (2.4 GHz). A Wearable substrate is used in textile garments so this antenna structure is designed specifically for wearing garments, that's why SAR (Specific Absorption Rate) has been measured in this paper. For designing EBG structure help in improving gain and better results for SAR. CST studio software has been used for simulation of this antenna.", "title": "" }, { "docid": "73ce00cee1b2c1895ea5faa40929d2b7", "text": "This paper describes Luminoso’s participation in SemEval 2017 Task 2, “Multilingual and Cross-lingual Semantic Word Similarity”, with a system based on ConceptNet. ConceptNet is an open, multilingual knowledge graph that focuses on general knowledge that relates the meanings of words and phrases. Our submission to SemEval was an update of previous work that builds high-quality, multilingual word embeddings from a combination of ConceptNet and distributional semantics. Our system took first place in both subtasks. It ranked first in 4 out of 5 of the separate languages, and also ranked first in all 10 of the cross-lingual language pairs.", "title": "" } ]
scidocsrr
6e60745f2c2836317b546f317b793d4d
Text line extraction for historical document images
[ { "docid": "99ffaa3f845db7b71a6d1cbc62894861", "text": "There is a huge amount of historical documents in libraries and in various National Archives that have not been exploited electronically. Although automatic reading of complete pages remains, in most cases, a long-term objective, tasks such as word spotting, text/image alignment, authentication and extraction of specific fields are in use today. For all these tasks, a major step is document segmentation into text lines. Because of the low quality and the complexity of these documents (background noise, artifacts due to aging, interfering lines), automatic text line segmentation remains an open research field. The objective of this paper is to present a survey of existing methods, developed during the last decade and dedicated to documents of historical interest.", "title": "" } ]
[ { "docid": "c26abad7f3396faa798a74cfb23e6528", "text": "Recent advances in seismic sensor technology, data acquisition systems, digital communications, and computer hardware and software make it possible to build reliable real-time earthquake information systems. Such systems provide a means for modern urban regions to cope effectively with the aftermath of major earthquakes and, in some cases, they may even provide warning, seconds before the arrival of seismic waves. In the long term these systems also provide basic data for mitigation strategies such as improved building codes.", "title": "" }, { "docid": "3d9c02413c80913cb32b5094dcf61843", "text": "There is an explosion of youth subscriptions to original content-media-sharing Web sites such as YouTube. These Web sites combine media production and distribution with social networking features, making them an ideal place to create, connect, collaborate, and circulate. By encouraging youth to become media creators and social networkers, new media platforms such as YouTube offer a participatory culture in which youth can develop, interact, and learn. As youth development researchers, we must be cognizant of this context and critically examine what this platform offers that might be unique to (or redundant of) typical adolescent experiences in other developmental contexts.", "title": "" }, { "docid": "acbb920f48119857f598388a39cdebb6", "text": "Quantitative analyses in landscape ecology have traditionally been dominated by the patch-mosaic concept in which landscapes are modeled as a mosaic of discrete patches. This model is useful for analyzing categorical data but cannot sufficiently account for the spatial heterogeneity present in continuous landscapes. Sub-pixel remote sensing classifications offer a potential data source for capturing continuous spatial heterogeneity but lack discrete land cover classes and therefore cannot be analyzed using standard landscape metric tools. This research introduces the threshold gradient method to allow transformation of continuous sub-pixel classifications into a series of discrete maps based on land cover proportion (i.e., intensity) that can be analyzed using landscape metric tools. Sub-pixel data are reclassified at multiple thresholds along a land cover continuum and landscape metrics are computed for each map. Metrics are plotted in response to intensity and these ‘scalograms’ are mathematically modeled using curve fitting techniques to allow determination of critical land cover thresholds (e.g., inflection points) where considerable landscape changes are occurring. Results show that critical land cover intensities vary between metrics, and the approach can generate increased ecological information not available with other landscape characterization methods.", "title": "" }, { "docid": "718e31eabfd386768353f9b75d9714eb", "text": "The mathematical structure of Sudoku puzzles is akin to hard constraint satisfaction problems lying at the basis of many applications, including protein folding and the ground-state problem of glassy spin systems. Via an exact mapping of Sudoku into a deterministic, continuous-time dynamical system, here we show that the difficulty of Sudoku translates into transient chaotic behavior exhibited by this system. We also show that the escape rate κ, an invariant of transient chaos, provides a scalar measure of the puzzle's hardness that correlates well with human difficulty ratings. 
Accordingly, η = -log₁₀κ can be used to define a \"Richter\"-type scale for puzzle hardness, with easy puzzles having 0 < η ≤ 1, medium ones 1 < η ≤ 2, hard with 2 < η ≤ 3 and ultra-hard with η > 3. To our best knowledge, there are no known puzzles with η > 4.", "title": "" }, { "docid": "8ff903bfdc620639013c62a7e123ef54", "text": "A new aging model for Lithium Ion batteries is proposed based on theoretical models of crack propagation. This provides an exponential dependence of aging on stress such as depth of discharge. A measure of stress is derived from arbitrary charge and discharge histories to include mixed use in vehicles or vehicle to grid operations. This aging model is combined with an empirical equivalent circuit model, to provide time and state of charge dependent charge and discharge characteristics at any rate and temperature. This choice of model results in a cycle life prediction with few parameters to be fitted to a particular cell.", "title": "" }, { "docid": "f72d1cb1ccc4793e672f32d7e415c73d", "text": "Neural Architecture Search (NAS) aims to facilitate the design of deep networks for new tasks. Existing techniques rely on two stages: searching over the architecture space and validating the best architecture. Evaluating NAS algorithms is currently solely done by comparing their results on the downstream task. While intuitive, this fails to explicitly evaluate the effectiveness of their search strategies. In this paper, we extend the NAS evaluation procedure to include the search phase. To this end, we compare the quality of the solutions obtained by NAS search policies with that of random architecture selection. We find that: (i) On average, the random policy outperforms state-of-the-art NAS algorithms; and (ii) The results and candidate rankings of NAS algorithms do not reflect the true performance of the candidate architectures. While our former finding illustrates the fact that the NAS search space has been sufficiently constrained so that random solutions yield good results, we trace the latter back to the weight sharing strategy used by state-of-the-art NAS methods. In contrast with common belief, weight sharing negatively impacts the training of good architectures, thus reducing the effectiveness of the search process. We believe that following our evaluation framework will be key to designing NAS strategies that truly discover superior architectures.", "title": "" }, { "docid": "3159879f34a093d38e82dba61b92d74e", "text": "The performance of many hard combinatorial problem solvers depends strongly on their parameter settings, and since manual parameter tuning is both tedious and suboptimal the AI community has recently developed several algorithm configuration (AC) methods to automatically address this problem. While all existing AC methods start the configuration process of an algorithm A from scratch for each new type of benchmark instances, here we propose to exploit information about A’s performance on previous benchmarks in order to warmstart its configuration on new types of benchmarks. We introduce two complementary ways in which we can exploit this information to warmstart AC methods based on a predictive model. 
Experiments for optimizing a flexible modern SAT solver on twelve different instance sets show that our methods often yield substantial speedups over existing AC methods (up to 165-fold) and can also find substantially better configurations given the same compute budget.", "title": "" }, { "docid": "26713ba18f5a9e151a8f73c9fc282f88", "text": "In this paper, we present a neural network based task-oriented dialogue system that can be optimized end-to-end with deep reinforcement learning (RL). The system is able to track dialogue state, interface with knowledge bases, and incorporate query results into agent’s responses to successfully complete task-oriented dialogues. Dialogue policy learning is conducted with a hybrid supervised and deep RL methods. We first train the dialogue agent in a supervised manner by learning directly from task-oriented dialogue corpora, and further optimize it with deep RL during its interaction with users. In the experiments on two different dialogue task domains, our model demonstrates robust performance in tracking dialogue state and producing reasonable system responses. We show that deep RL based optimization leads to significant improvement on task success rate and reduction in dialogue length comparing to supervised training model. We further show benefits of training task-oriented dialogue model end-to-end comparing to componentwise optimization with experiment results on dialogue simulations and human evaluations.", "title": "" }, { "docid": "e729d7b399b3a4d524297ae79b28f45d", "text": "The aim of this paper is to solve optimal design problems for industrial applications when the objective function value requires the evaluation of expensive simulation codes and its first derivatives are not available. In order to achieve this goal we propose two new algorithms that draw inspiration from two existing approaches: a filled function based algorithm and a Particle Swarm Optimization method. In order to test the efficiency of the two proposed algorithms, we perform a numerical comparison both with the methods we drew inspiration from, and with some standard Global Optimization algorithms that are currently adopted in industrial design optimization. Finally, a realistic ship design problem, namely the reduction of the amplitude of the heave motion of a ship advancing in head seas (a problem connected to both safety and comfort), is solved using the new codes and other global and local derivative-free optimization methods.
All the numerical results show the effectiveness of the two new algorithms.", "title": "" }, { "docid": "c3fb97edabf2c4fa68cf45bb888e5883", "text": "Multi-armed bandit problems are the predominant theoretical model of exploration-exploitation tradeoffs in learning, and they have countless applications ranging from medical trials, to communication networks, to Web search and advertising. In many of these application domains, the learner may be constrained by one or more supply (or budget) limits, in addition to the customary limitation on the time horizon. The literature lacks a general model encompassing these sorts of problems. We introduce such a model, called bandits with knapsacks, that combines bandit learning with aspects of stochastic integer programming. In particular, a bandit algorithm needs to solve a stochastic version of the well-known knapsack problem, which is concerned with packing items into a limited-size knapsack. A distinctive feature of our problem, in comparison to the existing regret-minimization literature, is that the optimal policy for a given latent distribution may significantly outperform the policy that plays the optimal fixed arm. Consequently, achieving sublinear regret in the bandits-with-knapsacks problem is significantly more challenging than in conventional bandit problems.\n We present two algorithms whose reward is close to the information-theoretic optimum: one is based on a novel “balanced exploration” paradigm, while the other is a primal-dual algorithm that uses multiplicative updates. Further, we prove that the regret achieved by both algorithms is optimal up to polylogarithmic factors. We illustrate the generality of the problem by presenting applications in a number of different domains, including electronic commerce, routing, and scheduling. As one example of a concrete application, we consider the problem of dynamic posted pricing with limited supply and obtain the first algorithm whose regret, with respect to the optimal dynamic policy, is sublinear in the supply.", "title": "" }, { "docid": "17c8766c5fcc9b6e0d228719291dcea5", "text": "In this study we examined the social behaviors of 4- to 12-year-old children with autism spectrum disorders (ASD; N = 24) during three triadic interactions with an adult confederate and an interaction partner, where the interaction partner varied randomly among (1) another adult human, (2) a touchscreen computer game, and (3) a social dinosaur robot. Children spoke more in general, and directed more speech to the adult confederate, when the interaction partner was a robot, as compared to a human or computer game interaction partner. Children spoke as much to the robot as to the adult interaction partner. This study provides the largest demonstration of social human-robot interaction in children with autism to date. Our findings suggest that social robots may be developed into useful tools for social skills and communication therapies, specifically by embedding social interaction into intrinsic reinforcers and motivators.", "title": "" }, { "docid": "8e5c3dbe3312898fd7ef791326cbe509", "text": "In his recent commentary, Robins (2009) disputed the role played by ultraviolet radiation (UVR), namely, the vitamin-D-producing wavelengths of ultraviolet B (UVB), in the evolution of human skin. He questioned the theory that reduced levels of pigmentation in human skin were selected to facilitate absorption of UVB.
He provided evidence to support his idea that people can produce enough vitamin D in their skin, regardless of pigmentation, if they are not pursuing a modern lifestyle. He asserted that, within his framework, rickets was the only selective force that could have influenced the evolution of light pigmentation because other detrimental effects of vitamin D deficiency are unproven. As rickets is increased by industrialization, Robins concluded that ‘‘. . . vitamin D status could not have constituted the fitness differential between lightly pigmented and darkly pigmented individuals at high latitudes that favored the evolutionary selection of the former’’ (Robins, 2009). In this article, we examine the current evidence for what has been termed the ‘‘vitamin D theory,’’ and highlight the importance of UVB penetration in the evolution of human skin. We begin with an overview of the solar processes involved in cutaneous vitamin D synthesis, followed by a discussion of causal arguments and causation in the context of the vitamin D theory, and conclude with a review of physiological mechanisms and their evolutionary significance.", "title": "" }, { "docid": "381e173e41b085ad7a4a30e84b1d37dc", "text": "Monarch butterfly optimization (MBO) is a new metaheuristic algorithm mimics the migration of butterflies from northern USA to Mexico. In MBO, there are mainly two processes. In the first process, the algorithm emulates how some of the butterflies move from the current position to the new position by the migration operator. In the latter process, the algorithm tunes the position of other butterflies by adjusting operator. In order to enhance the search ability of MBO, an innovation method called MBHS is introduced to tackle the optimization problem. In MBHS, the harmony search (HS) adds mutation operators to the process of adjusting operator to enhance the exploitation, exploration ability, and speed up the convergence rate of MBO. For the purpose to validate the performance of MBHS, 14 benchmark functions are used, and the performance is compared with well-known search algorithms. The experimental results demonstrate that MBHS performs better than the basic MBO and other algorithms.", "title": "" }, { "docid": "814e593fac017e5605c4992ef7b25d6d", "text": "This paper discusses the design of high power density transformer and inductor for the high frequency dual active bridge (DAB) GaN charger. Because the charger operates at 500 kHz, the inductance needed to achieve ZVS for the DAB converter is reduced to as low as 3μH. As a result, it is possible to utilize the leakage inductor as the series inductor of DAB converter. To create such amount of leakage inductance, certain space between primary and secondary winding is allocated to store the leakage flux energy. The designed transformer is above 99.2% efficiency while delivering 3.3kW. The power density of the designed transformer is 6.3 times of the lumped transformer and inductor in 50 kHz Si Charger. The detailed design procedure and loss analysis are discussed.", "title": "" }, { "docid": "c043e7a5d5120f5a06ef6decc06c184a", "text": "Entities are further categorized into those that are the object of the measurement (‘assayed components’) and those, if any, that are subjected to targeted and controlled experimental interventions (‘perturbations/interventions’). 
These two core categories are related to the concepts ‘perturbagen’ and ‘target’ in the Bioassay Ontology (BAO2) and capture an important aspect of the design of experiments where multiple conditions are compared with each other in order to test whether a given perturbation (e.g., the presence or absence of a drug), causes a given response (e.g., a change in gene expression). Additional categories include ‘experimental variables’, ‘reporters’, ‘normalizing components’ and generic ‘biological components’ (Supplementary Data). We developed a web-based tool with a graphical user interface that allows computer-assisted manual extraction of the metadata model described above at the level of individual figure panels based on the information provided in figure legends and in the images. Files that contain raw or minimally processed data, when available, can furthermore be linked or uploaded and attached to the figure. As proof of principle, we have curated a compendium of over 18,000 experiments published across 23 journals. From the 721 papers processed, 381 papers were related to the field of autophagy, and the rest were annotated during the publication process of accepted manuscripts at four partner molecular biology journals. Both sets of papers were processed identically. Out of the 18,157 experimental panels annotated, 77% included at least one ‘intervention/assayed component’ pair, and this supported the broad applicability of the perturbation-centric SourceData model. We provide a breakdown of entities by categories in Supplementary Figure 1. We note that the presence of a perturbation is not a requirement for the model. As such, the SourceData model is also applicable in cases such as correlative observations. The SourceData model is independent of data type (i.e., image-based or numerical values) and is well suited for cell and molecular biology experiments. 77% of the processed entities were explicitly mentioned in the text of the legend. For the remaining entities, curators added the terms based on the labels directly displayed on the image of the figure. SourceData: a semantic platform for curating and searching figures", "title": "" }, { "docid": "1a38b0f7db94b9bda2a647969c27cb04", "text": "A picture is worth a thousand words. Not until recently, however, we noticed some success stories in understanding of visual scenes: a model that is capable of to detect/name objects, describe their attributes, and recognize their relationships/interactions. In this paper, we propose a phrase-based hierarchical Long Short-Term Memory (phi-LSTM) model to generate image description. The proposed model encodes sentence as a sequence of combination of phrases and words, instead of a sequence of words alone as in those conventional solutions. The two levels of this model are dedicated to i) learn to generate image relevant noun phrases, and ii) produce appropriate image description from the phrases and other words in the corpus. Adopting the convolutional neural network to learn image features and the LSTM to learn word sequence in a sentence, the proposed model has shown a better or competitive results in comparison to the state-of-the-art models on Flickr8k and Flick30k datasets.", "title": "" }, { "docid": "defde14c64f5eecda83cf2a59c896bc0", "text": "Time series shapelets are discriminative subsequences and their similarity to a time series can be used for time series classification. 
Since the discovery of time series shapelets is costly in terms of time, the applicability on long or multivariate time series is difficult. In this work we propose Ultra-Fast Shapelets that uses a number of random shapelets. It is shown that Ultra-Fast Shapelets yield the same prediction quality as current state-of-the-art shapelet-based time series classifiers that carefully select the shapelets, while being faster by up to three orders of magnitude. Since this method allows an ultra-fast shapelet discovery, using shapelets for long multivariate time series classification becomes feasible. A method for using shapelets for multivariate time series is proposed and Ultra-Fast Shapelets is proven to be successful in comparison to state-of-the-art multivariate time series classifiers on 15 multivariate time series datasets from various domains. Finally, time series derivatives that have proven to be useful for other time series classifiers are investigated for the shapelet-based classifiers. It is shown that they have a positive impact and that they are easy to integrate with a simple preprocessing step, without the need of adapting the shapelet discovery algorithm.", "title": "" }, { "docid": "31ed2186bcd711ac4a5675275cd458eb", "text": "Location-aware wireless sensor networks will enable a new class of applications, and accurate range estimation is critical for this task. Low-cost location determination capability is studied almost entirely using radio frequency received signal strength (RSS) measurements, resulting in poor accuracy. More accurate systems use wide bandwidths and/or complex time-synchronized infrastructure. Low-cost, accurate ranging has proven difficult because small timing errors result in large range errors. This paper addresses estimation of the distance between wireless nodes using a two-way ranging technique that approaches the Cramér-Rao Bound on ranging accuracy in white noise and achieves 1-3 m accuracy in real-world ranging and localization experiments. This work provides an alternative to inaccurate RSS and complex, wide-bandwidth methods. Measured results using a prototype wireless system confirm performance in the real world.", "title": "" }, { "docid": "d74874cf15642c87c7de51e54275f9be", "text": "We used a three layer Convolutional Neural Network (CNN) to make move predictions in chess. The task was defined as a two-part classification problem: a piece-selector CNN is trained to score which white pieces should be made to move, and move-selector CNNs for each piece produce scores for where it should be moved. This approach reduced the intractable class space in chess by a square root. The networks were trained using 20,000 games consisting of 245,000 moves made by players with an ELO rating higher than 2000 from the Free Internet Chess Server. The piece-selector network was trained on all of these moves, and the move-selector networks trained on all moves made by the respective piece. Black moves were trained on by using a data augmentation to frame it as a move made by the", "title": "" }, { "docid": "4d71e585675eb2cec41ca20f1b97045b", "text": "Weed scouting is an important part of modern integrated weed management but can be time consuming and sparse when performed manually. Automated weed scouting and weed destruction has typically been performed using classification systems able to classify a set group of species known a priori.
This greatly limits deployability as classification systems must be retrained for any field with a different set of weed species present within them. In order to overcome this limitation, this paper works towards developing a clustering approach to weed scouting which can be utilized in any field without the need for prior species knowledge. We demonstrate our system using challenging data collected in the field from an agricultural robotics platform. We show that considerable improvements can be made by (i) learning low-dimensional (bottleneck) features using a deep convolutional neural network to represent plants in general and (ii) tying views of the same area (plant) together. Deploying this algorithm on in-field data collected by AgBotII, we are able to successfully cluster cotton plants from grasses without prior knowledge or training for the specific plants in the field.", "title": "" } ]
scidocsrr
570163ab70ded593d9bacc820194b5ee
Adaptive Fingerprint Image Enhancement Based on Spatial Contextual Filtering and Pre-processing of Data
[ { "docid": "db677822da381d375640723704d99cbc", "text": "The important step in fingerprint matching is the reliable fingerprint recognition. Automatic fingerprint recognition system relies on the input fingerprint for feature extraction. Hence, the effectiveness of feature extraction relies heavily on the quality of input fingerprint images. In this paper adaptive filtering in frequency domain in order to enhance the fingerprint image is proposed. Enhancement of the original fingerprint image is obtained by histogram equalization of the Gabor filtered image.", "title": "" } ]
[ { "docid": "37af58543ae2508271439427f424caf7", "text": "Bitcoin is the first widely adopted decentralized digitale-cash system. All Bitcoin transactions that include addresses of senders and receivers are stored in the public blockchain which could cause privacy problems. The Zerocoin protocol hides the link between individual Bitcoin transactions without adding trusted third parties. However such an untraceable remittance system could cause illegal transfers such as money laundering. In this paper we address this problem and propose an auditable decentralized e-cash scheme based on the Zerocoin protocol. Our scheme allows designated auditors to extract link information from Zerocoin transactions while preventing other users including miners from obtaining it. Respecting the mind of the decentralized system, the auditor doesn't have other authorities such as stopping transfers, confiscating funds, and deactivating accounts. A technical contribution of our scheme is that a coin sender embeds audit information with a non-interactive zeroknowledge proof of knowledge (NIZKP). This zero-knowledge prevents malicious senders from embedding indiscriminate audit information, and we construct it simply using only the standard Schnorr protocol for discrete logarithm without zk-SNARKs or other recent techniques for zero-knowledge proof.", "title": "" }, { "docid": "73f3a2c45e356dca5bdfe2b733cedf22", "text": "In this paper, we present a software cost estimation model for agile development which can help estimate concrete development costs for the desired features of a product and track the project progress dynamically. In general, existing cost estimation methods for agile developments used a story point. Because it is relative value, the estimation results tend to be easily fluctuated by the small variation of the baseline story point. For tracking project’s progress, the velocity was measured to check the progress and was used to make plan for the iteration and the releases of the project. The proposed method in this paper provides the systematic estimation and dynamic tracking methodology for agile projects. To estimate the effort of a project development, function points are used in addition to the story point. The function points are determined based on the user stories of desired features of the product. We adopt the Kalman filter algorithm for tracking project progress. The remaining function points at a certain point during the project are modeled as the state space model for the Kalman filter. The daily variation of the function point is observed and inputted to the Kalman Filter for providing concrete estimation and velocity. Moreover we validate the better performance of our model by comparing with traditional methods through a case study.", "title": "" }, { "docid": "3f0b6a3238cf60d7e5d23363b2affe95", "text": "This paper presents a new strategy to control the generated power that comes from the energy sources existing in autonomous and isolated Microgrids. In this particular study, the power system consists of a power electronic converter supplied by a battery bank, which is used to form the AC grid (grid former converter), an energy source based on a wind turbine with its respective power electronic converter (grid supplier converter), and the power consumers (loads). The main objective of this proposed strategy is to control the state of charge of the battery bank limiting the voltage on its terminals by controlling the power generated by the energy sources. 
This is done without using dump loads or any physical communication among the power electronic converters or the individual energy source controllers. The electrical frequency of the microgrid is used to inform to the power sources and their respective converters the amount of power they need to generate in order to maintain the battery-bank state of charge below or equal its maximum allowable limit. It is proposed a modified droop control to implement this task.", "title": "" }, { "docid": "ef365e432e771c812300b654ceaff419", "text": "OBJECTIVE\nPretreatment of myoinositol is a very new method that was evaluated in multiple small studies to manage poor ovarian response in assisted reproduction. This study was to determine the efficacy of myoinositol supplement in infertile women undergoing ovulation induction for intracytoplasmic sperm injection (ICSI) or in vitro fertilization embryo transfer (IVF-ET).\n\n\nMETHODS\nA meta-analysis and systematic review of published articles evaluating the efficacy of myo-inositol in patients undergoing ovulation induction for ICSI or IVF-ET was performed.\n\n\nRESULTS\nSeven trials with 935 women were included. Myoinositol supplement was associated with significantly improved clinical pregnancy rate [95% confidence interval (CI), 1.04-1.96; P = .03] and abortion rate (95% CI, 0.08-0.50; P = .0006). Meanwhile, Grade 1 embryos proportion (95% CI, 1.10-2.74; P = .02), germinal vescicle and degenerated oocytes retrieved (95% CI, 0.11-0.86; P = .02), and total amount of ovulation drugs (95% CI, -591.69 to -210.39; P = .001) were also improved in favor of myo-inositol. There were no significant difference in total oocytes retrieved, MII stage oocytes retrieved, stimulation days, and E2 peak level.\n\n\nCONCLUSIONS\nMyoinositol supplement increase clinical pregnancy rate in infertile women undergoing ovulation induction for ICSI or IVF-ET. It may improve the quality of embryos, and reduce the unsuitable oocytes and required amount of stimulation drugs.", "title": "" }, { "docid": "340f4f9336dd0884bb112345492b47f9", "text": "Inspired by how humans summarize long documents, we propose an accurate and fast summarization model that first selects salient sentences and then rewrites them abstractively (i.e., compresses and paraphrases) to generate a concise overall summary. We use a novel sentence-level policy gradient method to bridge the nondifferentiable computation between these two neural networks in a hierarchical way, while maintaining language fluency. Empirically, we achieve the new state-of-theart on all metrics (including human evaluation) on the CNN/Daily Mail dataset, as well as significantly higher abstractiveness scores. Moreover, by first operating at the sentence-level and then the word-level, we enable parallel decoding of our neural generative model that results in substantially faster (10-20x) inference speed as well as 4x faster training convergence than previous long-paragraph encoder-decoder models. We also demonstrate the generalization of our model on the test-only DUC2002 dataset, where we achieve higher scores than a state-of-the-art model.", "title": "" }, { "docid": "ab3dd1f92c09e15ee05ab7f65f676afe", "text": "We introduce a novel learning method for 3D pose estimation from color images. While acquiring annotations for color images is a difficult task, our approach circumvents this problem by learning a mapping from paired color and depth images captured with an RGB-D camera. 
We jointly learn the pose from synthetic depth images that are easy to generate, and learn to align these synthetic depth images with the real depth images. We show our approach for the task of 3D hand pose estimation and 3D object pose estimation, both from color images only. Our method achieves performances comparable to state-of-the-art methods on popular benchmark datasets, without requiring any annotations for the color images.", "title": "" }, { "docid": "f1755e987da9d915eb9969e7b1eeb8dc", "text": "Recent advances in distant-talking ASR research have confirmed that speech enhancement is an essential technique for improving the ASR performance, especially in the multichannel scenario. However, speech enhancement inevitably distorts speech signals, which can cause significant degradation when enhanced signals are used as training data. Thus, distant-talking ASR systems often resort to using the original noisy signals as training data and the enhanced signals only at test time, and give up on taking advantage of enhancement techniques in the training stage. This paper proposes to make use of enhanced features in the student-teacher learning paradigm. The enhanced features are used as input to a teacher network to obtain soft targets, while a student network tries to mimic the teacher network's outputs using the original noisy features as input, so that speech enhancement is implicitly performed within the student network. Compared with conventional student-teacher learning, which uses a better network as teacher, the proposed self-supervised method uses better (enhanced) inputs to a teacher. This setup matches the above scenario of making use of enhanced features in network training. Experiments with the CHiME-4 challenge real dataset show significant ASR improvements with an error reduction rate of 12% in the single-channel track and 15% in the 2-channel track, respectively, by using 6-channel beamformed features for the teacher model.", "title": "" }, { "docid": "5c227388ee404354692ffa0b2f3697f3", "text": "Automotive surround view camera system is an emerging automotive ADAS (Advanced Driver Assistance System) technology that assists the driver in parking the vehicle safely by allowing him/her to see a top-down view of the 360 degree surroundings of the vehicle. Such a system normally consists of four to six wide-angle (fish-eye lens) cameras mounted around the vehicle, each facing a different direction. From these camera inputs, a composite bird-eye view of the vehicle is synthesized and shown to the driver in real-time during parking. In this paper, we present a surround view camera solution that consists of three key algorithm components: geometric alignment, photometric alignment, and composite view synthesis. Our solution produces a seamlessly stitched bird-eye view of the vehicle from four cameras. It runs real-time on DSP C66x producing an 880x1080 output video at 30 fps.", "title": "" }, { "docid": "c2634da978df12f5d18db31a79d22bc1", "text": "Existing bidirectional reflectance distribution function (BRDF) models are capable of capturing the distinctive highlights produced by the fibrous nature of wood. However, capturing parameter textures for even a single specimen remains a laborious process requiring specialized equipment. In this paper we take a procedural approach to generating parameters for the wood BSDF. 
We characterize the elements of trees that are important for the appearance of wood, discuss techniques appropriate for representing those features, and present a complete procedural wood shader capable of reproducing the growth patterns responsible for the distinctive appearance of highly prized “figured” wood. Our procedural wood shader is random-access, 3D, modular, and is fast enough to generate a preview for design.", "title": "" }, { "docid": "d56ff4b194c123b19a335e00b38ea761", "text": "As the automobile industry evolves, a number of in-vehicle communication protocols are developed for different in-vehicle applications. With the emerging new applications towards Internet of Things (IoT), a more integral solution is needed to enable the pervasiveness of intra- and inter-vehicle communications. In this survey, we first introduce different classifications of automobile applications with focus on their bandwidth and latency. Then we survey different in-vehicle communication bus protocols including both legacy protocols and emerging Ethernet. In addition, we highlight our contribution in the field to employ power line as the in-vehicle communication medium. We believe power line communication will play an important part in future automobile which can potentially reduce the amount of wiring, simplify design and reduce cost. Based on these technologies, we also introduce some promising applications in future automobile enabled by the development of in-vehicle network. Finally, We will share our view on how the in-vehicle network can be merged into the future IoT.", "title": "" }, { "docid": "ac0e5f0a447897a3c5c9b480ab1c3796", "text": "In a recent paper [13], the Fast Marching farthest point sampling strategy (FastFPS) for planar domains and curved manifolds was introduced. The version of FastFPS for curved manifolds discussed in the paper [13] deals with surface domains in triangulated form only. Due to a restriction of the underlying Fast Marching method, the algorithm further requires the splitting of any obtuse into acute triangles to ensure the consistency of the Fast Marching approximation. In this paper, we overcome these restrictions by using Mémoli and Sapiro’s [11, 12] extension of the Fast Marching method to the handling of implicit surfaces and point clouds. We find that the extended FastFPS algorithm can be applied to surfaces in implicit or point cloud form without the loss of the original algorithm’s computational optimality and without the need for any preprocessing.", "title": "" }, { "docid": "ebfd9accf86d5908c12d2c3b92758620", "text": "A circular microstrip array with beam focused for RFID applications was presented. An analogy with the optical lens and optical diffraction was made to describe the behaviour of this system. The circular configuration of the array requires less phase shift and exhibite smaller side lobe level compare to a square array. The measurement result shows a good agreement with simulation and theory. This system is a good way to increase the efficiency of RFID communication without useless power. This solution could also be used to develop RFID devices for the localization problematic. The next step of this work is to design a system with an adjustable focus length.", "title": "" }, { "docid": "48653a8de0dd6e881415855e694fc925", "text": "The aim of this study was to compare the use of transcutaneous vs. motor nerve stimulation in the evaluation of low-frequency fatigue. 
Nine female and eleven male subjects, all physically active, performed a 30-min downhill run on a motorized treadmill. Knee extensor muscle contractile characteristics were measured before, immediately after (Post), and 30 min after the fatiguing exercise (Post30) by using single twitches and 0.5-s tetani at 20 Hz (P20) and 80 Hz (P80). The P20-to-P80 ratio was calculated. Electrical stimulations were randomly applied either maximally to the femoral nerve or via large surface electrodes (ES) at an intensity sufficient to evoke 50% of maximal voluntary contraction (MVC) during a 80-Hz tetanus. Voluntary activation level was also determined during isometric MVC by the twitch-interpolation technique. Knee extensor MVC and voluntary activation level decreased at all points in time postexercise (P < 0.001). P20 and P80 displayed significant time x gender x stimulation method interactions (P < 0.05 and P < 0.001, respectively). Both stimulation methods detected significant torque reductions at Post and Post30. Overall, ES tended to detect a greater impairment at Post in male and a lesser one in female subjects at both Post and Post30. Interestingly, the P20-P80 ratio relative decrease did not differ between the two methods of stimulation. The low-to-high frequency ratio only demonstrated a significant time effect (P < 0.001). It can be concluded that low-frequency fatigue due to eccentric exercise appears to be accurately assessable by ES.", "title": "" }, { "docid": "80b5030cbb923f32dc791409eb184a80", "text": "Bayesian Optimisation (BO) refers to a class of methods for global optimisation of a function f which is only accessible via point evaluations. It is typically used in settings where f is expensive to evaluate. A common use case for BO in machine learning is model selection, where it is not possible to analytically model the generalisation performance of a statistical model, and we resort to noisy and expensive training and validation procedures to choose the best model. Conventional BO methods have focused on Euclidean and categorical domains, which, in the context of model selection, only permits tuning scalar hyper-parameters of machine learning algorithms. However, with the surge of interest in deep learning, there is an increasing demand to tune neural network architectures. In this work, we develop NASBOT, a Gaussian process based BO framework for neural architecture search. To accomplish this, we develop a distance metric in the space of neural network architectures which can be computed efficiently via an optimal transport program. This distance might be of independent interest to the deep learning community as it may find applications outside of BO. We demonstrate that NASBOT outperforms other alternatives for architecture search in several cross validation based model selection tasks on multi-layer perceptrons and convolutional neural networks.", "title": "" }, { "docid": "b2de917d74765e39562c60c74a88d7f3", "text": "Computer-phobic university students are easy to find today especially when it come to taking online courses. Affect has been shown to influence users’ perceptions of computers. Although self-reported computer anxiety has declined in the past decade, it continues to be a significant issue in higher education and online courses. More importantly, anxiety seems to be a critical variable in relation to student perceptions of online courses. A substantial amount of work has been done on computer anxiety and affect. 
In fact, the technology acceptance model (TAM) has been extensively used for such studies where affect and anxiety were considered as antecedents to perceived ease of use. However, few, if any, have investigated the interplay between the two constructs as they influence perceived ease of use and perceived usefulness towards using online systems for learning. In this study, the effects of affect and anxiety (together and alone) on perceptions of an online learning system are investigated. Results demonstrate the interplay that exists between affect and anxiety and their moderating roles on perceived ease of use and perceived usefulness. Interestingly, the results seem to suggest that affect and anxiety may exist simultaneously as two weights on each side of the TAM scale.", "title": "" }, { "docid": "0a5e2cc403ba9a4397d04c084b25f43e", "text": "Ebola virus disease (EVD) distinguishes its feature as high infectivity and mortality. Thus, it is urgent for governments to draw up emergency plans against Ebola. However, it is hard to predict the possible epidemic situations in practice. Luckily, in recent years, computational experiments based on artificial society appeared, providing a new approach to study the propagation of EVD and analyze the corresponding interventions. Therefore, the rationality of artificial society is the key to the accuracy and reliability of experiment results. Individuals' behaviors along with travel mode directly affect the propagation among individuals. Firstly, artificial Beijing is reconstructed based on geodemographics and machine learning is involved to optimize individuals' behaviors. Meanwhile, Ebola course model and propagation model are built, according to the parameters in West Africa. Subsequently, propagation mechanism of EVD is analyzed, epidemic scenario is predicted, and corresponding interventions are presented. Finally, by simulating the emergency responses of Chinese government, the conclusion is finally drawn that Ebola is impossible to outbreak in large scale in the city of Beijing.", "title": "" }, { "docid": "3d5eb503f837adffb4468548b3f76560", "text": "Purpose This study investigates the impact of such contingency factors as top management support, business vision, and external expertise, on the one hand, and ERP system success, on the other. Design/methodology/approach A conceptual model was developed and relevant hypotheses formulated. Surveys were conducted in two Northern European countries and a structural equation modeling technique was used to analyze the data. Originality/value It is argued that ERP systems are different from other IT implementations; as such, there is a need to provide insights as to how the aforementioned factors play out in the context of ERP system success evaluations for adopting organizations. As was predicted, the results showed that the three contingency factors positively influence ERP system success. More importantly, the relative importance of quality external expertise over the other two factors for ERP initiatives was underscored. The implications of the findings for both practitioners and researchers are discussed.", "title": "" }, { "docid": "ae468573cd37e4f3bf923d76bc9f0779", "text": "This paper integrates recent work on Path Integral (PI) and Kullback Leibler (KL) divergence stochastic optimal control theory with earlier work on risk sensitivity and the fundamental dualities between free energy and relative entropy. 
We derive the path integral optimal control framework and its iterative version based on the aforemetioned dualities. The resulting formulation of iterative path integral control is valid for general feedback policies and in contrast to previous work, it does not rely on pre-specified policy parameterizations. The derivation is based on successive applications of Girsanov's theorem and the use of Radon-Nikodým derivative as applied to diffusion processes due to the change of measure in the stochastic dynamics. We compare the PI control derived based on Dynamic Programming with PI based on the duality between free energy and relative entropy. Moreover we extend our analysis on the applicability of the relationship between free energy and relative entropy to optimal control of markov jump diffusions processes. Furthermore, we present the links between KL stochastic optimal control and the aforementioned dualities and discuss its generalizability.", "title": "" }, { "docid": "54d1e75ca60b89af7ac77a2175aafa97", "text": "The purpose of this study was to compare the biomechanics of the traditional squat with 2 popular exercise variations commonly referred to as the powerlifting squat and box squat. Twelve male powerlifters performed the exercises with 30, 50, and 70% of their measured 1 repetition maximum (1RM), with instruction to lift the loads as fast as possible. Inverse dynamics and spatial tracking of the external resistance were used to quantify biomechanical variables. A range of significant kinematic and kinetic differences (p < 0.05) emerged between the exercises. The traditional squat was performed with a narrow stance, whereas the powerlifting squat and box squat were performed with similar wide stances (48.3 ± 3.8, 89.6 ± 4.9, 92.1 ± 5.1 cm, respectively). During the eccentric phase of the traditional squat, the knee traveled past the toes resulting in anterior displacement of the system center of mass (COM). In contrast, during the powerlifting squat and box squat, a more vertical shin position was maintained, resulting in posterior displacements of the system COM. These differences in linear displacements had a significant effect (p < 0.05) on a number of peak joint moments, with the greatest effects measured at the spine and ankle. For both joints, the largest peak moment was produced during the traditional squat, followed by the powerlifting squat, then box squat. Significant differences (p < 0.05) were also noted at the hip joint where the largest moment in all 3 planes were produced during the powerlifting squat. Coaches and athletes should be aware of the biomechanical differences between the squatting variations and select according to the kinematic and kinetic profile that best match the training goals.", "title": "" } ]
scidocsrr
88faf78474697a96350aa9a7736e43bb
Managing Byzantine Robots via Blockchain Technology in a Swarm Robotics Collective Decision Making Scenario
[ { "docid": "045a56e333b1fe78677b8f4cc4c20ecc", "text": "Swarm robotics is an approach to collective robotics that takes inspiration from the self-organized behaviors of social animals. Through simple rules and local interactions, swarm robotics aims at designing robust, scalable, and flexible collective behaviors for the coordination of large numbers of robots. In this paper, we analyze the literature from the point of view of swarm engineering: we focus mainly on ideas and concepts that contribute to the advancement of swarm robotics as an engineering field and that could be relevant to tackle real-world applications. Swarm engineering is an emerging discipline that aims at defining systematic and well founded procedures for modeling, designing, realizing, verifying, validating, operating, and maintaining a swarm robotics system. We propose two taxonomies: in the first taxonomy, we classify works that deal with design and analysis methods; in the second taxonomy, we classify works according to the collective behavior studied. We conclude with a discussion of the current limits of swarm robotics as an engineering discipline and with suggestions for future research directions.", "title": "" } ]
[ { "docid": "18faba65741b6871517c8050aa6f3a45", "text": "Individuals differ in the manner they approach decision making, namely their decision-making styles. While some people typically make all decisions fast and without hesitation, others invest more effort into deciding even about small things and evaluate their decisions with much more scrutiny. The goal of the present study was to explore the relationship between decision-making styles, perfectionism and emotional processing in more detail. Specifically, 300 college students majoring in social studies and humanities completed instruments designed for assessing maximizing, decision commitment, perfectionism, as well as emotional regulation and control. The obtained results indicate that maximizing is primarily related to one dimension of perfectionism, namely the concern over mistakes and doubts, as well as emotional regulation and control. Furthermore, together with the concern over mistakes and doubts, maximizing was revealed as a significant predictor of individuals' decision commitment. The obtained findings extend previous reports regarding the association between maximizing and perfectionism and provide relevant insights into their relationship with emotional regulation and control. They also suggest a need to further explore these constructs that are, despite their complex interdependence, typically investigated in separate contexts and domains.", "title": "" }, { "docid": "72b93e02049b837a7990225494883708", "text": "Cloud computing is emerging as a major trend in the ICT industry. While most of the attention of the research community is focused on considering the perspective of the Cloud providers, offering mechanisms to support scaling of resources and interoperability and federation between Clouds, the perspective of developers and operators willing to choose the Cloud without being strictly bound to a specific solution is mostly neglected.\n We argue that Model-Driven Development can be helpful in this context as it would allow developers to design software systems in a cloud-agnostic way and to be supported by model transformation techniques into the process of instantiating the system into specific, possibly, multiple Clouds. The MODAClouds (MOdel-Driven Approach for the design and execution of applications on multiple Clouds) approach we present here is based on these principles and aims at supporting system developers and operators in exploiting multiple Clouds for the same system and in migrating (part of) their systems from Cloud to Cloud as needed. MODAClouds offers a quality-driven design, development and operation method and features a Decision Support System to enable risk analysis for the selection of Cloud providers and for the evaluation of the Cloud adoption impact on internal business processes. Furthermore, MODAClouds offers a run-time environment for observing the system under execution and for enabling a feedback loop with the design environment. This allows system developers to react to performance fluctuations and to re-deploy applications on different Clouds on the long term.", "title": "" }, { "docid": "aa3767bc35b8d465aa779f36fb40319e", "text": "This paper introduces the Minnesota Intrusion Detection System (MINDS), which uses a suite of data mining techniques to automatically detect attacks against computer networks and systems. While the long-term objective of MINDS is to address all aspects of intrusion detection, in this paper we present two specific contributions. 
First, we present MINDS anomaly detection module that assigns a score to each connection that reflects how anomalous the connection is compared to the normal network traffic. Experimental results on live network traffic at the University of Minnesota show that our anomaly detection techniques have been successful in automatically detecting several novel intrusions that could not be identified using state-of-the-art signature-based tools such as SNORT. Many of these have been reported on the CERT/CC list of recent advisories and incident notes. We also present the results of comparing the MINDS anomaly detection module to SPADE (Statistical Packet Anomaly Detection Engine), which is designed to detect stealthy scans.", "title": "" }, { "docid": "c848f8194856335a19bc195a79942d48", "text": "Managerial myopia in identifying competitive threats is a well-recognized phenomenon (Levitt, 1960; Zajac and Bazerman, 1991). Identifying such threats is particularly problematic, since they may arise from substitutability on the supply side as well as on the demand side. Managers who focus only on the product market arena in scanning their competitive environment may fail to notice threats that are developing due to the resources and latent capabilities of indirect or potential competitors. This paper brings together insights from the fields of strategic management and marketing to develop a simple but powerful set of tools for helping managers overcome this common problem. We present a two-stage framework for competitor identification and analysis that brings into consideration a broad range of competitors, including potential competitors, substitutors, and indirect competitors. Specifically we draw from Peteraf and Bergen’s (2001) framework for competitor identification to develop a hierarchy of competitor awareness. That is used, in combination with resource equivalence, to generate hypotheses on competitive analysis. This framework not only extends the ken of managers, but also facilitates an assessment of the strategic opportunities and threats that various competitors represent and allows managers to assess their significance in relative terms. Copyright # 2002 John Wiley & Sons, Ltd.", "title": "" }, { "docid": "238c9f73acb34acf6e0d1cd8b7adaeaa", "text": "Psychology research reports that people tend to seek companionship with those who have a similar level of extraversion, and markers in dialogue show the speaker’s extraversion. Work in human-computer interaction seeks to understand creating and maintaining rapport between humans and ECAs. This study examines if humans will report greater rapport when interacting with an agent with an extraversion/introversion profile similar to their own. ECAs representing an extrovert and an introvert were created by manipulating three dialogue features. Using an informal, task-oriented setting, participants interacted with one of the agents in an immersive environment. Results suggest that subjects did not report the greatest rapport when interacting with the agent most similar to their level of extraversion. Introduction People often seek companionship with those who have a personality similar to their own [11]. There is evidence that personality types are borne out in dialogue choices [1]. Humans are uniquely physically capable of speech, and tend to find spoken communication as the most efficient and comfortable way to interact, including with technology [2]. They respond to computer personalities in the same way as they would to human personalities [10]. 
Recent research has sought to understand the nature of creating and maintaining rapport—a sense of emotional connection—when communicating with embodied conversational agents (ECAs) [2, 8, 14]. Successful ECAs could serve in a number of useful applications, from education to care giving. As human relationships are fundamentally social and emotional, these qualities must be incorporated into ECAs if human-agent relationships are to feel natural to users. Research has been focused on the development and maintenance of rapport felt by humans when interacting with an ECA [11] and in developing ECA personalities [3]. However, questions remain as to which agent personality is the best match for developing rapport in human-ECA interactions. In this study, two agents representing an extravert and an introvert were created by manipulating three dialogue features. Using a task-oriented but informal set-", "title": "" }, { "docid": "a008e9f817c6c4658c9c739d0d7fb6a4", "text": "BI (Business Intelligence) is an important discipline for companies and the challenges it faces are strategic. A central concept in BI is the data warehouse, which is a set of consolidated data from heterogeneous sources (usually databases in 3NF). To model the data warehouse, the Inmon and Kimball approaches are the most used. Both solutions monopolize the BI market However, a third modeling approach called “Data Vault” of its creator Linstedt, is gaining ground from year to year. It allows building a data warehouse of raw (unprocessed) data from heterogeneous sources. The purpose of this paper is to present a comparative study of the three precedent approaches. First, we study each approach separately and then we draw a comparison between them. Finally, we include recommendations for selecting the best approach before concluding this paper.", "title": "" }, { "docid": "70aba7669b0d3d17dd43eb570042b769", "text": "The most critical step in the production of diphtheria vaccines is the inactivation of the toxin by formaldehyde. Diphtheria toxoid (DTx) is produced during this inactivation process through partly unknown, chemical modifications of the toxin. Consequently, diphtheria vaccines are difficult to characterise completely and the quality of the toxoids is routinely determined with potency and safety tests. This article describes the possibility of monitoring the quality in diphtheria vaccine production with a selection of physicochemical and immunochemical tests as an alternative to established in vivo tests. To this end, diphtheria toxin was treated with increasing formaldehyde concentrations resulting in toxoid products varying in potency and residual toxicity. Differences in the quality of the experimental toxoids were also assessed with physicochemical and immunochemical techniques. The results obtained with several of these analyses, including SDS-PAGE, primary amino group determination, fluorescence spectroscopy, circular dichroism (CD) and biosensor analysis, showed a clear correlation with the potency and safety tests. A set of criteria is proposed that a diphtheria toxoid must comply with, i.e. an apparent shift of the B-fragment on SDS-PAGE, a reduction of primary amino groups in a diphtheria molecule, an increased resistance to denaturation, an increased circular dichroism signal in the near-UV region and a reduced binding to selected monoclonal antibodies. 
In principle, a selected set of in vitro analyses can replace the classical in vivo tests to evaluate the quality of diphtheria toxoid vaccines, provided that the validity of these tests is demonstrated in extensive validation studies and regulatory acceptance is obtained.", "title": "" }, { "docid": "7aa07ba3e04a79cf51dfc9c42b415628", "text": "A model is presented that permits the calculation of densities of 60-Hz magnetic fields throughout a residence from only a few measurements. We assume that residential magnetic fields are produced by sources external to the house and by the residential grounding circuit. The field from external sources is measured with a single probe. The field produced by the grounding circuit is calculated from the current flowing in the circuit and its geometry. The two fields are combined to give a prediction of the total field at any point in the house. A data-acquisition system was built to record the magnitude and phase of the grounding current and the field from external sources. The model's predictions were compared with measurements of the total magnetic field at a single location in 23 houses; a correlation coefficient of .87 was obtained, indicating that the model has good predictive capability. A more detailed study that was carried out in one house permitted comparisons of measurements with the model's predictions at locations throughout the house. Again, quite reasonable agreement was found. We also investigated the temporal variability of field readings in this house. Daily magnetic field averages were found to be considerably more stable than hourly averages. Finally, we demonstrate the use of the model in creating a profile of the magnetic fields in a home.", "title": "" }, { "docid": "51fadc6bba7453a1260d41ab2b25133e", "text": "This paper presents a digits classification system to recognize telephone numbers written on signboards. Candidate regions of digits are extracted from an image through edge extraction, enhancement and labeling. Since the digits in the images often have skew and slant, the digits are recognized after the skew and slant correction. To correct the skew, Hough transform is used, and the slant is corrected using the method of circumscribing digits with tilted rectangles. In experiments, we tested a total of 1,332 images of signboards with 11,939 digits. We obtained a digit extraction rate of 99.2% and a correct digit recognition rate of 98.8%.", "title": "" }, { "docid": "25ef53569c432d0be63db3514e003c83", "text": "The rapid growth of small Internet connected devices, known as the Internet of Things (IoT), is creating a new set of challenges to create secure, private infrastructures. This paper reviews the current literature on the challenges and approaches to security and privacy in the Internet of Things, with a strong focus on how these aspects are handled in IoT middleware. We focus on IoT middleware because many systems are built from existing middleware and these inherit the underlying security properties of the middleware framework. The paper is composed of three main sections. Firstly, we propose a matrix of security and privacy threats for IoT. This matrix is used as the basis of a widespread literature review aimed at identifying requirements on IoT platforms and middleware. Secondly, we present a structured literature review of the available middleware and how security is handled in these middleware approaches. We utilise the requirements from the first phase to evaluate. 
Finally, we draw a set of conclusions and identify further work in this area. Subjects Computer Networks and Communications, Embedded Computing, Real-Time and Embedded Systems, Security and Privacy, World Wide Web and Web Science", "title": "" }, { "docid": "03024b4232d8c233ecfc0c6c9751de0e", "text": "Area X is a songbird basal ganglia nucleus that is required for vocal learning. Both Area X and its immediate surround, the medial striatum (MSt), contain cells displaying either striatal or pallidal characteristics. We used pathway-tracing techniques to compare directly the targets of Area X and MSt with those of the lateral striatum (LSt) and globus pallidus (GP). We found that the zebra finch LSt projects to the GP, substantia nigra pars reticulata (SNr) and pars compacta (SNc), but not the thalamus. The GP is reciprocally connected with the subthalamic nucleus (STN) and projects to the SNr and motor thalamus analog, the ventral intermediate area (VIA). In contrast to the LSt, Area X and surrounding MSt project to the ventral pallidum (VP) and dorsal thalamus via pallidal-like neurons. A dorsal strip of the MSt contains spiny neurons that project to the VP. The MSt, but not Area X, projects to the ventral tegmental area (VTA) and SNc, but neither MSt nor Area X projects to the SNr. Largely distinct populations of SNc and VTA dopaminergic neurons innervate Area X and surrounding the MSt. Finally, we provide evidence consistent with an indirect pathway from the cerebellum to the basal ganglia, including Area X. Area X projections thus differ from those of the GP and LSt, but are similar to those of the MSt. These data clarify the relationships among different portions of the oscine basal ganglia as well as among the basal ganglia of birds and mammals.", "title": "" }, { "docid": "ef95b5b3a0ff0ab0907565305d597a9d", "text": "Control flow defenses against ROP either use strict, expensive, but strong protection against redirected RET instructions with shadow stacks, or much faster but weaker protections without. In this work we study the inherent overheads of shadow stack schemes. We find that the overhead is roughly 10% for a traditional shadow stack. We then design a new scheme, the parallel shadow stack, and show that its performance cost is significantly less: 3.5%. Our measurements suggest it will not be easy to improve performance on current x86 processors further, due to inherent costs associated with RET and memory load/store instructions. We conclude with a discussion of the design decisions in our shadow stack instrumentation, and possible lighter-weight alternatives.", "title": "" }, { "docid": "27707a845bb3baf7a97cd14e81f8e7f0", "text": "This paper attempts to identify the importance of sentiment words in financial reports on financial risk. By using a financespecific sentiment lexicon, we apply regression and ranking techniques to analyze the relations between sentiment words and financial risk. The experimental results show that, based on the bag-of-words model, models trained on sentiment words only result in comparable performance to those on origin texts, which confirms the importance of financial sentiment words on risk prediction. Furthermore, the learned models suggest strong correlations between financial sentiment words and risk of companies. 
As a result, these findings are of great value for providing us more insight and understanding into the impact of financial sentiment words in financial reports.", "title": "" }, { "docid": "655e2fda8fd2e8f7a665ca64047399a0", "text": "This article describes a self-propelled dolphin robot that aims to create a stable and controllable experimental platform. A viable bioinspired approach to generate diverse instances of dolphin-like swimming online via a center pattern generator (CPG) network is proposed.The characteristic parameters affecting three-dimensional (3-D) swimming performance are further identified and discussed. Both interactive and programmed swimming tests are provided to illustrate the validity of the present scheme.", "title": "" }, { "docid": "27fb6d2424113e52e07022095dce8ca8", "text": "Middleware for wireless sensor network (WSN) has been proposed as an effective solution to ease the application development by providing high-level abstractions. One of the important tasks of middleware in WSN is to support event service. As an important paradigm for event service, publish / subscribe (pub/sub) can support the asynchronous data exchange for applications and has received a lot of attention in traditional distributed systems. In WSNs, however, the design of pub/sub, especially on composite events, has not been adequately addressed. In this paper, we present the design and implementation of PSWare, a pub / sub middleware for WSN which can support both primitive and composite events. Our contribution mainly includes three parts. First, we propose an event definition language (EDL), which is specifically tailored to WSNs and can achieve high expressiveness and availability in the definition of primitive and composite events. The application programmers of PSWare can use the proposed EDL to define events in a simple manner. We implemented a compiler to compile the program written in EDL into byte codes. Second, we develop a runtime environment on sensor nodes, which provide a platform to run the compiled byte codes. Finally, we propose a composite event detection protocol to detect the events in an energy-efficient fashion.", "title": "" }, { "docid": "a6f5c789c8b4c9f6066675ed11292745", "text": "We propose a shared task based on recent advances in learning to generate natural language from meaning representations using semantically unaligned data. The aNALoGuE challenge aims to evaluate and compare recent corpus-based methods with respect to their scalability to data size and target complexity, as well as to assess predictive quality of automatic evaluation metrics.", "title": "" }, { "docid": "9af09d6ba8b1628284f3169316993ee0", "text": "This paper proposed a retinal image segmentation method based on conditional Generative Adversarial Network (cGAN) to segment optic disc. The proposed model consists of two successive networks: generator and discriminator. The generator learns to map information from the observing input (i.e., retinal fundus color image), to the output (i.e., binary mask). Then, the discriminator learns as a loss function to train this mapping by comparing the ground-truth and the predicted output with observing the input image as a condition. Experiments were performed on two publicly available dataset; DRISHTI GS1 and RIM-ONE. The proposed model outperformed state-of-the-art-methods by achieving around 0.96 and 0.98 of Jaccard and Dice coefficients, respectively. 
Moreover, an image segmentation is performed in less than a second on recent GPU.", "title": "" }, { "docid": "916c7a159dd22d0a0c0d3f00159ad790", "text": "The concept of scalability was introduced to the IEEE 802.16 WirelessMAN Orthogonal Frequency Division Multiplexing Access (OFDMA) mode by the 802.16 Task Group e (TGe). A scalable physical layer enables standard-based solutions to deliver optimum performance in channel bandwidths ranging from 1.25 MHz to 20 MHz with fixed subcarrier spacing for both fixed and portable/mobile usage models, while keeping the product cost low. The architecture is based on a scalable subchannelization structure with variable Fast Fourier Transform (FFT) sizes according to the channel bandwidth. In addition to variable FFT sizes, the specification supports other features such as Advanced Modulation and Coding (AMC) subchannels, Hybrid Automatic Repeat Request (H-ARQ), high-efficiency uplink subchannel structures, Multiple-Input-MultipleOutput (MIMO) diversity, and coverage enhancing safety channels, as well as other OFDMA default features such as different subcarrier allocations and diversity schemes. The purpose of this paper is to provide a brief tutorial on the IEEE 802.16 WirelessMAN OFDMA with an emphasis on scalable OFDMA. INTRODUCTION The IEEE 802.16 WirelessMAN standard [1] provides specifications for an air interface for fixed, portable, and mobile broadband wireless access systems. The standard includes requirements for high data rate Line of Sight (LOS) operation in the 10-66 GHz range for fixed wireless networks as well as requirements for Non Line of Sight (NLOS) fixed, portable, and mobile systems operating in sub 11 GHz licensed and licensed-exempt bands. Because of its superior performance in multipath fading wireless channels, Orthogonal Frequency Division Multiplexing (OFDM) signaling is recommended in OFDM and WirelessMAN OFDMA Physical (PHY) layer modes of the 802.16 standard for operation in sub 11 GHz NLOS applications. OFDM technology has been recommended in other wireless standards such as Digital Video Broadcasting (DVB) [2] and Wireless Local Area Networking (WLAN) [3]-[4], and it has been successfully implemented in the compliant solutions. Amendments for PHY and Medium Access Control (MAC) layers for mobile operation are being developed (working drafts [5] are being debated at the time of publication of this paper) by TGe of the 802.16 Working Group. The task group’s responsibility is to develop enhancement specifications to the standard to support Subscriber Stations (SS) moving at vehicular speeds and thereby specify a system for combined fixed and mobile broadband wireless access. Functions to support optional PHY layer structures, mobile-specific MAC enhancements, higher-layer handoff between Base Stations (BS) or sectors, and security features are among those specified. Operation in mobile mode is limited to licensed bands suitable for mobility between 2 and 6 GHz. Unlike many other OFDM-based systems such as WLAN, the 802.16 standard supports variable bandwidth sizes between 1.25 and 20 MHz for NLOS operations. This feature, along with the requirement for support of combined fixed and mobile usage models, makes the need for a scalable design of OFDM signaling inevitable. 
More specifically, neither one of the two OFDM-based modes of the 802.16 standard, WirelessMAN OFDM and OFDMA (without scalability option), can deliver the kind of performance required for operation in vehicular mobility multipath fading environments for all bandwidths in the specified range, without scalability enhancements that guarantee fixed subcarrier spacing for OFDM signals. The concept of scalable OFDMA is introduced to the IEEE 802.16 WirelessMAN OFDMA mode by the 802.16 TGe and has been the subject of many contributions to the standards committee [6]-[9]. Other features such as AMC subchannels, Hybrid Automatic Repeat Request (H-ARQ), high-efficiency Uplink (UL) subchannel structures, Multiple-Input-Multiple-Output (MIMO) diversity, enhanced Advanced Antenna Systems (AAS), and coverage enhancing safety channels were introduced [10]-[14] simultaneously to enhance coverage and capacity of mobile systems while providing the tools to trade off mobility with capacity. The rest of the paper is organized as follows. In the next section we cover multicarrier system requirements, drivers of scalability, and design tradeoffs. We follow that with a discussion in the following six sections of the OFDMA frame structure, subcarrier allocation modes, Downlink (DL) and UL MAP messaging, diversity options, ranging in OFDMA, and channel coding options. Note that although the IEEE P802.16-REVd was ratified shortly before the submission of this paper, the IEEE P802.16e was still in draft stage at the time of submission, and the contents of this paper therefore are based on proposed contributions to the working group. MULTICARRIER DESIGN REQUIREMENTS AND TRADEOFFS A typical early step in the design of an Orthogonal Frequency Division Multiplexing (OFDM)-based system is a study of subcarrier design and the size of the Fast Fourier Transform (FFT) where the optimal operational point balancing protection against multipath, Doppler shift, and design cost/complexity is determined. For this, we use Wide-Sense Stationary Uncorrelated Scattering (WSSUS), a widely used method to model time-varying fading wireless channels both in time and frequency domains using stochastic processes. Two main elements of the WSSUS model are briefly discussed here: Doppler spread and coherence time of the channel; and multipath delay spread and coherence bandwidth. A maximum speed of 125 km/hr is used here in the analysis for support of mobility. With the exception of high-speed trains, this provides a good coverage of vehicular speed in the US, Europe, and Asia. The maximum Doppler shift [15] corresponding to operation at 3.5 GHz (selected as a middle point in the 2-6 GHz frequency range) is given by Equation (1): f_m = ν / λ = (35 m/s) / (0.086 m) ≈ 408 Hz. The worst-case Doppler shift value for 125 km/hr (35 m/s) would be ~700 Hz for operation at the 6 GHz upper limit specified by the standard. Using a 10 kHz subcarrier spacing, the Inter Channel Interference (ICI) power corresponding to the Doppler shift calculated in Equation (1) can be shown [16] to be limited to ~-27 dB.
The coherence time of the channel, a measure of time variation in the channel, corresponding to the Doppler shift specified above, is calculated in Equation (2) [15].", "title": "" }, { "docid": "461062a51b0c33fcbb0f47529f3a6fba", "text": "Release of ATP from astrocytes is required for Ca2+ wave propagation among astrocytes and for feedback modulation of synaptic functions. However, the mechanism of ATP release and the source of ATP in astrocytes are still not known. Here we show that incubation of astrocytes with FM dyes leads to selective labelling of lysosomes. Time-lapse confocal imaging of FM dye-labelled fluorescent puncta, together with extracellular quenching and total-internal-reflection fluorescence microscopy (TIRFM), demonstrated directly that extracellular ATP or glutamate induced partial exocytosis of lysosomes, whereas an ischaemic insult with potassium cyanide induced both partial and full exocytosis of these organelles. We found that lysosomes contain abundant ATP, which could be released in a stimulus-dependent manner. Selective lysis of lysosomes abolished both ATP release and Ca2+ wave propagation among astrocytes, implicating physiological and pathological functions of regulated lysosome exocytosis in these cells.", "title": "" }, { "docid": "42979dd6ad989896111ef4de8d26b2fb", "text": "Online dating services let users expand their dating pool beyond their social network and specify important characteristics of potential partners. To assess compatibility, users share personal information — e.g., identifying details or sensitive opinions about sexual preferences or worldviews — in profiles or in one-on-one communication. Thus, participating in online dating poses inherent privacy risks. How people reason about these privacy risks in modern online dating ecosystems has not been extensively studied. We present the results of a survey we designed to examine privacy-related risks, practices, and expectations of people who use or have used online dating, then delve deeper using semi-structured interviews. We additionally analyzed 400 Tinder profiles to explore how these issues manifest in practice. Our results reveal tensions between privacy and competing user values and goals, and we demonstrate how these results can inform future designs.", "title": "" } ]
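A worked restatement of the Doppler-shift arithmetic in the scalable-OFDMA passage above, with a coherence-time estimate added for context. The coherence-time expression is an assumption (the common Rappaport rule of thumb), since the passage's Equation (2) is not reproduced in this excerpt.

\[ f_m = \frac{v}{\lambda} = \frac{35\ \text{m/s}}{0.086\ \text{m}} \approx 408\ \text{Hz} \quad \text{(3.5 GHz carrier, 125 km/hr)} \]
\[ f_{m,\max} = \frac{35\ \text{m/s}}{0.05\ \text{m}} = 700\ \text{Hz} \quad \text{(6 GHz upper limit)} \]
\[ T_c \approx \frac{0.423}{f_m} \approx \frac{0.423}{408\ \text{Hz}} \approx 1.0\ \text{ms} \quad \text{(assumed form of Equation (2))} \]

With a 10 kHz subcarrier spacing, the 408 Hz shift amounts to roughly 4% of the spacing, which is the normalized Doppler that the ICI estimate in the passage refers to.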
scidocsrr
478cda3fc497ae856dcde7cfb19cf971
Quasar: a probabilistic publish-subscribe system for social networks
[ { "docid": "fc1c3291c631562a6d1b34d5b5ccd27e", "text": "There are many methods for making a multicast protocol “reliable.” At one end of the spectrum, a reliable multicast protocol might offer tomicity guarantees, such as all-or-nothing delivery, delivery ordering, and perhaps additional properties such as virtually synchronous addressing. At the other are protocols that use local repair to overcome transient packet loss in the network, offering “best effort” reliability. Yet none of this prior work has treated stability of multicast delivery as a basic reliability property, such as might be needed in an internet radio, television, or conferencing application. This article looks at reliability with a new goal: development of a multicast protocol which is reliable in a sense that can be rigorously quantified and includes throughput stability guarantees. We characterize this new protocol as a “bimodal multicast” in reference to its reliability model, which corresponds to a family of bimodal probability distributions. Here, we introduce the protocol, provide a theoretical analysis of its behavior, review experimental results, and discuss some candidate applications. These confirm that bimodal multicast is reliable, scalable, and that the protocol provides remarkably stable delivery throughput.", "title": "" } ]
[ { "docid": "4071b0a0f3887a5ad210509e6ad5498a", "text": "Nowadays, the IoT is largely dependent on sensors. The IoT devices are embedded with sensors and have the ability to communicate. A variety of sensors play a key role in networked devices in IoT. In order to facilitate the management of such sensors, this paper investigates how to use SNMP protocol, which is widely used in network device management, to implement sensors information management of IoT system. The principles and implement details to setup the MIB file, agent and manager application are discussed. A prototype system is setup to validate our methods. The test results show that because of its easy use and strong expansibility, SNMP is suitable and a bright way for sensors information management of IoT system.", "title": "" }, { "docid": "dc92e3feb9ea6a20d73962c0905f623b", "text": "Software maintenance consumes around 70% of the software life cycle. Improving software maintainability could save software developers significant time and money. This paper examines whether the pattern of dependency injection significantly reduces dependencies of modules in a piece of software, therefore making the software more maintainable. This hypothesis is tested with 20 sets of open source projects from sourceforge.net, where each set contains one project that uses the pattern of dependency injection and one similar project that does not use the pattern. The extent of the dependency injection use in each project is measured by a new Number of DIs metric created specifically for this analysis. Maintainability is measured using coupling and cohesion metrics on each project, then performing statistical analysis on the acquired results. After completing the analysis, no correlation was evident between the use of dependency injection and coupling and cohesion numbers. However, a trend towards lower coupling numbers in projects with a dependency injection count of 10% or more was observed.", "title": "" }, { "docid": "dcdd6419d4cdbd53f07cf8a9eba48e8c", "text": "The use of RFID devices for real-time production monitoring in modern factories is impeded by the inherent unreliability of RFID devices. In this paper we present a consistency stack that conceptually divides the different consistency issues in production monitoring into separate layers. In addition to this we have built a consistency management framework to ensure consistent real-time production monitoring, using unreliable RFID devices. In detail, we deal with the problem of detecting object sequences by a set of unreliable RFID readers that are installed along production lines. We propose a probabilistic sequence detection algorithm that assigns probabilities to objects detected by RFID devices and provides probabilistic guarantees regarding the real-time sequences of objects on the production lines.", "title": "" }, { "docid": "1ce4d3cd97ad8aa06f0c0a97ec7b4584", "text": "A crucial element in many mobile fitness applications is gamification that makes physical activities fun. While many methods focus on competition and individual users' interaction with the game, the aspect of social interaction and how users play games together in a group remains an open subject. To investigate these issues, we developed a mobile game, HealthyTogether, to understand how users interact in different group gamification settings: competition, cooperation, or hybrid. We describe the results of a user study involving 18 dyads (N=36) over a period of two weeks. 
Results show that users significantly enhanced physical activities using HealthyTogether compared with when they exercised alone by up to 15%. Among the group settings, cooperation (21% increase) and hybrid (18% increase) outperformed competition (8% increase). Additionally, users sent significantly more messages in cooperation setting than hybrid and competition. Furthermore, physical activities are positively correlated with the number of messages they exchanged. Based on the findings, we derive design implications for practitioners.", "title": "" }, { "docid": "7a6f97457f70e2d7dbcd488f9ed6c390", "text": "This paper proposes a novel participant selection framework, named CrowdRecruiter, for mobile crowdsensing. CrowdRecruiter operates on top of energy-efficient Piggyback Crowdsensing (PCS) task model and minimizes incentive payments by selecting a small number of participants while still satisfying probabilistic coverage constraint. In order to achieve the objective when piggybacking crowdsensing tasks with phone calls, CrowdRecruiter first predicts the call and coverage probability of each mobile user based on historical records. It then efficiently computes the joint coverage probability of multiple users as a combined set and selects the near-minimal set of participants, which meets coverage ratio requirement in each sensing cycle of the PCS task. We evaluated CrowdRecruiter extensively using a large-scale real-world dataset and the results show that the proposed solution significantly outperforms three baseline algorithms by selecting 10.0% -- 73.5% fewer participants on average under the same probabilistic coverage constraint.", "title": "" }, { "docid": "986f469fc8d367baa8ad0db10caf3241", "text": "While color information is known to provide rich discriminative clues for visual inference, most modern visual trackers limit themselves to the grayscale realm. Despite recent efforts to integrate color in tracking, there is a lack of comprehensive understanding of the role color information can play. In this paper, we attack this problem by conducting a systematic study from both the algorithm and benchmark perspectives. On the algorithm side, we comprehensively encode 10 chromatic models into 16 carefully selected state-of-the-art visual trackers. On the benchmark side, we compile a large set of 128 color sequences with ground truth and challenge factor annotations (e.g., occlusion). A thorough evaluation is conducted by running all the color-encoded trackers, together with two recently proposed color trackers. A further validation is conducted on an RGBD tracking benchmark. The results clearly show the benefit of encoding color information for tracking. We also perform detailed analysis on several issues, including the behavior of various combinations between color model and visual tracker, the degree of difficulty of each sequence for tracking, and how different challenge factors affect the tracking performance. We expect the study to provide the guidance, motivation, and benchmark for future work on encoding color in visual tracking.", "title": "" }, { "docid": "0039f089fa355bb1e6c980e1d6fb1b64", "text": "YouTube, with millions of content creators, has become the preferred destination for watching videos online. Through the Partner program, YouTube allows content creators to monetize their popular videos. Of significant importance for content creators is which meta-level features (e.g. title, tag, thumbnail) are most sensitive for promoting video popularity. 
The popularity of videos also depends on the social dynamics, i.e. the interaction of the content creators (or channels) with YouTube users. Using real-world data consisting of about 6 million videos spread over 25 thousand channels, we empirically examine the sensitivity of YouTube meta-level features and social dynamics. The key meta-level features that impact the view counts of a video include: first day view count , number of subscribers, contrast of the video thumbnail, Google hits, number of keywords, video category, title length, and number of upper-case letters in the title respectively and illustrate that these meta-level features can be used to estimate the popularity of a video. In addition, optimizing the meta-level features after a video is posted increases the popularity of videos. In the context of social dynamics, we discover that there is a causal relationship between views to a channel and the associated number of subscribers. Additionally, insights into the effects of scheduling and video playthrough in a channel are also provided. Our findings provide a useful understanding of user engagement in YouTube.", "title": "" }, { "docid": "7f070d85f4680a2b88d3b530dff0cfc5", "text": "An extensive data search among various types of developmental and evolutionary sequences yielded a `four quadrant' model of consciousness and its development (the four quadrants being intentional, behavioural, cultural, and social). Each of these dimensions was found to unfold in a sequence of at least a dozen major stages or levels. Combining the four quadrants with the dozen or so major levels in each quadrant yields an integral theory of consciousness that is quite comprehensive in its nature and scope. This model is used to indicate how a general synthesis and integration of twelve of the most influential schools of consciousness studies can be effected, and to highlight some of the most significant areas of future research. The conclusion is that an `all-quadrant, all-level' approach is the minimum degree of sophistication that we need into order to secure anything resembling a genuinely integral theory of consciousness.", "title": "" }, { "docid": "81f6c52bb579645e5919eac629c90f6d", "text": "A DEA-based stochastic estimation framework is presented to evaluate contextual variables affecting productivity. Conditions are identified under which a two-stage procedure consisting of DEA followed by regression analysis yields consistent estimators of the impact of contextual variables. Conditions are also identified under which DEA in the first stage followed by maximum likelihood estimation in the second stage yields consistent estimators of the impact of contextual variables. Monte Carlo simulations are carried out to compare the performance of our two-stage approach with one-stage and two-stage parametric approaches. Simulation results suggest that DEA-based procedures perform as well as the best parametric method in the estimation of the impact of contextual variables on productivity. Simulation results also indicate that DEA-based procedures perform better than parametric methods in the estimation of individual decision making unit (DMU) productivity. (", "title": "" }, { "docid": "ee397703a8d5a751c7fd7c76f92ebd73", "text": "Autografting of dopamine-producing adrenal medullary tissue to the striatal region of the brain is now being attempted in patients with Parkinson's disease. 
Since the success of this neurosurgical approach to dopamine-replacement therapy may depend on the selection of the most appropriate subregion of the striatum for implantation, we examined the pattern and degree of dopamine loss in striatum obtained at autopsy from eight patients with idiopathic Parkinson's disease. We found that in the putamen there was a nearly complete depletion of dopamine in all subdivisions, with the greatest reduction in the caudal portions (less than 1 percent of the dopamine remaining). In the caudate nucleus, the only subdivision with severe dopamine reduction was the most dorsal rostral part (4 percent of the dopamine remaining); the other subdivisions still had substantial levels of dopamine (up to approximately 40 percent of control levels). We propose that the motor deficits that are a constant and characteristic feature of idiopathic Parkinson's disease are for the most part a consequence of dopamine loss in the putamen, and that the dopamine-related caudate deficits (in \"higher\" cognitive functions) are, if present, less marked or restricted to discrete functions only. We conclude that the putamen--particularly its caudal portions--may be the most appropriate site for intrastriatal application of dopamine-producing autografts in patients with idiopathic Parkinson's disease.", "title": "" }, { "docid": "86df4a413696826b615ddd6004189884", "text": "In this paper, we consider two important problems defined on finite metric spaces, and provide efficient new algorithms and approximation schemes for these problems on inputs given as graph shortest path metrics or high-dimensional Euclidean metrics. The first of these problems is the greedy permutation (or farthest-first traversal) of a finite metric space: a permutation of the points of the space in which each point is as far as possible from all previous points. We describe randomized algorithms to find (1 + ε)-approximate greedy permutations of any graph with n vertices and m edges in expected time O(ε^(-1)(m + n) log n log(n/ε)), and to find (1 + ε)-approximate greedy permutations of points in high-dimensional Euclidean spaces in expected time O(ε^(-2) n^(1 + 1/(1+ε)^2 + o(1))). Additionally, we describe a deterministic algorithm to find exact greedy permutations of any graph with n vertices and treewidth O(1) in worst-case time O(n^(3/2) log n). The second of the two problems we consider is distance selection: given k ∈ {1, …, n(n−1)/2}, we are interested in computing the kth smallest distance in the given metric space. We show that for planar graph metrics one can approximate this distance, up to a constant factor, in near linear time.", "title": "" }, { "docid": "9157266c7dea945bf5a68f058836e681", "text": "For the task of implicit discourse relation recognition, traditional models utilizing manual features can suffer from the data sparsity problem. Neural models provide a solution with distributed representations, which could encode the latent semantic information, and are suitable for recognizing semantic relations between argument pairs. However, conventional vector representations usually adopt embeddings at the word level and cannot handle the rare word problem well without carefully considering morphological information at the character level. Moreover, embeddings are assigned to individual words independently, which lacks crucial contextual information. This paper proposes a neural model utilizing context-aware character-enhanced embeddings to alleviate the drawbacks of the current word-level representation.
Our experiments show that the enhanced embeddings work well and the proposed model obtains state-of-the-art results.", "title": "" }, { "docid": "dc708a73438124f69c9ac75f0f127710", "text": "Machine learning algorithms often suffer from poor generalization in testing domains, especially when the training (source) and test (target) domains do not have similar distributions. To address this problem, several domain adaptation techniques have been proposed to improve the performance of the learning algorithms when they face accuracy degradation caused by the domain shift problem. In this paper, we focus on non-homogeneously distributed target domains and propose a new latent subdomain discovery model to divide the target domain into subdomains while adapting them. It is expected that applying adaptation to subdomains increases the detection rate compared with treating the target domain as one single domain. The proposed division method considers each subdomain as a cluster that has a definite ratio of positive to negative samples, linear discriminability, and conditional distribution similarity to the source domain. This method divides the target domain into subdomains while adapting the trained target classifier for each subdomain using the Adapt-SVM adaptation method. It also has a simple solution for selecting the appropriate number of subdomains. We call our proposed method Cluster-based Adaptive SVM, or CA-SVM for short. We test CA-SVM on two different computer vision problems: pedestrian detection and image classification. The experimental results show an accuracy advantage for our approach in comparison to several baselines.", "title": "" }, { "docid": "57b84ac6866e3e60aae874c4d00e5815", "text": "A large class of problems can be formulated in terms of the assignment of labels to objects. Frequently, processes are needed which reduce ambiguity and noise, and select the best label among several possible choices. Relaxation labeling processes are just such a class of algorithms. They are based on the parallel use of local constraints between labels. This paper develops a theory to characterize the goal of relaxation labeling. The theory is founded on a definition of consistency in labelings, extending the notion of constraint satisfaction. In certain restricted circumstances, an explicit functional exists that can be maximized to guide the search for consistent labelings. This functional is used to derive a new relaxation labeling operator. When the restrictions are not satisfied, the theory relies on variational calculus. It is shown that the problem of finding consistent labelings is equivalent to solving a variational inequality. A procedure nearly identical to the relaxation operator derived under restricted circumstances serves in the more general setting. Further, a local convergence result is established for this operator. The standard relaxation labeling formulas are shown to approximate our new operator, which leads us to conjecture that successful applications of the standard methods are explainable by the theory developed here. Observations about convergence and generalizations to higher order compatibility relations are described.", "title": "" }, { "docid": "2d404ea42ea4e4a0a20778c586c2490b", "text": "This paper presents a method for losslessly compressing multi-channel electroencephalogram signals. The Karhunen-Loeve transform is used to exploit the inter-correlation among the EEG channels.
The transform is approximated using lifting scheme which results in a reversible realization under finite precision processing. An integer time-frequency transform is applied to further minimize the temporal redundancy", "title": "" }, { "docid": "7dc7eaef334fc7678821fa66424421f1", "text": "The present research complements extant variable-centered research that focused on the dimensions of autonomous and controlled motivation through adoption of a person-centered approach for identifying motivational profiles. Both in high school students (Study 1) and college students (Study 2), a cluster analysis revealed 4 motivational profiles: a good quality motivation group (i.e., high autonomous, low controlled); a poor quality motivation group (i.e., low autonomous, high controlled); a low quantity motivation group (i.e., low autonomous, low controlled); and a high quantity motivation group (i.e., high autonomous, high controlled). To compare the 4 groups, the authors derived predictions from qualitative and quantitative perspectives on motivation. Findings generally favored the qualitative perspective; compared with the other groups, the good quality motivation group displayed the most optimal learning pattern and scored highest on perceived need-supportive teaching. Theoretical and practical implications of the findings are discussed.", "title": "" }, { "docid": "4b22eaf527842e0fa41a1cd740ad9b40", "text": "Music transcription is the process of creating a written score of music from an audio recording. Musicians and musicologists use transcription to better understand music that may not have a written form, from improvised jazz solos to traditional folk music. Automatic music transcription introduces signal-processing algorithms to extract pitch and rhythm information from recordings. This speeds up and automates the process of music transcription, which requires musical training and is very time consuming even for experts. This thesis explores the still unsolved problem of automatic music transcription through an in-depth analysis of the problem itself and an overview of different techniques to solve the hardest subtask of music transcription, multiple pitch estimation. It concludes with a close study of a typical multiple pitch estimation algorithm and highlights the challenges that remain unsolved.", "title": "" }, { "docid": "9d3189e7a9c585ee9dfc61280eb4c317", "text": "ICT-enhanced research methods such as educational data mining (EDM) have allowed researchers to effectively model a broad range of constructs pertaining to the student, moving from traditional assessments of knowledge to assessment of engagement, meta-cognition, strategy, and affect. The automated detection of these constructs allows EDM researchers to develop intervention strategies that can be implemented either by the software or the teacher. It also allows for secondary analyses of the construct, where the detectors are applied to a data set that is much larger than one that could be analyzed by more traditional methods. However, in many cases, the data used to develop EDM models is collected from students who may not be representative of the broader populations who are likely to use ICT. In order to use EDM models (automated detectors) with new populations, their generalizability must be verified. In this study, we examine whether detectors of affect remain valid when applied to new populations. 
Models of four educationally relevant affective states were constructed based on data from urban, suburban, and rural students using ASSISTments software for middle school mathematics in the Northeastern United States. We find that affect detectors trained on a population drawn primarily from one demographic grouping do not generalize to populations drawn primarily from the other demographic groupings, even though those populations might be considered part of the same national or regional culture. Models constructed using data from all three sub-populations are more applicable to students in those populations than those trained on a single group, but still do not achieve ideal population validity—the ability to generalize across all sub-groups. In particular, models generalize better across urban and suburban students than across rural students. These findings have important implications for data collection efforts, validation techniques, and the design of interventions that are intended to be applied at scale.", "title": "" }, { "docid": "9c2f8f7094b48100d594280fee455fe9", "text": "Training very deep neural networks is very difficult because of gradient degradation. However, the incomparable expressiveness of the many deep layers is highly desirable at testing time and usually leads to better performance. Recently, training techniques such as residual networks that enable us to train very deep networks have proved to be a great success. In this paper, we studied the application of the recently proposed deep networks with stochastic depth (DNSD) to train deeper acoustic models for speech recognition. By randomly dropping a subset of layers during training, the studied stochastic depth training method helps reduce the training time substantially, yet the networks trained are much deeper since all the layers are kept during testing. We investigated this approach on the TIMIT data set. Our preliminary experimental results show that when training data are limited, stochastic depth helps very little. However, when more training data are available, DNSD significantly improves the recognition accuracy, compared with conventional deep neural networks.", "title": "" } ]
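The stochastic-depth passage above describes the core training trick only at a high level, so the following is a minimal, self-contained sketch of that idea: randomly dropping whole residual blocks during training while keeping every block (rescaled by its survival probability) at test time. It is not the paper's acoustic-model implementation; the toy linear blocks, the linear survival schedule, and names such as `survival_probs` are assumptions for illustration.

```python
# Minimal sketch of stochastic-depth training for a stack of residual blocks.
# Toy linear "blocks" stand in for real layers; numpy only.
import numpy as np

rng = np.random.default_rng(0)

def make_block(dim):
    # A toy residual block: x -> x + relu(x @ W); W is a placeholder weight matrix.
    return 0.01 * rng.standard_normal((dim, dim))

def forward(x, blocks, survival_probs, training):
    for W, p in zip(blocks, survival_probs):
        residual = np.maximum(x @ W, 0.0)      # relu(x W)
        if training:
            if rng.random() < p:               # keep this block with probability p
                x = x + residual
            # otherwise the whole block is skipped for this pass (identity shortcut only)
        else:
            x = x + p * residual               # test time: every block kept,
                                               # scaled by its survival probability
    return x

# Linearly decaying survival schedule, as commonly used with stochastic depth.
depth, dim, p_last = 12, 8, 0.5
survival_probs = [1.0 - (l / (depth - 1)) * (1.0 - p_last) for l in range(depth)]
blocks = [make_block(dim) for _ in range(depth)]

x = rng.standard_normal((4, dim))              # a toy mini-batch
train_out = forward(x, blocks, survival_probs, training=True)
test_out = forward(x, blocks, survival_probs, training=False)
print(train_out.shape, test_out.shape)
```

The expected network depth seen during any one training pass is the sum of the survival probabilities, which is where the training-time savings reported in the passage come from, while the full depth remains available at test time.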
scidocsrr
13063c0bfcfc14a4df58913cf849c5d4
Pander to ponder
[ { "docid": "141b333f0c7b256be45c478a79e8f8eb", "text": "Communications regulators over the next decade will spend increasing time on conflicts between the private interests of broadband providers and the public’s interest in a competitive innovation environment centered on the Internet. As the policy questions this conflict raises are basic to communications policy, they are likely to reappear in many different forms. So far, the first major appearance has come in the ‘‘open access’’ (or ‘‘multiple access’’) debate, over the desirability of allowing vertical integration between Internet Service Providers and cable operators. Proponents of open access see it as a structural remedy to guard against an erosion of the ‘‘neutrality’’ of the network as between competing content and applications. Critics, meanwhile, have taken open-access regulation as unnecessary and likely to slow the pace of broadband deployment.", "title": "" } ]
[ { "docid": "6dd810d8a5180b49ded351f0acf135b8", "text": "In classification problem, we assume that the samples around the class boundary are more likely to be incorrectly annotated than others, and propose boundaryconditional class noise (BCN). Based on the BCN assumption, we use unnormalized Gaussian and Laplace distributions to directly model how class noise is generated, in symmetric and asymmetric cases. In addition, we demonstrate that Logistic regression and Probit regression can also be reinterpreted from this class noise perspective, and compare them with the proposed models. The empirical study shows that, the proposed asymmetric models overall outperform the benchmark linear models, and the asymmetric Laplace-noise model achieves the best performance among all.", "title": "" }, { "docid": "b17889bc5f4d4fb498a9b9c5d45bd560", "text": "Photonic components are superior to electronic ones in terms of operational bandwidth, but the diffraction limit of light poses a significant challenge to the miniaturization and high-density integration of optical circuits. The main approach to circumvent this problem is to exploit the hybrid nature of surface plasmon polaritons (SPPs), which are light waves coupled to free electron oscillations in a metal that can be laterally confined below the diffraction limit using subwavelength metal structures. However, the simultaneous realization of strong confinement and a propagation loss sufficiently low for practical applications has long been out of reach. Channel SPP modes—channel plasmon polaritons (CPPs)—are electromagnetic waves that are bound to and propagate along the bottom of V-shaped grooves milled in a metal film. They are expected to exhibit useful subwavelength confinement, relatively low propagation loss, single-mode operation and efficient transmission around sharp bends. Our previous experiments showed that CPPs do exist and that they propagate over tens of micrometres along straight subwavelength grooves. Here we report the design, fabrication and characterization of CPP-based subwavelength waveguide components operating at telecom wavelengths: Y-splitters, Mach–Zehnder interferometers and waveguide–ring resonators. We demonstrate that CPP guides can indeed be used for large-angle bending and splitting of radiation, thereby enabling the realization of ultracompact plasmonic components and paving the way for a new class of integrated optical circuits.", "title": "" }, { "docid": "53adce741d07ad54c10eef30cca63db3", "text": "A new method for deriving limb segment motion from markers placed on the skin is described. The method provides a basis for determining the artifact associated with nonrigid body movement of points placed on the skin. The method is based on a cluster of points uniformly distributed on the limb segment. Each point is assigned an arbitrary mass. The center of mass and the inertia tensor of this cluster of points are calculated. The eigenvalues and eigenvectors of the inertia tensor are used to define a coordinate system in the cluster as well as to provide a basis for evaluating non-rigid body movement. The eigenvalues of the inertia tensor remain invariant if the segment is behaving as a rigid body, thereby providing a basis for determining variations for nonrigid body movement. The method was tested in a simulation model where systematic and random errors were introduced into a fixed cluster of points. The simulation demonstrated that the error due to nonrigid body movement could be substantially reduced. 
The method was also evaluated in a group of ten normal subjects during walking. The results for knee rotation and translation obtained from the point cluster method compared favorably to results previously obtained from normal subjects with intra-cortical pins placed into the femur and tibia. The resulting methodology described in this paper provides a unique approach to the measurement of in vivo motion using skin-based marker systems.", "title": "" }, { "docid": "8d534b2acb7e501f0c20d7daea943b84", "text": "A leading edge 14 nm SoC platform technology based upon the 2nd generation Tri-Gate transistor technology [5] has been optimized for density, low power and wide dynamic range. 70 nm gate pitch, 52 nm metal pitch and 0.0499 um2 HDC SRAM cells are the most aggressive design rules reported for 14/16 nm node SoC process to achieve Moore's Law 2x density scaling over 22 nm node. High performance NMOS/PMOS drive currents of 1.3/1.2 mA/um, respectively, have been achieved at 0.7 V and 100 nA/um off-state leakage, 37%/50% improvement over 22 nm node. Ultra-low power NMOS/PMOS drives are 0.50/0.32 mA/um at 0.7 V and 15pA/um Ioff. This technology also deploys high voltage I/O transistors to support up to 3.3 V I/O. A full suite of analog, mixed-signal and RF features are also supported.", "title": "" }, { "docid": "e509d0aa776dcb649349ec3d49a347f1", "text": "Fibrous dysplasia (FD) is a benign fibro-osseous bone disease of unknown etiology and uncertain pathogenesis. When bone maturation is completed, indicating the occurence of stabilization is a strong evidence of mechanism. The lesion frequently affects the craniofacial skeleton. The maxilla is affected twice comparing mandible and occurs more frequently in the posterior area. In this case, a 16 year-old female patient is presented who was diagnosed as having maxillofacial fibrous dysplasia. *Corresponding author: Gözde Canıtezer, Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Ondokuz Mayıs University, 55139 Kurupelit, Samsun, Turkey, Tel: +90 362 3121919-3012, +90 505 8659063; Fax: +90 362 4576032; E-mail: [email protected] Received October 02, 2013; Accepted November 04, 2013; Published November 06, 2013 Citation: Canıtezer G, Gunduz K, Ozden B, Kose HI (2013) Monostotic Fibrous Dysplasia: A Case Report. Dentistry 3: 1667. doi:10.4172/2161-1122.1000167 Copyright: © 2013 Canıtezer G, et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.", "title": "" }, { "docid": "59291cb1c13ab274f06b619698784e23", "text": "We present a new class of Byzantine-tolerant State Machine Replication protocols for asynchronous environments that we term Byzantine Chain Replication. We demonstrate two implementations that present different trade-offs between performance and security, and compare these with related work. Leveraging an external reconfiguration service, these protocols are not based on Byzantine consensus, do not require majoritybased quorums during normal operation, and the set of replicas is easy to reconfigure. One of the implementations is instantiated with t+ 1 replicas to tolerate t failures and is useful in situations where perimeter security makes malicious attacks unlikely. 
Applied to in-memory BerkeleyDB replication, it supports 20,000 transactions per second while a fully Byzantine implementation supports 12,000 transactions per second—about 70% of the throughput of a non-replicated database.", "title": "" }, { "docid": "2e9d6ad38bd51fbd7af165e4b9262244", "text": "BACKGROUND\nThe assessment of blood lipids is very frequent in clinical research as it is assumed to reflect the lipid composition of peripheral tissues. Even well accepted such relationships have never been clearly established. This is particularly true in ophthalmology where the use of blood lipids has become very common following recent data linking lipid intake to ocular health and disease. In the present study, we wanted to determine in humans whether a lipidomic approach based on red blood cells could reveal associations between circulating and tissue lipid profiles. To check if the analytical sensitivity may be of importance in such analyses, we have used a double approach for lipidomics.\n\n\nMETHODOLOGY AND PRINCIPAL FINDINGS\nRed blood cells, retinas and optic nerves were collected from 9 human donors. The lipidomic analyses on tissues consisted in gas chromatography and liquid chromatography coupled to an electrospray ionization source-mass spectrometer (LC-ESI-MS). Gas chromatography did not reveal any relevant association between circulating and ocular fatty acids except for arachidonic acid whose circulating amounts were positively associated with its levels in the retina and in the optic nerve. In contrast, several significant associations emerged from LC-ESI-MS analyses. Particularly, lipid entities in red blood cells were positively or negatively associated with representative pools of retinal docosahexaenoic acid (DHA), retinal very-long chain polyunsaturated fatty acids (VLC-PUFA) or optic nerve plasmalogens.\n\n\nCONCLUSIONS AND SIGNIFICANCE\nLC-ESI-MS is more appropriate than gas chromatography for lipidomics on red blood cells, and further extrapolation to ocular lipids. The several individual lipid species we have identified are good candidates to represent circulating biomarkers of ocular lipids. However, further investigation is needed before considering them as indexes of disease risk and before using them in clinical studies on optic nerve neuropathies or retinal diseases displaying photoreceptors degeneration.", "title": "" }, { "docid": "cbdcd68fdcbb7b05f32a70225de00a65", "text": "This paper proposes a network architecture to perform variable length semantic video generation using captions. We adopt a new perspective towards video generation where we allow the captions to be combined with the long-term and short-term dependencies between video frames and thus generate a video in an incremental manner. Our experiments demonstrate our network architecture’s ability to distinguish between objects, actions and interactions in a video and combine them to generate videos for unseen captions. The network also exhibits the capability to perform spatio-temporal style transfer when asked to generate videos for a sequence of captions. We also show that the network’s ability to learn a latent representation allows it generate videos in an unsupervised manner and perform other tasks such as action recognition.", "title": "" }, { "docid": "b57fa292a5357b9cab294f62e63e0e81", "text": "Emoji, a set of pictographic Unicode characters, have seen strong uptake over the last couple of years. 
All common mobile platforms and many desktop systems now support emoji entry, and users have embraced their use. Yet, we currently know very little about what makes for good emoji entry. While soft keyboards for text entry are well optimized, based on language and touch models, no such information exists to guide the design of emoji keyboards. In this article, we investigate of the problem of emoji entry, starting with a study of the current state of the emoji keyboard implementation in Android. To enable moving forward to novel emoji keyboard designs, we then explore a model for emoji similarity that is able to inform such designs. This semantic model is based on data from 21 million collected tweets containing emoji. We compare this model against a solely description-based model of emoji in a crowdsourced study. Our model shows good perfor mance in capturing detailed relationships between emoji.", "title": "" }, { "docid": "326b0cb75e92e216cbac8f3c648b0efc", "text": "Scholarly content is increasingly being discussed, shared and bookmarked online by researchers. Altmetric is a start-­up that focuses on tracking, collecting and measuring this activity on behalf of publishers and here we describe our approach and general philosophy. Over the past year we've seen sharing and discussion activity around approximately 750k articles. The average number of articles shared each day grows by 5-­ 10% a month. We look at examples of how people are interacting with papers online and at how publishers can collect and present the resulting data to deliver real value to their authors and readers. Introduction Scholars are increasingly visible on the web and social media 1. While the majority of their online activities may not be directly related to their research they are nevertheless discussing, sharing and bookmarking scholarly articles online in large numbers. We know this because our job at Altmetric is to track the attention paid to papers online. Founded in January 2011 and with investment from Digital Science we're a London based start-­‐up that identifies, tracks and collects article level metrics on behalf of publishers. Article level metrics are quantitative or qualitative indicators of the impact that a single article has had. Examples of the former would be a count of the number of times the article has been downloaded, or shared on Twitter. Examples of the latter would be media coverage or a blog post from somebody well respected in the field. Tracking the conversations around papers Encouraging audiences to engage with articles online isn't anything new for many publishers. The Public Library of Science (PLoS), BioMed Central, Cell Press and Nature Publishing Group have all tried encouraging users to leave comments on papers with varying degrees of success but the response from users has generally been poor, with only a small fraction of papers ever receiving notable attention 2. A larger proportion of papers are discussed in some depth on academic blogs and a larger still proportion shared on social networks like Twitter, Facebook and Google+. Scholars seem to feel more comfortable sharing or discussing content in more informal environments tied to their personal identity and where", "title": "" }, { "docid": "b91f54fd70da385625d9df127834d8c7", "text": "This commentary was stimulated by Yeping Li’s first editorial (2014) citing one of the journal’s goals as adding multidisciplinary perspectives to current studies of single disciplines comprising the focus of other journals. 
In this commentary, I argue for a greater focus on STEM integration, with a more equitable representation of the four disciplines in studies purporting to advance STEM learning. The STEM acronym is often used in reference to just one of the disciplines, commonly science. Although the integration of STEM disciplines is increasingly advocated in the literature, studies that address multiple disciplines appear scant with mixed findings and inadequate directions for STEM advancement. Perspectives on how discipline integration can be achieved are varied, with reference to multidisciplinary, interdisciplinary, and transdisciplinary approaches adding to the debates. Such approaches include core concepts and skills being taught separately in each discipline but housed within a common theme; the introduction of closely linked concepts and skills from two or more disciplines with the aim of deepening understanding and skills; and the adoption of a transdisciplinary approach, where knowledge and skills from two or more disciplines are applied to real-world problems and projects with the aim of shaping the total learning experience. Research that targets STEM integration is an embryonic field with respect to advancing curriculum development and various student outcomes. For example, we still need more studies on how student learning outcomes arise not only from different forms of STEM integration but also from the particular disciplines that are being integrated. As noted in this commentary, it seems that mathematics learning benefits less than the other disciplines in programs claiming to focus on STEM integration. Factors contributing to this finding warrant more scrutiny. Likewise, learning outcomes for engineering within K-12 integrated STEM programs appear under-researched. This commentary advocates a greater focus on these two disciplines within integrated STEM education research. Drawing on recommendations from the literature, suggestions are offered for addressing the challenges of integrating multiple disciplines faced by the STEM community.", "title": "" }, { "docid": "0dc0b31c4f174a69b5917cdf93a5dd22", "text": "Webpage is becoming a more and more important visual input to us. While there are few studies on saliency in webpage, we in this work make a focused study on how humans deploy their attention when viewing webpages and for the first time propose a computational model that is designed to predict webpage saliency. A dataset is built with 149 webpages and eye tracking data from 11 subjects who free-view the webpages. Inspired by the viewing patterns on webpages, multi-scale feature maps that contain object blob representation and text representation are integrated with explicit face maps and positional bias. We propose to use multiple kernel learning (MKL) to achieve a robust integration of various feature maps. Experimental results show that the proposed model outperforms its counterparts in predicting webpage saliency.", "title": "" }, { "docid": "46980b89e76bc39bf125f63ed9781628", "text": "In this paper, a design of miniaturized 3-way Bagley polygon power divider (BPD) is presented. The design is based on using non-uniform transmission lines (NTLs) in each arm of the divider instead of the conventional uniform ones. For verification purposes, a 3-way BPD is designed, simulated, fabricated, and measured. 
Besides suppressing the fundamental frequency's odd harmonics, a size reduction of almost 30% is achieved.", "title": "" }, { "docid": "dbafd5f4efa7fd372ca5db119624ee56", "text": "In many distribution centers, there is a constant pressure to reduce the order throughput times. One such distribution center is the DC of De Bijenkorf, a retail organization in The Netherlands with 7 subsidiaries and a product assortment of about 300,000 SKUs (stock keeping units). The orders for the subsidiaries are picked manually in this warehouse, which is very labor intensive. Furthermore many shipments have to be finished at about the same time, which leads to peak loads in the picking process. The picking process is therefore a costly operation. In this study we have investigated the possibilities to pick the orders more efficiently, without altering the storage or material handling equipment used or the storage strategies. It appeared to be possible to obtain a reduction between 17 and 34% in walking time, by simply routing the pickers more efficiently. The amount of walking time reduction depends on the routing algorithm used. The largest saving is obtained by using an optimal routing algorithm that has been developed for De Bijenkorf. The main reason for this substantial reduction in walking time, is the change from one-sided picking to two-sided picking in the narrow aisles. It is even possible to obtain a further reduction in walking time by clustering the orders. Small orders can be combined on one pick cart and can be picked in a single route. The combined picking of several orders (constrained by the size of the orders and the cart capacity) leads to a total reduction of about 60% in walking time, using a simple order clustering strategy in combination with a newly developed routing strategy. The reduction in total order picking time and hence the reduction in the number of pickers is about 19%.", "title": "" }, { "docid": "4dba2a9a29f58b55a6b2c3101acf2437", "text": "Clinical and neurobiological findings have reported the involvement of endocannabinoid signaling in the pathophysiology of schizophrenia. This system modulates dopaminergic and glutamatergic neurotransmission that is associated with positive, negative, and cognitive symptoms of schizophrenia. Despite neurotransmitter impairments, increasing evidence points to a role of glial cells in schizophrenia pathobiology. Glial cells encompass three main groups: oligodendrocytes, microglia, and astrocytes. These cells promote several neurobiological functions, such as myelination of axons, metabolic and structural support, and immune response in the central nervous system. Impairments in glial cells lead to disruptions in communication and in the homeostasis of neurons that play role in pathobiology of disorders such as schizophrenia. Therefore, data suggest that glial cells may be a potential pharmacological tool to treat schizophrenia and other brain disorders. In this regard, glial cells express cannabinoid receptors and synthesize endocannabinoids, and cannabinoid drugs affect some functions of these cells that can be implicated in schizophrenia pathobiology. Thus, the aim of this review is to provide data about the glial changes observed in schizophrenia, and how cannabinoids could modulate these alterations.", "title": "" }, { "docid": "c0a67a4d169590fa40dfa9d80768ef09", "text": "Excerpts of technical papers and magazine articles that serve the purposes of conventional abstracts have been created entirely by automatic means. 
In the exploratory research described, the complete text of an article in machine-readable form is scanned by an IBM 704 data-processing machine and analyzed in accordance with a standard program. Statistical information derived from word frequency and distribution is used by the machine to compute a relative measure of significance, first for individual words and then for sentences. Sentences scoring highest in significance are extracted and printed out to become the \"auto-abstract.\" Introduction", "title": "" },
{ "docid": "371c3b72d33c17080968e65f1a24787d", "text": "Bullying and cyberbullying have serious consequences for all those involved, especially the victims, and its prevalence is high throughout all the years of schooling, which emphasizes the importance of prevention. This article describes an intervention proposal, made up of a program (Cyberprogram 2.0 Garaigordobil and Martínez-Valderrey, 2014a) and a videogame (Cooperative Cybereduca 2.0 Garaigordobil and Martínez-Valderrey, 2016b) which aims to prevent and reduce cyberbullying during adolescence and which has been validated experimentally. The proposal has four objectives: (1) To know what bullying and cyberbullying are, to reflect on the people involved in these situations; (2) to become aware of the harm caused by such behaviors and the severe consequences for all involved; (3) to learn guidelines to prevent and deal with these situations: know what to do when one suffers this kind of violence or when observing that someone else is suffering it; and (4) to foster the development of social and emotional factors that inhibit violent behavior (e.g., communication, ethical-moral values, empathy, cooperation…). The proposal is structured around 25 activities to fulfill these goals and it ends with the videogame. The activities are carried out in the classroom, and the online video is the last activity, which represents the end of the intervention program. The videogame (www.cybereduca.com) is a trivial pursuit game with questions and answers related to bullying/cyberbullying. This cybernetic trivial pursuit is organized around a fantasy story, a comic that guides the game. The videogame contains 120 questions about 5 topics: cyberphenomena, computer technology and safety, cybersexuality, consequences of bullying/cyberbullying, and coping with bullying/cyberbullying. To evaluate the effectiveness of the intervention, a quasi-experimental design, with repeated pretest-posttest measures and control groups, was used. During the pretest and posttest stages, 8 assessment instruments were administered. The experimental group randomly received the intervention proposal, which consisted of one weekly 1-h session during the entire school year. The results obtained with the analyses of variance of the data collected before and after the intervention in the experimental and control groups showed that the proposal significantly promoted the following aspects in the experimental group: (1) a decrease in face-to-face bullying and cyberbullying behaviors, in different types of school violence, premeditated and impulsive aggressiveness, and in the use of aggressive conflict-resolution strategies; and (2) an increase of positive social behaviors, self-esteem, cooperative conflict-resolution strategies, and the capacity for empathy. The results provide empirical evidence for the proposal.
The importance of implementing programs to prevent bullying in all its forms, from the beginning of schooling and throughout formal education, is discussed.", "title": "" }, { "docid": "386428ddfca099e7d1d2cbb88085ee83", "text": "We tested the predictions of 2 explanations for retrieval-based learning; while the elaborative retrieval hypothesis assumes that the retrieval of studied information promotes the generation of semantically related information, which aids in later retrieval (Carpenter, 2009), the episodic context account proposed by Karpicke, Lehman, and Aue (in press) assumes that retrieval alters the representation of episodic context and improves one's ability to guide memory search on future tests. Subjects studied multiple word lists and either recalled each list (retrieval practice), did a math task (control), or generated associates for each word (elaboration) after each list. After studying the last list, all subjects recalled the list and, after a 5-min delay, recalled all lists. Analyses of correct recall, intrusions, response times, and temporal clustering dissociate retrieval practice from elaboration, supporting the episodic context account.", "title": "" }, { "docid": "5be29fb7d5d24a2d81d3da104cad9b05", "text": "This paper describes the approach we take to the analysis of social media, combining opinion mining from text and multimedia (images, videos, etc), and centred on entity and event recognition. We examine a particular use case, which is to help archivists select material for inclusion in an archive of social media for preserving community memories, moving towards structured preservation around semantic categories. The textual approach we take is rule-based and builds on a number of sub-components, taking into account issues inherent in social media such as noisy ungrammatical text, use of swear words, sarcasm etc. The analysis of multimedia content complements this work in order to help resolve ambiguity and to provide further contextual information. We provide two main innovations in this work: first, the novel combination of text and multimedia opinion mining tools; and second, the adaptation of NLP tools for opinion mining specific to the problems of social media.", "title": "" }, { "docid": "1cbbc5af1327338283ca75e0bed7d53c", "text": "Microscopic examination revealed polymorphic cells with abundant cytoplasm and large nuclei within the acanthotic epidermis (Figure 3). There were aggregated melanin granules in the epidermis, as well as a subepidermal lymphocytic infiltrate. The atypical cells were positive for CK7 (Figure 4). A few scattered cells were positive with the Melan-A stain (Figure 5). Pigmented lesion of the left nipple in a 49-year-old woman Case for Diagnosis", "title": "" } ]
scidocsrr
532c1004a293c214df23a335846b4cd7
Behavioral, Neural, and Computational Principles of Bodily Self-Consciousness
[ { "docid": "67bc52adf7c42c7a0ef6178ce4990e57", "text": "Recognizing oneself as the owner of a body and the agent of actions requires specific mechanisms which have been elucidated only recently. One of these mechanisms is the monitoring of signals arising from bodily movements, i.e. the central signals which contribute to the generation of the movements and the sensory signals which arise from their execution. The congruence between these two sets of signals is a strong index for determining the experiences of ownership and agency, which are the main constituents of the experience of being an independent self. This mechanism, however, does not account for the frequent cases where an intention is generated but the corresponding action is not executed. In this paper, it is postulated that such covert actions are internally simulated by activating specific cortical networks or representations of the intended actions. This process of action simulation is also extended to the observation and the recognition of actions performed or intended by other agents. The problem of disentangling representations that pertain to self-intended actions from those that pertain to actions executed or intended by others is a critical one for attributing actions to their respective agents. Failure to recognize one's own actions and misattribution of actions may result from pathological conditions which alter the readability of these representations.", "title": "" } ]
[ { "docid": "80aeb12d50a77ad455e5786cf75e901f", "text": "New over-the-air (OTA) measurement technology is wanted for quantitative testing of modern wireless devices for use in multipath. We show that the reverberation chamber emulates a rich isotropic multipath (RIMP), making it an extreme reference environment for testing of wireless devices. This thereby complements testing in anechoic chambers representing the opposite extreme reference environment: pure line-of-sight (LOS). Antenna diversity gain was defined for RIMP environments based on improved fading performance. This paper finds this RIMP-diversity gain also valid as a metric of the cumulative improvement of the 1% worst users randomly distributed in the RIMP environment. The paper argues that LOS in modern wireless systems is random due to randomness of the orientations of the users and their devices. This leads to the definition of cumulative LOS-diversity gain of the 1% worst users in random LOS. This is generally not equal to the RIMP-diversity gain. The paper overviews the research on reverberation chambers for testing of wireless devices in RIMP environments. Finally, it presents a simple theory that can accurately model measured throughput for a long-term evolution (LTE) system with orthogonal frequency-division multiplexing (OFDM) and multiple-input-multiple-output (MIMO), the effects of which can clearly be seen and depend on the controllable time delay spread in the chamber.", "title": "" }, { "docid": "37efaf5cbd7fb400b713db6c7c980d76", "text": "Social media users who post bullying related tweets may later experience regret, potentially causing them to delete their posts. In this paper, we construct a corpus of bullying tweets and periodically check the existence of each tweet in order to infer if and when it becomes deleted. We then conduct exploratory analysis in order to isolate factors associated with deleted posts. Finally, we propose the construction of a regrettable posts predictor to warn users if a tweet might cause regret.", "title": "" }, { "docid": "00bbf015e2ea47cd641ff0f903d8b899", "text": "For a Marchand balun, the output imbalance due to the inevitable physical separation between the balance ports is a problem. In this paper, a general compensation method to cope with this imbalance issue is proposed with rigorous analysis and design formulas. The compensation relies on two intentionally shortened coupling sections and a pair of short-circuited transmission lines as the terminations. The proposed method is able to deal with a long connecting segment between the balance ports as long as the coupling sections are tight enough at the desired frequencies. The theory and formulation are first treated using transient analysis with multiple reflections/couplings between the networks. Design graphs are summarized and three examples are fabricated, validated, and discussed to demonstrate the design flexibility the proposed method provides.", "title": "" }, { "docid": "cdf4f5074ec86db3948df3497f9896ec", "text": "This paper investigates algorithms to automatically adapt the learning rate of neural networks (NNs). Starting with stochastic gradient descent, a large variety of learning methods has been proposed for the NN setting. However, these methods are usually sensitive to the initial learning rate which has to be chosen by the experimenter. We investigate several features and show how an adaptive controller can adjust the learning rate without prior knowledge of the learning problem at hand. 
Introduction Due to the recent successes of Neural Networks for tasks such as image classification (Krizhevsky, Sutskever, and Hinton 2012) and speech recognition (Hinton et al. 2012), the underlying gradient descent methods used for training have gained a renewed interest by the research community. Adding to the well-known stochastic gradient descent and RMSprop methods (Tieleman and Hinton 2012), several new gradient-based methods such as Adagrad (Duchi, Hazan, and Singer 2011) or Adadelta (Zeiler 2012) have been proposed. However, most of the proposed methods rely heavily on a good choice of an initial learning rate. Compounding this issue is the fact that the range of good learning rates for one problem is often small compared to the range of good learning rates across different problems, i.e., even an experienced experimenter often has to manually search for good problem-specific learning rates. A tempting alternative to manually searching for a good learning rate would be to learn a control policy that automatically adjusts the learning rate without further intervention using, for example, reinforcement learning techniques (Sutton and Barto 1998). Unfortunately, the success of learning such a controller from data is likely to depend heavily on the features made available to the learning algorithm. A wide array of reinforcement learning literature has shown the importance of good features in tasks ranging from Tetris (Thiery and Scherrer 2009) to haptile object identification (Kroemer, Lampert, and Peters 2011). Thus, the first step towards applying RL methods to control learning rates is to find good features. Subsequently, the main contributions of this paper are: • Identifying informative features for the automatic control of the learning rate. • Proposing a learning setup for a controller that automatically adapts the step size of NN training algorithms. • Showing that the resulting controller generalizes across different tasks and architectures. Together, these contributions enable robust and efficient training of NNs without the need of manual step size tuning. Method The goal of this paper is to develop an adaptive controller for the learning rate used in training algorithms such as Stochastic Gradient Descent (SGD) or RMSprop (Tieleman and Hinton 2012). We start with a general statement of the problem we are aiming to solve. Problem Statement We are interested in finding the minimizer ω* = arg min_ω F(X; ω) (1), where in our case ω represents the weight vector of the NN and X = {x_1, ..., x_N} is the set of N training examples (e.g., images and labels). The function F(·) sums over the function values induced by the individual inputs such that", "title": "" },
{ "docid": "a04302721f62c1af3b9be630524f03ab", "text": "Hyperspectral image processing has been a very dynamic area in remote sensing and other applications in recent years. Hyperspectral images provide ample spectral information to identify and distinguish spectrally similar materials for more accurate and detailed information extraction. A wide range of advanced classification techniques is available based on spectral information and spatial information. To improve classification accuracy it is essential to identify and reduce uncertainties in the image processing chain. This paper presents the current practices, problems and prospects of hyperspectral image classification.
In addition, some important issues affecting classification performance are discussed.", "title": "" }, { "docid": "6d8a413767d9fab8ef3ca22daaa0e921", "text": "Query-oriented summarization addresses the problem of information overload and help people get the main ideas within a short time. Summaries are composed by sentences. So, the basic idea of composing a salient summary is to construct quality sentences both for user specific queries and multiple documents. Sentence embedding has been shown effective in summarization tasks. However, these methods lack of the latent topic structure of contents. Hence, the summary lies only on vector space can hardly capture multi-topical content. In this paper, our proposed model incorporates the topical aspects and continuous vector representations, which jointly learns semantic rich representations encoded by vectors. Then, leveraged by topic filtering and embedding ranking model, the summarization can select desirable salient sentences. Experiments demonstrate outstanding performance of our proposed model from the perspectives of prominent topics and semantic coherence.", "title": "" }, { "docid": "700191eaaaf0bdd293fc3bbd24467a32", "text": "SMART (Semantic web information Management with automated Reasoning Tool) is an open-source project, which aims to provide intuitive tools for life scientists for represent, integrate, manage and query heterogeneous and distributed biological knowledge. SMART was designed with interoperability and extensibility in mind and uses AJAX, SVG and JSF technologies, RDF, OWL, SPARQL semantic web languages, triple stores (i.e. Jena) and DL reasoners (i.e. Pellet) for the automated reasoning. Features include semantic query composition and validation using DL reasoners, a graphical representation of the query, a mapping of DL queries to SPARQL, and the retrieval of pre-computed inferences from an RDF triple store. With a use case scenario, we illustrate how a biological scientist can intuitively query the yeast knowledge base and navigate the results. Continued development of this web-based resource for the biological semantic web will enable new information retrieval opportunities for the life sciences.", "title": "" }, { "docid": "cfeb97a848766269c2088d8191206cc8", "text": "We design a class of submodular functions meant for document summarization tasks. These functions each combine two terms, one which encourages the summary to be representative of the corpus, and the other which positively rewards diversity. Critically, our functions are monotone nondecreasing and submodular, which means that an efficient scalable greedy optimization scheme has a constant factor guarantee of optimality. When evaluated on DUC 2004-2007 corpora, we obtain better than existing state-of-art results in both generic and query-focused document summarization. Lastly, we show that several well-established methods for document summarization correspond, in fact, to submodular function optimization, adding further evidence that submodular functions are a natural fit for document summarization.", "title": "" }, { "docid": "affb15022a558f44e2117f08dd826bbe", "text": "We present a novel approach to free-viewpoint video. Our main contribution is the formulation of a hybrid approach between image morphing and depth-image based rendering. When rendering the scene from novel viewpoints, we use both dense pixel correspondences between image pairs as well as an underlying, view-dependent geometrical model. 
Our novel reconstruction scheme iteratively refines geometric and correspondence information. By combining the strengths of both depth and correspondence estimation, our approach enables free-viewpoint video also for challenging scenes as well as for recordings that may violate typical constraints in multiview reconstruction. For example, our method is robust against inaccurate camera calibration, asynchronous capture, and imprecise depth reconstruction. Rendering results for different scenes and applications demonstrate the versatility and robustness of our approach.", "title": "" }, { "docid": "0e6aef5fe905292db11115b4715f4f7a", "text": "Cytoreductive surgery and hyperthermic intraperitoneal chemotherapy (HIPEC) are maximally effective in early-stage colorectal cancer peritoneal metastases (CRC-PM); however, the use of HIPEC to treat subclinical-stage PM remains controversial. This prospective two-center study assessed adjuvant HIPEC in CRC patients at high risk for metachronous PM ( www.clinicaltrials.gov NCT02575859). During 2006–2012, a total of 22 patients without systemic metastases were prospectively enrolled to receive HIPEC simultaneously with curative surgery, plus adjuvant systemic chemotherapy (oxaliplatin/irinotecan-containing ± biologics), based on primary tumor-associated criteria: resected synchronous ovarian (n = 2) or minimal peritoneal (n = 6) metastases, primaries directly invading other organs (n = 4) or penetrating the visceral peritoneum (n = 10). A control group retrospectively included 44 matched (1:2) patients undergoing standard treatments and no HIPEC during the same period. The cumulative PM incidence was calculated in a competing-risks framework. Patient characteristics were comparable for all groups. Median follow-up was 65.2 months [95 % confidence interval (CI) 50.9–79.5] in the HIPEC group and 34.5 months (95 % CI 21.1–47.9) in the control group. The 5-year cumulative PM incidence was 9.3 % in the HIPEC group and 42.5 % in the control group (p = 0.004). Kaplan–Meier estimated 5-year overall survival (OS) was 81.3 % in the HIPEC group versus 70.0 % in the control group (p = 0.047). No operative death occurred. Grade 3–4 [National Cancer Institute Common Terminology Criteria for Adverse Events (NCI–CTCAE) version 4] morbidity rates were 18.2 % in the HIPEC group and 25 % in controls (p = 0.75). At multivariate analysis, HIPEC correlated to lower PM cumulative incidence [hazard ratio (HR) 0.04, 95 % CI 0.01–0.31; p = 0.002], and better OS (HR 0.25, 95 % CI 0.07–0.89; p = 0.039) and progression-free survival (HR 0.31, 95 % CI 0.11–0.85; p = 0.028). Adjuvant HIPEC may benefit CRC patients at high-risk for peritoneal failure. These results warrant confirmation in phase III trials.", "title": "" }, { "docid": "accb879062cf9c2e6fa3fb636f33b333", "text": "The CLEF eRisk 2018 challenge focuses on early detection of signs of depression or anorexia using posts or comments over social media. The eRisk lab has organized two tasks this year and released two different corpora for the individual tasks. The corpora are developed using the posts and comments over Reddit, a popular social media. The machine learning group at Ramakrishna Mission Vivekananda Educational and Research Institute (RKMVERI), India has participated in this challenge and individually submitted five results to accomplish the objectives of these two tasks. The paper presents different machine learning techniques and analyze their performance for early risk prediction of anorexia or depression. 
The techniques involve various classifiers and feature engineering schemes. The simple bag of words model has been used to perform ada boost, random forest, logistic regression and support vector machine classifiers to identify documents related to anorexia or depression in the individual corpora. We have also extracted the terms related to anorexia or depression using metamap, a tool to extract biomedical concepts. Therefore, the classifiers have been implemented using bag of words features and metamap features individually and subsequently combining these features. The performance of the recurrent neural network is also reported using GloVe and Fasttext word embeddings. GloVe and Fasttext are pre-trained word vectors developed using specific corpora, e.g., Wikipedia. The experimental analysis on the training set shows that the ada boost classifier using the bag of words model outperforms the other methods for task1 and it achieves the best score on the test set in terms of precision over all the runs in the challenge. The support vector machine classifier using the bag of words model outperforms the other methods in terms of F-measure for task2. The results on the test set submitted to the challenge suggest that this framework achieves reasonably good performance.", "title": "" },
{ "docid": "a9e3a274d732f57efc0aa093e24653f8", "text": "This work presents our recent progress in the development of an Si wire waveguiding system for microphotonics devices. The Si wire waveguide promises size reduction and high-density integration of optical circuits due to its strong light confinement. However, large connection and propagation losses had been serious problems. We solved these problems by using a spot-size converter and improving the microfabrication technology. As a result, propagation losses as low as 2.8 dB/cm for a 400×200 nm waveguide and a coupling loss of 0.5 dB per connection were obtained. As we have the technologies for the fabrication of complex, practical optical devices using Si wire waveguides, we used them to make microphotonics devices, such as a ring resonator and lattice filter. The devices we made exhibit excellent characteristics because of the microfabrication with the precision of a few nanometers. We have also demonstrated that Si wire waveguides have great potential for use in nonlinear optical devices.", "title": "" },
{ "docid": "7a86db2874d602e768d0641bb18ae0c3", "text": "Most work in reinforcement learning (RL) is based on discounted techniques, such as Q learning, where long-term rewards are geometrically attenuated based on the delay in their occurrence. Schwartz recently proposed an undiscounted RL technique called R learning that optimizes average reward, and argued that it was a better metric than the discounted one optimized by Q learning. In this paper we compare R learning with Q learning on a simulated robot box-pushing task. We compare these two techniques across three different exploration strategies: two of them undirected, Boltzmann and semi-uniform, and one recency-based directed strategy. Our results show that Q learning performs better than R learning, even when both are evaluated using the same undiscounted performance measure. Furthermore, R learning appears to be very sensitive to the choice of exploration strategy. In particular, a surprising result is that R learning's performance noticeably deteriorates under Boltzmann exploration.
We identify precisely a limit cycle situation that causes R learning's performance to deteriorate when combined with Boltzmann exploration, and show where such limit cycles arise in our robot task. However, R learning performs much better (although not as well as Q learning) when combined with semi-uniform and recency-based exploration. In this paper, we also argue for using medians over means as a better distribution-free estimator of average performance, and describe a simple non-parametric significance test for comparing learning data from two RL techniques.", "title": "" },
{ "docid": "231d8ef95d02889d70000d70d8743004", "text": "The last decade witnessed a lot of research in the field of sentiment analysis. Understanding the attitude and the emotions that people express in written text proved to be really important and helpful in sociology, political science, psychology, market research, and, of course, artificial intelligence. This paper demonstrates a rule-based approach to clause-level sentiment analysis of reviews in Ukrainian. The general architecture of the implemented sentiment analysis system is presented, the current stage of research is described and further work is explained.", "title": "" },
{ "docid": "aaa0e09d31dbc6cdf74c640b03a2fbbe", "text": "There has been a gigantic shift from a product-based economy to one based on services, specifically digital services. From every indication it is likely to be more than a passing fad and the changes these emerging digital services represent will continue to transform commerce and have yet to reach market saturation. Digital services are being designed for and offered to users, yet very little is known about the design process that goes behind these developments. Is there a science behind designing digital services? By examining 12 leading digital services, we have developed a design taxonomy to be able to classify and contrast digital services. What emerged in the taxonomy were two broad dimensions: a set of fundamental design objectives and a set of fundamental service provider objectives. This paper concludes with an application of the proposed taxonomy to three leading digital services. We hope that the proposed taxonomy will be useful in understanding the science behind the design of digital services. European Journal of Information Systems (2008) 17, 505–517. doi:10.1057/ejis.2008.38", "title": "" },
{ "docid": "5455a8fd6e6be03e3a4163665425247d", "text": "The change in spring phenology is recognized to exert a major influence on carbon balance dynamics in temperate ecosystems. Over the past several decades, several studies focused on shifts in spring phenology; however, large uncertainties still exist, and one understudied source could be the method implemented in retrieving satellite-derived spring phenology. To account for this potential uncertainty, we conducted a multimethod investigation to quantify changes in vegetation green-up date from 1982 to 2010 over temperate China, and to characterize climatic controls on spring phenology. Over temperate China, the five methods estimated that the vegetation green-up onset date advanced, on average, at a rate of 1.3 ± 0.6 days per decade (ranging from 0.4 to 1.9 days per decade) over the last 29 years.
Moreover, the sign of the trends in vegetation green-up date derived from the five methods were broadly consistent spatially and for different vegetation types, but with large differences in the magnitude of the trend. The large intermethod variance was notably observed in arid and semiarid vegetation types. Our results also showed that change in vegetation green-up date is more closely correlated with temperature than with precipitation. However, the temperature sensitivity of spring vegetation green-up date became higher as precipitation increased, implying that precipitation is an important regulator of the response of vegetation spring phenology to change in temperature. This intricate linkage between spring phenology and precipitation must be taken into account in current phenological models which are mostly driven by temperature.", "title": "" }, { "docid": "00337220cd594074fa303d727071a2ff", "text": "INTRODUCTION\nIn the present era, thesauri as tools in indexing play an effective role in integrating retrieval preventing fragmentation as well as a multiplicity of terminologies and also in providing information content of documents.\n\n\nGOALS\nThis study aimed to investigate the keywords of articles indexed in IranMedex in terms of origin, structure and indexing situation and their Compliance with the Persian Medical Thesaurus and Medical Subject Headings (MeSH).\n\n\nMATERIALS AND METHODS\nThis study is an applied research, and a survey has been conducted. Statistical population includes 32,850 Persian articles which are indexed in the IranMedex during the years 1385-1391. 379 cases were selected as sample of the study. Data collection was done using a checklist. In analyzing the findings, the SPSS Software were used.\n\n\nFINDINGS\nAlthough there was no significant difference in terms of indexing origin between the proportion of different types of the Persian and English keywords of articles indexed in the IranMedex, the compliance rates of the Persian and English keywords with the Persian medical thesaurus and MeSH were different in different years. In the meantime, the structure of keywords is leaning more towards phrase structure, and a single word structure and the majority of keywords are selected from the titles and abstracts.\n\n\nCONCLUSION\nThe authors' familiarity with the thesauri and controlled tools causes homogeneity in assigning keywords and also provides more precise, faster, and easier retrieval of the keywords. It's suggested that a mixture of natural and control languages to be used in this database in order to reach more comprehensive results.", "title": "" }, { "docid": "449270c00ce54ba3772a7af9955f5231", "text": "Demand for high-speed DRAM in graphics application pushes a single-ended I/O signaling to operate up to 6Gb/s. To maintain the speed increase, the GDDR5 specification shifts from GDDR3/4 with respect to forwarded clocking, data training for write and read de-skewing, clock training, channel-error detection, bank group and data coding. This work tackles challenges in GDDR5 such as clock jitter and signal integrity.", "title": "" }, { "docid": "b7d8c0c59bb79db0a8ea671e99ee131e", "text": "This paper describes a design method to secure encryption algorithms against Differential Power Analysis at the logic level. The method employs logic gates with a power consumption, which is independent of the data signals, and therefore the technique removes the foundation for DPA. 
In a design experiment, a fundamental component of the DES algorithm has been implemented. Detailed transistor level simulations show a perfect security whenever the layout parasitics are not taken into account.", "title": "" }, { "docid": "bc67b47ecad41e15d17c963c11895ab3", "text": "Access Block and Emergency Department (ED) Overcrowding are well defined phenomena that have been described as the most serious issue confronting EDs. This paper provides a summary of the current evidence on the subject from around the world. In addition to the following evidence, one must always remember that this problem is associated with a large amount of human suffering that is preventable. The review has found that Australia is playing a key role in this field. It is important to understand what has been done to reduce or prevent deleterious consequences amongst patients who suffer extended delays in ED awaiting admission to hospital. This document concludes with a summary of the key points of evidence and solutions. The key points are: 1. The review reports on 27 factors that have been described and documented in the literature as associated with access block and overcrowding. These include health system, demographic and clinical factors. They are having a major impact on the primary healthcare system, patients, their families, health professionals and the whole community. 2. It has been estimated, by different authors and methods, that there is a 20%-30% excess mortality rate every year that is attributable to access block and ED overcrowding in Australia. This equates to approximately 1,500 deaths (at 2003 levels of access block) per year, which is similar to the road toll. 3. There is clear evidence that the main cause of access block and ED overcrowding is a combination of major increases in emergency admissions and ED presentations with almost no increase in the capacity of hospitals to cope with the demand. Between 2002 and 2007 the rate of available beds in Australia was reduced from 1998-99 levels from 2.65 beds per 1,000 population to 2.4 in 2002, and has since remained steady between 2.5-2.6 per 1,000 population. In the same period, the number of ED presentations has increased over 38%, from 4.1 million to 6.7 million. When compared with 1998-99 rates, the number of available beds in 2006-07 is very similar (2.65 vs. 2.60 beds per 1,000) but the number of ED presentations has almost doubled from 3.5 to 6.7 million. 4. The most vulnerable individuals affected by access block and ED overcrowding are those who due to their medical conditions require unplanned admissions to hospital. The most common groups include: the elderly, particularly those with chronic and complex conditions; people arriving by ambulance; people …", "title": "" } ]
scidocsrr